NASA Astrophysics Data System (ADS)
Astuti, Ani Budi; Iriawan, Nur; Irhamah; Kuswanto, Heri
2017-12-01
Bayesian mixture modeling requires identifying the most appropriate number of mixture components so that the resulting mixture model fits the data, following a data-driven concept. Reversible Jump Markov Chain Monte Carlo (RJMCMC) combines the reversible jump (RJ) concept with Markov Chain Monte Carlo (MCMC) and has been used by several researchers to solve the problem of identifying the number of mixture components when that number is not known with certainty. In its application, RJMCMC uses the birth/death and split-merge concepts with six types of moves: w updating, θ updating, z updating, hyperparameter β updating, split/merge of components, and birth/death of empty components. The RJMCMC algorithm needs to be adapted to the case under observation. The purpose of this study is to assess the performance of an RJMCMC algorithm developed to identify the number of mixture components, when that number is not known with certainty, in Bayesian mixture modeling of microarray data from Indonesia. The results show that the developed RJMCMC algorithm is able to properly identify the number of mixture components in a Bayesian normal mixture model for Indonesian microarray data in which the number of components is not known in advance.
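As a minimal illustration of the birth/death idea behind RJMCMC, the sketch below runs a toy trans-dimensional sampler for the number of components in an equal-weight normal mixture. To keep it short and provably correct, the component means live on a fixed grid, so a birth/death move simply toggles a component on or off and no reversible-jump Jacobian term is needed; the full algorithm described in the abstract adds split-merge moves and continuous parameter updates. All data, priors, and parameter values here are illustrative.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

# Toy data: two well-separated normal components with unit variance.
data = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])

# A "model" is the set of active components chosen from a fixed grid of means
# (equal weights, unit variances), so birth/death moves change the dimension
# without requiring a Jacobian correction.
grid = np.linspace(-6, 6, 13)

def log_post(active):
    means = grid[sorted(active)]
    dens = np.exp(-0.5 * (data[:, None] - means[None, :]) ** 2) / np.sqrt(2 * np.pi)
    log_lik = np.log(dens.mean(axis=1)).sum()        # equal-weight mixture likelihood
    k = len(active)
    log_prior = k * np.log(2.0) - lgamma(k + 1)      # Poisson(2) prior on k
    return log_lik + log_prior

def step(active):
    i = int(rng.integers(len(grid)))
    prop = set(active) ^ {i}                          # birth or death of component i
    if not prop:                                      # keep at least one component
        return active
    accept = np.log(rng.random()) < log_post(prop) - log_post(active)
    return prop if accept else active

active, counts = {6}, np.zeros(len(grid) + 1)
for sweep in range(2000):
    active = step(active)
    counts[len(active)] += 1
print("posterior over number of components:", np.round(counts / counts.sum(), 3))
```

Because the proposal (toggle a uniformly chosen grid index) is symmetric, plain Metropolis acceptance applies; the data-driven concentration of posterior mass at k = 2 is the behavior exploited for the microarray data.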
GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models
Mukherjee, Chiranjit; Rodriguez, Abel
2016-01-01
Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphics processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate-dimensional settings. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm both in computing time and in the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348
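The shotgun pattern is easiest to see outside of graphical models: score every model in the one-move neighborhood of the current model (the step that parallelizes well on GPUs), then jump to a neighbor with probability proportional to its score. The sketch below applies that pattern to a toy linear-regression variable-selection problem with a BIC score; the paper's actual search over decomposable Gaussian graphical models inside a Dirichlet process mixture is not reproduced here, and all names and data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 12
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3, 7]] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

def score(model):
    """Negative BIC of an OLS fit (higher is better)."""
    design = np.column_stack([np.ones(n)] + [X[:, j] for j in sorted(model)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ coef) ** 2)
    return -(n * np.log(rss / n) + design.shape[1] * np.log(n))

def neighbours(model):
    """Every model reachable by adding or deleting one variable."""
    return [frozenset(set(model) ^ {j}) for j in range(p)]

model = frozenset()
best = (score(model), model)
for it in range(100):
    nbrs = neighbours(model)
    s = np.array([score(m) for m in nbrs])   # the embarrassingly parallel step
    w = np.exp(s - s.max())
    model = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
    sc = score(model)
    if sc > best[0]:
        best = (sc, model)
print("mode found:", sorted(best[1]))
```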
ERIC Educational Resources Information Center
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
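A classical one-dimensional analogue makes the paper's decomposition concrete: write the partition function as an analytically known harmonic normalization times a Monte Carlo estimator of the residual Boltzmann factor, and observe how the stochastic error depends on the harmonic reference chosen for sampling. This is only a sketch of the decomposition, not the nonadiabatic path integral itself; the potential and frequencies below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

# Classical analogue of Z = Z_h * <exp(-beta*(V - V_h))>_h: sample from a
# harmonic reference and Monte Carlo estimate the correction factor.
def V(x):                        # full potential: harmonic part + anharmonic coupling
    return 0.5 * x ** 2 + 0.1 * x ** 4

for omega in [0.5, 1.0, 2.0]:    # stiffness of the sampling (reference) oscillator
    Vh = lambda x: 0.5 * omega ** 2 * x ** 2
    Zh = np.sqrt(2 * np.pi / (beta * omega ** 2))       # analytic normalization
    x = rng.normal(0.0, 1.0 / np.sqrt(beta * omega ** 2), 200_000)
    w = np.exp(-beta * (V(x) - Vh(x)))                  # MC estimator of Z / Zh
    err = Zh * w.std() / np.sqrt(x.size)
    print(f"omega={omega:3.1f}: Z = {Zh * w.mean():.5f} +/- {err:.5f}")
```

Running this shows the same Z recovered for every reference, but with a stochastic error that depends strongly on the sampling distribution, which is the effect the abstract describes.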
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Theory and simulation of electrolyte mixtures
NASA Astrophysics Data System (ADS)
Hribar-Lee, B.; Vlachy, V.; Bhuiyan, L. B.; Outhwaite, C. W.; Molero, M.
Monte Carlo simulation and theoretical results on some aspects of the thermodynamics of mixtures of electrolytes with a common species are presented. Both charge-symmetric mixtures, where ions differ only in size, and charge-asymmetric but size-symmetric mixtures at ionic strengths ranging generally from I = 10⁻⁴ to 1.0 M, and in a few cases up to I = M, are examined. The theoretical methods explored are: (i) the symmetric Poisson-Boltzmann theory, (ii) the modified Poisson-Boltzmann theory and (iii) the hypernetted-chain integral equation. The first two electrolyte mixing coefficients w₀ and w₁ of the various mixtures are calculated from an accurate determination of their osmotic pressure data. The theories are seen to be consistent among themselves, and with certain limiting laws in the literature, in predicting the trends of the mixing coefficients with respect to ionic strength. Some selected relevant experimental data have been analysed and compared with the theoretical and simulation trends. In addition, the mean activity coefficients for a model mimicking the mixture of KCl and KF electrolytes are calculated and hence the Harned coefficients obtained for this system. These calculations are compared with the experimental data and Monte Carlo results available in the literature. The theoretically predicted Harned coefficients are in good agreement with the simulation results for the model KCl-KF mixture.
Communication: Modeling electrolyte mixtures with concentration dependent dielectric permittivity
NASA Astrophysics Data System (ADS)
Chen, Hsieh; Panagiotopoulos, Athanassios Z.
2018-01-01
We report a new implicit-solvent simulation model for electrolyte mixtures based on the concept of concentration dependent dielectric permittivity. A combining rule is found to predict the dielectric permittivity of electrolyte mixtures based on the experimentally measured dielectric permittivity for pure electrolytes as well as the mole fractions of the electrolytes in mixtures. Using grand canonical Monte Carlo simulations, we demonstrate that this approach allows us to accurately reproduce the mean ionic activity coefficients of NaCl in NaCl-CaCl2 mixtures at ionic strengths up to I = 3M. These results are important for thermodynamic studies of geologically relevant brines and physiological fluids.
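The abstract does not spell out the combining rule, so the sketch below only illustrates the general shape of such a rule: a mole-fraction-weighted combination of the concentration-dependent permittivities of the pure electrolyte solutions. Both the linear functional form and the numbers are assumptions for illustration, not the paper's actual rule.

```python
def eps_mixture(mole_fractions, eps_pure):
    """Hypothetical linear combining rule: eps_mix = sum_i x_i * eps_i, where
    eps_i is the permittivity of the pure electrolyte solution at the mixture's
    total concentration. The published rule may differ in form."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return sum(x * e for x, e in zip(mole_fractions, eps_pure))

# Illustrative numbers for a NaCl-CaCl2 brine (not experimental values):
print(eps_mixture([0.7, 0.3], [65.0, 55.0]))   # -> 62.0
```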
Binary gas mixture adsorption-induced deformation of microporous carbons by Monte Carlo simulation.
Cornette, Valeria; de Oliveira, J C Alexandre; Yelpo, Víctor; Azevedo, Diana; López, Raúl H
2018-07-15
Considering the thermodynamic grand potential for more than one adsorbate in an isothermal system, we generalize the model of adsorption-induced deformation of microporous carbons developed by Kowalczyk et al. [1]. We report a comprehensive study of the effects of adsorption-induced deformation of carbonaceous amorphous porous materials due to adsorption of carbon dioxide, methane, and their mixtures. The adsorption process is simulated using the Grand Canonical Monte Carlo (GCMC) method, and the calculations are then used to analyze experimental isotherms for the pure gases and mixtures with different molar fractions in the gas phase. The pore size distribution determined from an experimental isotherm is used for predicting the adsorption-induced deformation of both pure gases and their mixtures. The volumetric strain (ε) predictions from the GCMC method are compared against relevant experiments, with good agreement found in the cases of pure gases.
Developing model asphalt systems using molecular simulation : final model.
DOT National Transportation Integrated Search
2009-09-01
Computer based molecular simulations have been used towards developing simple mixture compositions whose physical properties resemble those of real asphalts. First, Monte Carlo simulations with the OPLS all-atom force field were used to predict t...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20,000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Monte Carlo study of four dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2012-01-01
A multithreaded Monte Carlo code was used to study the properties of binary mixtures of hard hyperspheres in four dimensions. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, 0.6, and 0.8. Many total densities of the binary mixtures were investigated. The pair correlation functions and the equations of state were determined and compared with other simulation results and theoretical predictions. At lower diameter ratios the pair correlation functions of the mixture agree with the pair correlation function of a one component fluid at an appropriately scaled density. The theoretical results for the equation of state compare well to the Monte Carlo calculations for all but the highest densities studied.
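A minimal sketch of the core move in such a study: a single-particle Metropolis displacement for a binary hard-hypersphere mixture in four dimensions with periodic boundaries, where acceptance reduces to an overlap test. Box size, particle counts, and step size are arbitrary; the original study's multithreading is omitted, and a production code would also tune the step size for sampling efficiency.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

L, ratio = 6.0, 0.5                            # box edge; small/large diameter ratio
diam = np.array([ratio] * 20 + [1.0] * 20)     # 20 small + 20 large hyperspheres
# Start from an overlap-free 4D lattice (3^4 = 81 sites >= 40 particles).
sites = np.array(list(itertools.product(range(3), repeat=4))) * (L / 3) + 0.5
pos = sites[:len(diam)].astype(float)

def overlaps(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)                   # minimum-image convention
    r2 = (d * d).sum(axis=1)
    r2[i] = np.inf                             # ignore self-distance
    contact = 0.5 * (diam + diam[i])           # additive contact distances
    return np.any(r2 < contact ** 2)

accepted, sweeps = 0, 200
for sweep in range(sweeps):
    for i in range(len(diam)):
        trial = (pos[i] + rng.uniform(-0.15, 0.15, 4)) % L
        if not overlaps(i, trial):             # hard spheres: accept iff no overlap
            pos[i] = trial
            accepted += 1
print("acceptance rate:", accepted / (sweeps * len(diam)))
```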
NASA Astrophysics Data System (ADS)
Koger, B.; Kirkby, C.
2016-03-01
Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.
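A sketch of the spectrum-weighting step described at the end of the abstract: combining monoenergetic dose ratios into a single ratio for an arbitrary source spectrum. The numbers are invented, and weighting each energy bin by fluence times energy is a simplifying assumption (a proper weighting would use the dose actually deposited per bin).

```python
import numpy as np

# Hypothetical monoenergetic dose ratios (mixture dose -> mean dose-to-tissue)
# and a hypothetical relative source spectrum.
E = np.array([0.05, 0.1, 0.5, 1.0, 6.0])         # MeV, illustrative bins
ratio = np.array([0.70, 0.80, 0.98, 1.00, 1.03])  # monoenergetic dose ratios
fluence = np.array([0.1, 0.3, 0.3, 0.2, 0.1])     # relative spectrum, sums to 1

w = fluence * E                                   # assumed dose-weighting per bin
print("spectrum-averaged dose ratio:", (w * ratio).sum() / w.sum())
```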
Dependence on sphere size of the phase behavior of mixtures of rods and spheres
NASA Astrophysics Data System (ADS)
Urakami, Naohito; Imai, Masayuki
2003-07-01
By the addition of chondroitin sulfate (Chs) to an aqueous suspension of tobacco mosaic virus (TMV), aggregation of TMV occurs at a very dilute TMV concentration compared with the addition of polyethylene oxide (PEO). The difference in physical behavior between Chs and PEO lies in their chain conformations in solution: the Chs chain is semirigid, whereas the PEO chain is flexible. In this study, the Chs and PEO chains are simplified to spherical particles of different sizes, and the TMV particle is represented by a spherocylinder model. The effect of sphere size on the phase behavior of mixtures of rods and spheres is investigated by Monte Carlo simulations. On addition of small spheres, the system transforms from the miscible isotropic phase to the miscible nematic phase. On the other hand, on addition of large spheres, the system changes from the miscible isotropic phase to the immiscible nematic phase through the immiscible isotropic phase. The different phase behaviors for small and large spheres originate from the difference in the overlapping volume of the depletion zone. In addition, we perform Monte Carlo simulations in which semirigid chains are used as models of the Chs chains. The same phase behaviors are observed as in the mixtures of rods and large spheres. Thus the sphere model captures the phase behavior of rod and polymer mixture systems.
New approach in direct-simulation of gas mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren
1991-01-01
Results are reported for an investigation of a new direct-simulation Monte Carlo method by which energy transfer and chemical reactions are calculated. The new method, which reduces to the variable cross-section hard sphere model as a special case, allows different viscosity-temperature exponents for each species in a gas mixture when combined with a modified Larsen-Borgnakke phenomenological model. This removes the most serious limitation of the usefulness of the model for engineering simulations. The necessary kinetic theory for the application of the new method to mixtures of monatomic or polyatomic gases is presented, including gas mixtures involving chemical reactions. Calculations are made for the relaxation of a diatomic gas mixture, a plane shock wave in a gas mixture, and a chemically reacting gas flow along the stagnation streamline in front of a hypersonic vehicle. Calculated results show that the introduction of different molecular interactions for each species in a gas mixture produces significant differences in comparison with a common molecular interaction for all species in the mixture. This effect should not be neglected for accurate DSMC simulations in an engineering context.
The structure of liquid water by polarized neutron diffraction and reverse Monte Carlo modelling.
Temleitner, László; Pusztai, László; Schweika, Werner
2007-08-22
The coherent static structure factor of water has been investigated by polarized neutron diffraction. Polarization analysis allows us to separate the huge incoherent scattering background from hydrogen and to obtain high quality data of the coherent scattering from four different mixtures of liquid H₂O and D₂O. The information obtained by the variation of the scattering contrast confines the configurational space of water and is used by the reverse Monte Carlo technique to model the total structure factors. Structural characteristics have been calculated directly from the resulting sets of particle coordinates. Consistency with existing partial pair correlation functions, derived without the application of polarized neutrons, was checked by incorporating them into our reverse Monte Carlo calculations. We also performed Monte Carlo simulations of a hard sphere system, which provides an accurate estimate of the information content of the measured data. It is shown that the present combination of polarized neutron scattering and reverse Monte Carlo structural modelling is a promising approach towards a detailed understanding of the microscopic structure of water.
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open source package, OpenFOAM, is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the square of the mixture fraction are solved and the variance is then computed from its definition. The results are found to be in a good agreement with the experimental data. Then the LES solver is combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism and the results are compared with the experimental data and the earlier PDF simulations. The Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
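The variance computation mentioned in the flamelet validation step is just the second-moment identity applied cell by cell; a one-line sketch with invented field values:

```python
import numpy as np

# var(Z) = <Z^2> - <Z>^2, from the two transported fields (values invented):
Z_mean = np.array([0.20, 0.45, 0.60])     # mean mixture fraction per cell
Z2_mean = np.array([0.05, 0.22, 0.38])    # mean of the squared mixture fraction
print(Z2_mean - Z_mean ** 2)              # -> [0.01   0.0175 0.02  ]
```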
NASA Astrophysics Data System (ADS)
Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng
2016-11-01
The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
Evaluating differential effects using regression interactions and regression mixture models
Van Horn, M. Lee; Jaki, Thomas; Masyn, Katherine; Howe, George; Feaster, Daniel J.; Lamont, Andrea E.; George, Melissa R. W.; Kim, Minjung
2015-01-01
Research increasingly emphasizes understanding differential effects. This paper focuses on understanding regression mixture models, a relatively new statistical method for assessing differential effects, by comparing results with those from using an interaction term in linear regression. The research questions that each model answers, their formulation, and their assumptions are compared using Monte Carlo simulations and real data analysis. The capabilities of regression mixture models are described, and specific issues to be addressed when conducting regression mixtures are proposed. The paper aims to clarify the role that regression mixtures can take in the estimation of differential effects and to increase awareness of the benefits and potential pitfalls of this approach. Regression mixture models are shown to be a potentially effective exploratory method for finding differential effects when these effects can be defined by a small number of classes of respondents who share a typical relationship between a predictor and an outcome. It is also shown that the comparison between regression mixture models and interactions becomes substantially more complex as the number of classes increases. It is argued that regression interactions are well suited for direct tests of specific hypotheses about differential effects, and regression mixtures provide a useful approach for exploring effect heterogeneity given adequate samples and study design. PMID:26556903
NASA Astrophysics Data System (ADS)
Capdeville, H.; Pédoussat, C.; Pitchford, L. C.
2002-02-01
The work presented in the article is a study of the heavy particle (ion and neutral) energy flux distributions to the cathode in conditions typical of discharges used for luminous signs for advertising ("neon" signs). The purpose of this work is to evaluate the effect of the gas mixture on the sputtering of the cathode. We have combined two models for this study: a hybrid model of the electrical properties of the cathode region of a glow discharge and a Monte Carlo simulation of the heavy particle trajectories. Using known sputtering yields for Ne, Ar, and Xe on iron cathodes, we estimate the sputtered atom flux for mixtures of Ar/Ne and Xe/Ne as a function of the percent neon in the mixture.
Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers
NASA Astrophysics Data System (ADS)
Theodorakis, P. E.; Avgeropoulos, A.; Freire, J. J.; Kosmas, M.; Vlahos, C.
2007-11-01
The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.
NASA Technical Reports Server (NTRS)
Hubbard, W. B.; Dewitt, H. E.
1985-01-01
A model free energy is presented which accurately represents results from 45 high-precision Monte Carlo calculations of the thermodynamics of hydrogen-helium mixtures at pressures of astrophysical and planetophysical interest. The free energy is calculated using free-electron perturbation theory (dielectric function theory), and is an extension of the expression given in an earlier paper in this series. However, it fits the Monte Carlo results more accurately, and is valid for the full range of compositions from pure hydrogen to pure helium. Using the new free energy, the phase diagram of mixtures of liquid metallic hydrogen and helium is calculated and compared with earlier results. Sample results for mixing volumes are also presented, and the new free energy expression is used to compute a theoretical Jovian adiabat and compare the adiabat with results from three-dimensional Thomas-Fermi-Dirac theory. The present theory gives slightly higher densities at pressures of about 10 megabars.
NASA Astrophysics Data System (ADS)
Errington, Jeffrey Richard
This work focuses on the development of intermolecular potential models for real fluids. United-atom models have been developed for both non-polar and polar fluids. The models have been optimized to the vapor-liquid coexistence properties. Histogram reweighting techniques were used to calculate phase behavior. The Hamiltonian scaling grand canonical Monte Carlo method was developed to enable the determination of thermodynamic properties of several related Hamiltonians from a single simulation. With this method, the phase behavior of variations of the Buckingham exponential-6 potential was determined. Reservoir grand canonical Monte Carlo simulations were developed to simulate molecules with complex architectures and/or stiff intramolecular constraints. The scheme is based on the creation of a reservoir of ideal chains from which structures are selected for insertion during a simulation. New intermolecular potential models have been developed for water, the n-alkane homologous series, benzene, cyclohexane, carbon dioxide, ammonia and methanol. The models utilize the Buckingham exponential-6 potential to model non-polar interactions and point charges to describe polar interactions. With the exception of water, the new models reproduce experimental saturated densities, vapor pressures and critical parameters to within a few percent. In the case of water, we found a set of parameters that describes the phase behavior better than other available point charge models while giving a reasonable description of the liquid structure. The mixture behavior of water-hydrocarbon mixtures has also been examined. The Henry's law constants of methane, ethane, benzene and cyclohexane in water were determined using Widom insertion and expanded ensemble techniques. In addition the high-pressure phase behavior of water-methane and water-ethane systems was studied using the Gibbs ensemble method. The results from this study indicate that it is possible to obtain a good description of the phase behavior of pure components using united-atom models. The mixture behavior of non-polar systems, including highly asymmetric components, was in good agreement with experiment. The calculations for the highly non-ideal water-hydrocarbon mixtures reproduced experimental behavior with varying degrees of success. The results indicate that multibody effects, such as polarizability, must be taken into account when modeling mixtures of polar and non-polar components.
NASA Astrophysics Data System (ADS)
Santos-Filho, J. B.; Plascak, J. A.
2017-09-01
The XY vectorial generalization of the Blume-Emery-Griffiths (XY-VBEG) model, which is suitable for studying ³He-⁴He mixtures, is treated on thin-film structures, and its thermodynamic properties are analyzed as a function of the film thickness. We employ extensive, up-to-date Monte Carlo simulations consisting of hybrid algorithms combining lattice-gas moves, Metropolis, Wolff, and super-relaxation procedures to overcome critical slowing down and correlations among different spin configurations of the system. We also make use of single-histogram techniques to obtain the behavior of the thermodynamic quantities close to the corresponding transition temperatures. Thin films of the XY-VBEG model present a quite rich phase diagram with Berezinskii-Kosterlitz-Thouless (BKT) transitions, BKT endpoints, and isolated critical points. As one varies the impurity concentration along the layers, and in the limit of infinite film thickness, the BKT transition endpoint and the isolated critical point coalesce into a single, unique tricritical point. In addition, when mimicking the behavior of thin films of ³He-⁴He mixtures, one finds that the concentration of ³He atoms decreases from the outer layers to the inner layers of the film, meaning that the superfluid particles tend to locate in the bulk of the system.
Coarse-Grained Molecular Monte Carlo Simulations of Liquid Crystal-Nanoparticle Mixtures
NASA Astrophysics Data System (ADS)
Neufeld, Ryan; Kimaev, Grigoriy; Fu, Fred; Abukhdeir, Nasser M.
Coarse-grained intermolecular potentials have proven capable of capturing essential details of interactions between complex molecules, while substantially reducing the number of degrees of freedom of the system under study. In the domain of liquid crystals, the Gay-Berne (GB) potential has been successfully used to model the behavior of rod-like and disk-like mesogens. However, only ellipsoid-like interaction potentials can be described with GB, making it a poor fit for many real-world mesogens. In this work, the results of Monte Carlo simulations of liquid crystal domains using the Zewdie-Corner (ZC) potential are presented. The ZC potential is constructed from an orthogonal series of basis functions, allowing for potentials of essentially arbitrary shapes to be modeled. We also present simulations of mixtures of liquid crystalline mesogens with nanoparticles. Experimentally these mixtures have been observed to exhibit microphase separation and formation of long-range networks under some conditions. This highlights the need for a coarse-grained approach which can capture salient details on the molecular scale while simulating sufficiently large domains to observe these phenomena. We compare the phase behavior of our simulations with that of a recently presented continuum theory. This work was made possible by the Natural Sciences and Engineering Research Council of Canada and Compute Ontario.
Using a multinomial tree model for detecting mixtures in perceptual detection
Chechile, Richard A.
2014-01-01
In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
NASA Astrophysics Data System (ADS)
Rabie, M.; Franck, C. M.
2016-06-01
We present a freely available MATLAB code for the simulation of electron transport in arbitrary gas mixtures in the presence of uniform electric fields. For steady-state electron transport, the program provides the transport coefficients, reaction rates and the electron energy distribution function. The program uses established Monte Carlo techniques and is compatible with the electron scattering cross section files from the open-access Plasma Data Exchange Project LXCat. The code is written in object-oriented design, allowing the tracing and visualization of the spatiotemporal evolution of electron swarms and the temporal development of the mean energy and the electron number due to attachment and/or ionization processes. We benchmark our code with well-known model gases as well as the real gases argon, N2, O2, CF4, SF6 and mixtures of N2 and O2.
Monte Carlo simulation of two-component bilayers: DMPC/DSPC mixtures.
Sugár, I P; Thompson, T E; Biltonen, R L
1999-01-01
In this paper, we describe a relatively simple lattice model of a two-component, two-state phospholipid bilayer. Application of Monte Carlo methods to this model permits simulation of the observed excess heat capacity versus temperature curves of dimyristoylphosphatidylcholine (DMPC)/distearoylphosphatidylcholine (DSPC) mixtures as well as the lateral distributions of the components and properties related to these distributions. The analysis of the bilayer energy distribution functions reveals that the gel-fluid transition is a continuous transition for DMPC, DSPC, and all DMPC/DSPC mixtures. A comparison of the thermodynamic properties of DMPC/DSPC mixtures with the configurational properties shows that the temperatures characteristic of the configurational properties correlate well with the maxima in the excess heat capacity curves rather than with the onset and completion temperatures of the gel-fluid transition. In the gel-fluid coexistence region, we also found excellent agreement between the threshold temperatures at different system compositions detected in fluorescence recovery after photobleaching experiments and the temperatures at which the percolation probability of the gel clusters is 0.36. At every composition, the calculated mole fraction of gel-state molecules at the fluorescence recovery after photobleaching threshold is 0.34 and, at the percolation threshold of gel clusters, it is 0.24. The percolation threshold mole fraction of gel or fluid lipid depends on the packing geometry of the molecules and the interchain interactions. However, it is independent of temperature, system composition, and state of the percolating cluster. PMID:10096905
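The flavor of such a simulation can be captured in a few lines: a two-component lattice in which each site carries a fixed lipid identity and a gel/fluid state, updated by Metropolis flips with a component-dependent excitation energy and a penalty for unlike-state neighbors. This is a deliberately stripped-down sketch (square lattice, invented reduced units), not the published DMPC/DSPC parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 32
comp = (rng.random((N, N)) < 0.5).astype(int)   # 0 = DMPC-like, 1 = DSPC-like site
state = np.zeros((N, N), dtype=int)             # 0 = gel, 1 = fluid
dE_state = np.array([1.0, 1.6])                 # fluid-excitation cost per component
J = 0.35                                        # unlike-state neighbour penalty
T = 1.2                                         # temperature in reduced units

def local_energy(i, j, s):
    nb = [((i + 1) % N, j), ((i - 1) % N, j), (i, (j + 1) % N), (i, (j - 1) % N)]
    mismatch = sum(s != state[a, b] for a, b in nb)
    return s * dE_state[comp[i, j]] + J * mismatch

for sweep in range(200):
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        s_new = 1 - state[i, j]                 # propose a gel <-> fluid flip
        dE = local_energy(i, j, s_new) - local_energy(i, j, state[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            state[i, j] = s_new
print("fluid fraction:", state.mean())
```

Sweeping T and comparing the fluid fraction per component is the lattice analogue of tracing the mixture's melting curves.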
Force field development with GOMC, a fast new Monte Carlo molecular simulation code
NASA Astrophysics Data System (ADS)
Mick, Jason Richard
In this work GOMC (GPU Optimized Monte Carlo), a new fast, flexible, and free molecular Monte Carlo code for the simulation of atomistic chemical systems, is presented. The results of a large Lennard-Jonesium simulation in the Gibbs ensemble are presented. Force fields developed using the code are also presented. To fit the models, a quantitative fitting process is outlined using a scoring function and heat maps. The presented n-6 force fields include force fields for noble gases and branched alkanes. These force fields are shown to be the most accurate LJ or n-6 force fields to date for these compounds, capable of reproducing pure-fluid behavior and binary mixture behavior to a high degree of accuracy.
Infinite von Mises-Fisher Mixture Modeling of Whole Brain fMRI Data.
Røge, Rasmus E; Madsen, Kristoffer H; Schmidt, Mikkel N; Mørup, Morten
2017-10-01
Cluster analysis of functional magnetic resonance imaging (fMRI) data is often performed using gaussian mixture models, but when the time series are standardized such that the data reside on a hypersphere, this modeling assumption is questionable. The consequences of ignoring the underlying spherical manifold are rarely analyzed, in part due to the computational challenges imposed by directional statistics. In this letter, we discuss a Bayesian von Mises-Fisher (vMF) mixture model for data on the unit hypersphere and present an efficient inference procedure based on collapsed Markov chain Monte Carlo sampling. Comparing the vMF and gaussian mixture models on synthetic data, we demonstrate that the vMF model has a slight advantage inferring the true underlying clustering when compared to gaussian-based models on data generated from both a mixture of vMFs and a mixture of gaussians subsequently normalized. Thus, when performing model selection, the two models are not in agreement. Analyzing multisubject whole brain resting-state fMRI data from healthy adult subjects, we find that the vMF mixture model is considerably more reliable than the gaussian mixture model when comparing solutions across models trained on different groups of subjects, and again we find that the two models disagree on the optimal number of components. The analysis indicates that the fMRI data support more than a thousand clusters, and we confirm this is not a result of overfitting by demonstrating better prediction on data from held-out subjects. Our results highlight the utility of using directional statistics to model standardized fMRI data and demonstrate that whole brain segmentation of fMRI data requires a very large number of functional units in order to adequately account for the discernible statistical patterns in the data.
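To make the modeling assumption concrete, the sketch below evaluates the von Mises-Fisher log-density on the unit hypersphere and the resulting soft cluster responsibilities under a small mixture with fixed, equal weights; the dimensions and concentration value are arbitrary. The paper's collapsed MCMC sampler builds on this same likelihood.

```python
import numpy as np
from scipy.special import ive

def vmf_logpdf(X, mu, kappa):
    """Log-density of vMF(mu, kappa) for unit vectors X in d dimensions."""
    d = X.shape[1]
    # log C_d(kappa), using the exponentially scaled Bessel function for stability:
    # log I_v(kappa) = log ive(v, kappa) + kappa.
    log_c = ((d / 2 - 1) * np.log(kappa) - (d / 2) * np.log(2 * np.pi)
             - (np.log(ive(d / 2 - 1, kappa)) + kappa))
    return log_c + kappa * X @ mu

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # standardized data -> hypersphere
mus = rng.normal(size=(3, 10))
mus /= np.linalg.norm(mus, axis=1, keepdims=True) # three random cluster directions
log_p = np.stack([vmf_logpdf(X, m, 20.0) for m in mus], axis=1) + np.log(1 / 3)
resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
resp /= resp.sum(axis=1, keepdims=True)           # mixture responsibilities
print(resp.round(3))
```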
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering, and excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might lead to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally proposed for modelling positive continuous data and later extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients and by identifying useful covariates which affect the count level. The models are implemented using Bayesian methods with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures.
Structure and Thermodynamics of Polyolefin Melts
NASA Astrophysics Data System (ADS)
Weinhold, J. D.; Curro, J. G.; Habenschuss, A.; Londono, J. D.
1997-03-01
Subtle differences in the intermolecular packing of various polyolefins can create dissimilar permeability and mixing behavior. We have used a combination of the Polymer Reference Interaction Site Model (PRISM) and Monte Carlo simulation to study the structural and thermodynamic properties of realistic models for polyolefins. Results for polyisobutylene and syndiotactic polypropylene will be presented along with comparisons to wide-angle x-ray scattering experiments and properties determined from previous studies of polyethylene and isotactic polypropylene. Our technique uses a Monte Carlo simulation on an isolated molecule to determine the polymer's intramolecular structure. With this information, PRISM theory can predict the intermolecular packing for any liquid density and/or mixture composition in a computationally efficient manner. This approach will then be used to explore the mixing behavior of these polyolefins.
ERIC Educational Resources Information Center
Donoghue, John R.
A Monte Carlo study compared the usefulness of six variable weighting methods for cluster analysis. Data were 100 bivariate observations from 2 subgroups, generated according to a finite normal mixture model. Subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. Of the clustering…
Internal structure of shock waves in disparate mass mixtures
NASA Technical Reports Server (NTRS)
Chung, Chan-Hong; De Witt, Kenneth J.; Jeng, Duen-Ren; Penko, Paul F.
1992-01-01
The detailed flow structure of a normal shock wave for a gas mixture is investigated using the direct-simulation Monte Carlo method. A variable diameter hard-sphere (VDHS) model is employed to investigate the effect of different viscosity temperature exponents (VTE) for each species in a gas mixture. Special attention is paid to the irregular behavior in the density profiles which was previously observed in a helium-xenon experiment. It is shown that the VTE can have substantial effects in the prediction of the structure of shock waves. The variable hard-sphere model of Bird shows good agreement, but with some limitations, with the experimental data if a common VTE is chosen properly for each case. The VDHS model shows better agreement with the experimental data without adjusting the VTE. The irregular behavior of the light-gas component in shock waves of disparate mass mixtures is observed not only in the density profile, but also in the parallel temperature profile. The strength of the shock wave, the type of molecular interactions, and the mole fraction of heavy species have substantial effects on the existence and structure of the irregularities.
Lattice model for water-solute mixtures.
Furlan, A P; Almarza, N G; Barbosa, M C
2016-10-14
A lattice model for the study of mixtures of associating liquids is proposed. Solvent and solute are modeled by adapting the associating lattice gas (ALG) model. The nature of the solute/solvent interaction is controlled by tuning the energy interactions between the patches of the ALG model. We have studied three sets of parameters, resulting in hydrophilic, inert, and hydrophobic interactions. Extensive Monte Carlo simulations were carried out, and the behavior of the pure components and the excess properties of the mixtures have been studied. The pure components, water (solvent) and solute, have quite similar phase diagrams, presenting gas, low density liquid, and high density liquid phases. In the case of the solute, the regions of coexistence are substantially reduced when compared with both the water and the standard ALG models. A numerical procedure has been developed in order to attain series of results at constant pressure from simulations of the lattice gas model in the grand canonical ensemble. The excess properties of the mixtures, volume and enthalpy as functions of the solute fraction, have been studied for different interaction parameters of the model. Our model is able to reproduce qualitatively well the excess volume and enthalpy for different aqueous solutions. For the hydrophilic case, we show that the model is able to reproduce the excess volume and enthalpy of mixtures of small alcohols and amines. The inert case reproduces the behavior of large alcohols such as propanol, butanol, and pentanol. For the last case (hydrophobic), the excess properties reproduce the behavior of ionic liquids in aqueous solution.
Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn
2013-01-01
Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059
NASA Astrophysics Data System (ADS)
Mazzola, Guglielmo; Helled, Ravit; Sorella, Sandro
2018-01-01
Understanding planetary interiors is directly linked to our ability of simulating exotic quantum mechanical systems such as hydrogen (H) and hydrogen-helium (H-He) mixtures at high pressures and temperatures. Equation of state (EOS) tables based on density functional theory are commonly used by planetary scientists, although this method allows only for a qualitative description of the phase diagram. Here we report quantum Monte Carlo (QMC) molecular dynamics simulations of pure H and H-He mixture. We calculate the first QMC EOS at 6000 K for a H-He mixture of a protosolar composition, and show the crucial influence of He on the H metallization pressure. Our results can be used to calibrate other EOS calculations and are very timely given the accurate determination of Jupiter's gravitational field from the NASA Juno mission and the effort to determine its structure.
Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.
2018-01-01
Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods and scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences of Q/I, U/I, and V/I are of the order of 10⁻⁵ to 10⁻⁴, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543
NASA Astrophysics Data System (ADS)
Marshall, Bennett D.; Chapman, Walter G.
2013-09-01
In this work we develop a new theory to model self-assembling mixtures of single-patch colloids and colloids with spherically symmetric attractions. In the development of the theory we restrict the interactions such that there are short-ranged attractions between patchy and spherically symmetric colloids, but patchy colloids do not attract patchy colloids and spherically symmetric colloids do not attract spherically symmetric colloids. This results in the temperature-, density-, and composition-dependent reversible self-assembly of the mixture into colloidal star molecules. This type of mixture has been recently synthesized by grafting of complementary single-stranded DNA [L. Feng, R. Dreyfus, R. Sha, N. C. Seeman, and P. M. Chaikin, Adv. Mater. 25(20), 2779-2783 (2013)], 10.1002/adma.201204864. As a quantitative test of the theory, we perform new Monte Carlo simulations to study the self-assembly of these mixtures; theory and simulation are found to be in excellent agreement.
Estimating Mixture of Gaussian Processes by Kernel Smoothing
Huang, Mian; Li, Runze; Wang, Hansheng; Yao, Weixin
2014-01-01
When the functional data are not homogeneous, e.g., there exist multiple classes of functional curves in the dataset, traditional estimation methods may fail. In this paper, we propose a new estimation procedure for the Mixture of Gaussian Processes, to incorporate both functional and inhomogeneous properties of the data. Our method can be viewed as a natural extension of high-dimensional normal mixtures. However, the key difference is that smoothed structures are imposed for both the mean and covariance functions. The model is shown to be identifiable, and can be estimated efficiently by a combination of the ideas from EM algorithm, kernel regression, and functional principal component analysis. Our methodology is empirically justified by Monte Carlo simulations and illustrated by an analysis of a supermarket dataset. PMID:24976675
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimates. Typically, yield calculation requires many SPICE simulations, and the circuit SPICE simulations account for the largest share of the time in the yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed by running SPICE simulations to obtain a set of sample points, which are then used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we develop an accelerated algorithm to further enhance the speed of the yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
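A minimal sketch of the surrogate-plus-Monte-Carlo pattern the abstract describes, with a synthetic stand-in for SPICE (the response function, feature set, lasso penalty, and failure threshold are all invented): fit a sparse model on a few hundred expensive samples, then estimate the failure rate with cheap surrogate evaluations.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)

def spice(x):
    """Synthetic stand-in for a SPICE-measured cell metric vs. process variables."""
    return 1.0 - 0.8 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

d = 6
X_train = rng.normal(size=(300, d))                     # ~300 expensive "SPICE" runs
feats = lambda X: np.hstack([X, X ** 2, X[:, :1] * X])  # simple polynomial features
surrogate = Lasso(alpha=0.01).fit(feats(X_train), spice(X_train))

# Cheap surrogate-based Monte Carlo: many samples without further SPICE calls.
X_mc = rng.normal(size=(200_000, d))
fail_rate = np.mean(surrogate.predict(feats(X_mc)) < 0.0)  # hypothetical criterion
print("estimated failure rate:", fail_rate)
```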
Premixing quality and flame stability: A theoretical and experimental study
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.; Heywood, J. B.; Tabaczynski, R. J.
1979-01-01
Models for predicting flame ignition and blowout in a combustor primary zone are presented. A correlation for the blowoff velocity of premixed turbulent flames is developed using the basic quantities of turbulent flow and the laminar flame speed. A statistical model employing a Monte Carlo calculation procedure is developed to account for nonuniformities in a combustor primary zone. An overall kinetic rate equation is used to describe the fuel oxidation process. The model is used to predict the lean ignition and blowout limits of premixed turbulent flames; the effects of mixture nonuniformity on the lean ignition limit are explored using an assumed distribution of fuel-air ratios. Data on the effects of variations in inlet temperature, reference velocity, and mixture uniformity on the lean ignition and blowout limits of gaseous propane-air flames are presented.
NASA Astrophysics Data System (ADS)
Benstâali, W.; Harrache, Z.; Belasri, A.
2012-06-01
Plasma display panels (PDPs) are one of the leading technologies in the flat-panel market. However, they are facing intense competition. Different fluid models, both one-dimensional (1D) and two-dimensional (2D), have been used to analyze the energy balance in PDP cells in order to find out how the xenon excitation part can be improved to optimize the luminous efficiency. The aim of this work is to present a 1D particle-in-cell with Monte Carlo collision (PIC-MCC) model for PDPs. The discharge takes place in a Xe(10%)-Ne gas mixture at 560 Torr. The applied voltage is 381 V. We show at first that this model reproduces the electric characteristics of a single PDP discharge pulse. Then, we calculate the energy deposited by charged particles in each collision. The total energy is about 19 μJ cm⁻², and the energy used in xenon excitation is of the order of 12.5% of the total energy deposited in the discharge. The effect of the xenon content in the Xe-Ne mixture is also analyzed. The energies deposited in xenon excitation and ionization increase as the xenon percentage is raised from 1% to 30%.
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until the desired sampling space is reached. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
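As a toy illustration of the multilevel idea, consider sampling a 16-QAM indicator quadrant-first and then symbol-within-quadrant; the weights are assumed SIS importance weights, and this two-level decomposition is a simplified stand-in for the paper's hierarchical sampling spaces.

```python
# Two-level draw of a 16-QAM symbol indicator: quadrant, then symbol.
import numpy as np

rng = np.random.default_rng(1)
w = rng.random(16)
w /= w.sum()                        # assumed posterior symbol weights
quads = w.reshape(4, 4)             # level 1: four quadrants of four symbols

q = rng.choice(4, p=quads.sum(axis=1))            # high-level draw
s = rng.choice(4, p=quads[q] / quads[q].sum())    # low-level draw
print("sampled 16-QAM symbol index:", 4 * q + s)
```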
Campbell, Kieran R; Yau, Christopher
2017-03-15
Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero-inflation in single-cell RNA-seq data and quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.
Deruytter, David; Baert, Jan M; Nevejan, Nancy; De Schamphelaere, Karel A C; Janssen, Colin R
2017-12-01
Little is known about the effect of metal mixtures on marine organisms, especially after exposure to environmentally realistic concentrations. This information is, however, required to evaluate the need to include mixtures in future environmental risk assessment procedures. We assessed the effect of copper (Cu)-nickel (Ni) binary mixtures on Mytilus edulis larval development using a full factorial design that included environmentally relevant metal concentrations and ratios. The reproducibility of the results was assessed by repeating this experiment 5 times. The observed mixture effects were compared with the effects predicted with the concentration addition model. Deviations from the concentration addition model were estimated using a Markov chain Monte Carlo algorithm, which enabled accurate estimation of the deviations and their uncertainty. The results demonstrated reproducibly that the type of interaction (synergism or antagonism) mainly depended on the Ni concentration. Antagonism was observed at high Ni concentrations, whereas synergism occurred at Ni concentrations as low as 4.9 μg Ni/L. This low (and realistic) Ni concentration is 1% of the median effective concentration (EC50) of Ni, or 57% of the Ni predicted-no-effect concentration (PNEC) in the European Union environmental risk assessment. It is concluded that results from mixture studies should not be extrapolated to concentrations or ratios other than those investigated and that significant mixture interactions can occur at environmentally realistic concentrations. This should be accounted for in (marine) environmental risk assessment of metals. Environ Toxicol Chem 2017;36:3471-3479. © 2017 SETAC.
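For readers unfamiliar with the concentration addition reference model used above, the sketch below solves the standard mixture-effect condition, sum of c_i/ECx_i equal to 1, for the predicted effect level of a binary mixture; the EC50s and slopes are illustrative numbers, not the study's fitted values.

```python
# Hedged sketch of a concentration-addition prediction for a Cu-Ni mixture.
import numpy as np
from scipy.optimize import brentq

ec50 = {"Cu": 10.0, "Ni": 500.0}   # assumed EC50s (ug/L)
slope = {"Cu": 2.0, "Ni": 1.5}     # assumed log-logistic slopes

def ecx(metal, x):
    # Effect concentration for fraction-affected x under a log-logistic curve.
    return ec50[metal] * (x / (1.0 - x)) ** (1.0 / slope[metal])

def ca_effect(c_cu, c_ni):
    # Concentration addition: find x with c_Cu/ECx_Cu + c_Ni/ECx_Ni = 1.
    f = lambda x: c_cu / ecx("Cu", x) + c_ni / ecx("Ni", x) - 1.0
    return brentq(f, 1e-9, 1 - 1e-9)

print("predicted fraction affected:", ca_effect(5.0, 100.0))
```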
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...
2016-01-19
An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP, and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.
Molecular simulation of water removal from simple gases with zeolite NaA.
Csányi, Eva; Ható, Zoltán; Kristóf, Tamás
2012-06-01
Water vapor removal from some simple gases using zeolite NaA was studied by molecular simulation. The equilibrium adsorption properties of H2O, CO, H2, CH4 and their mixtures in dehydrated zeolite NaA were computed by grand canonical Monte Carlo simulations. The simulations employed Lennard-Jones + Coulomb type effective pair potential models, which are suitable for the reproduction of thermodynamic properties of pure substances. Based on the comparison of the simulation results with experimental data for single-component adsorption at different temperatures and pressures, a modified interaction potential model for the zeolite is proposed. In the adsorption simulations with mixtures presented here, the zeolite exhibits extremely high selectivity for water over the investigated weakly polar/non-polar gases, demonstrating the excellent dehydration ability of zeolite NaA in engineering applications.
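The grand canonical Monte Carlo machinery behind such adsorption calculations boils down to the textbook insertion/deletion acceptance rules; the sketch below is a generic single-component Lennard-Jones version in reduced units (thermal wavelength set to 1), not the authors' zeolite model or force field.

```python
# Bare-bones grand canonical MC sketch: insertion/deletion moves only.
import numpy as np

rng = np.random.default_rng(2)
L, beta, mu = 8.0, 1.2, -3.0          # box edge, 1/kT, chemical potential
pos = list(rng.random((20, 3)) * L)   # initial particle positions
V = L ** 3

def u_one(p, others):
    # LJ energy of particle p with all others (minimum-image convention).
    if not others:
        return 0.0
    d = np.array(others) - p
    d -= L * np.round(d / L)
    inv6 = 1.0 / (d ** 2).sum(axis=1) ** 3
    return float((4.0 * (inv6 ** 2 - inv6)).sum())

for step in range(20000):
    if rng.random() < 0.5:                       # attempt insertion
        new = rng.random(3) * L
        dU = u_one(new, pos)
        if rng.random() < min(1.0, V / (len(pos) + 1) * np.exp(beta * (mu - dU))):
            pos.append(new)
    elif pos:                                    # attempt deletion
        i = rng.integers(len(pos))
        dU = -u_one(pos[i], pos[:i] + pos[i + 1:])
        if rng.random() < min(1.0, len(pos) / V * np.exp(-beta * mu - beta * dU)):
            pos.pop(i)

print("average reduced density ~", len(pos) / V)
```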
Chiappini, Massimiliano; Eiser, Erika; Sciortino, Francesco
2017-01-01
A new gel-forming colloidal system, based on a binary mixture of fd-viruses and gold nanoparticles functionalized with complementary DNA single strands, has recently been introduced. Upon quenching below the DNA melt temperature, such a system results in a highly porous gel state that may be developed into a new functional material of tunable porosity. In order to shed light on the gelation mechanism, we introduce a model closely mimicking the experimental one and explore its equilibrium phase diagram via Monte Carlo simulations. Specifically, we model the system as a binary mixture of hard rods and hard spheres mutually interacting via a short-range square-well attractive potential. In the experimental conditions, we find evidence of a phase separation occurring either via nucleation-and-growth or via spinodal decomposition. The spinodal decomposition leads to the formation of small clusters of bonded rods and spheres whose further diffusion and aggregation leads to the formation of a percolating network in the system. Our results are consistent with the hypothesis that the mixture of DNA-coated fd-viruses and gold nanoparticles undergoes a non-equilibrium gelation via an arrested spinodal decomposition mechanism.
NASA Astrophysics Data System (ADS)
Wang, Dong; Tse, Peter W.
2015-05-01
Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating the remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Second, a state space model of the proposed health index is constructed, and a general sequential Monte Carlo method is employed to derive its parameters. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
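The sequential Monte Carlo step follows the standard bootstrap particle filter pattern; the sketch below applies it to a hypothetical linear degradation index with an assumed drift and alert threshold, not the paper's pump data or state-space model.

```python
# Bootstrap particle filter on a toy degradation index, then RUL by
# extrapolating each particle to the alert threshold.
import numpy as np

rng = np.random.default_rng(3)
N, a, q, r, thresh = 2000, 0.05, 0.02, 0.1, 5.0   # particles, drift, noises, alert

particles = rng.normal(0.0, 0.1, N)               # initial degradation states

def pf_step(particles, y):
    particles = particles + a + rng.normal(0.0, q, N)   # propagate
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)       # likelihood weights
    w /= w.sum()
    return particles[rng.choice(N, N, p=w)]             # resample

y_obs = 0.05 * np.arange(1, 61) + rng.normal(0, 0.1, 60)  # synthetic index
for y in y_obs:
    particles = pf_step(particles, y)

rul = (thresh - particles) / a                    # remaining steps to threshold
print("median RUL (time steps):", np.median(rul))
```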
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Ferrando, Nicolas; Lachet, Véronique; Boutin, Anne
2010-07-08
Ketone and aldehyde molecules are involved in a large variety of industrial applications. Because they are mainly present mixed with other compounds, the prediction of phase equilibria of mixtures involving these classes of molecules is of prime interest, particularly for the design and optimization of separation processes. The main goal of this work is to propose a transferable force field for ketones and aldehydes that allows accurate molecular simulations not only of pure compounds but also of complex mixtures. The proposed force field is based on the anisotropic united-atom AUA4 potential developed for hydrocarbons, and it introduces only one new atom, the carbonyl oxygen. The Lennard-Jones parameters of this oxygen atom have been adjusted on saturated thermodynamic properties of both acetone and acetaldehyde. To simulate mixtures, Monte Carlo simulations are carried out in a specific pseudo-ensemble which allows a direct calculation of the bubble pressure. For the polar mixtures involved in this study, we show that this approach is an interesting alternative to classical calculations in the isothermal-isobaric Gibbs ensemble. The pressure-composition diagrams of polar + polar and polar + nonpolar binary mixtures are well reproduced. Mutual solubilities as well as azeotrope locations, if present, are accurately predicted without any empirical binary interaction parameters or readjustment. Such results highlight the transferability of the proposed force field, which is an essential feature toward the simulation of complex oxygenated mixtures of industrial interest.
Compressible or incompressible blend of interacting monodisperse linear polymers near a surface.
Batman, Richard; Gujrati, P D
2007-08-28
We consider a lattice model of a mixture of repulsive, attractive, or neutral monodisperse linear polymers of two species, A and B, with a third monomeric species C, which may be taken to represent free volume. The mixture is confined between two hard, parallel plates of variable separation whose interactions with A and C may be attractive, repulsive, or neutral, and may be different from each other. The interactions with A and C are all that are required to completely specify the effect of each surface on all three components. We numerically study various density profiles as we move away from the surface, by using the recursive method of Gujrati and Chhajer [J. Chem. Phys. 106, 5599 (1997)] that has already been previously applied to study polydisperse solutions and blends next to surfaces. The resulting density profiles show the oscillations that are seen in Monte Carlo simulations and the enrichment of the smaller species at a neutral surface. The method is computationally ultrafast and can be carried out on a personal computer (PC), even in the incompressible case, when Monte Carlo simulations are not feasible. The calculations of density profiles usually take less than 20 min on a PC.
NASA Astrophysics Data System (ADS)
Gámez, Francisco; Acemel, Rafael D.; Cuetos, Alejandro
2013-10-01
The Parsons-Lee approach is formulated for the isotropic-nematic transition in a binary mixture of oblate hard spherocylinders and hard spheres. Results for the phase coexistence and for the equation of state in both phases are presented for fluids with different relative sizes and composition ranges. The predicted behaviour is in qualitative agreement with Monte Carlo simulations. The study provides a rational view of how to control key aspects of the behaviour of these binary nematogenic colloidal systems, which can be tuned with an appropriate choice of the relative size and molar fractions of the depleting particles. In general, the mixture of discotic and spherical particles is stable against demixing up to very high packing fractions. We explore in detail the narrow geometrical range where demixing is predicted to be possible in the isotropic phase. The influence of molecular crowding effects on the stability of the mixture when spherical molecules are added to a system of discotic colloids is also studied.
3D PIC-MCC simulations of discharge inception around a sharp anode in nitrogen/oxygen mixtures
NASA Astrophysics Data System (ADS)
Teunissen, Jannis; Ebert, Ute
2016-08-01
We investigate how photoionization, electron avalanches and space charge affect the inception of nanosecond pulsed discharges. Simulations are performed with a 3D PIC-MCC (particle-in-cell, Monte Carlo collision) model with adaptive mesh refinement for the field solver. This model, whose source code is available online, is described in the first part of the paper. Then we present simulation results in a needle-to-plane geometry, using different nitrogen/oxygen mixtures at atmospheric pressure. In these mixtures non-local photoionization is important for the discharge growth. The typical length scale for this process depends on the oxygen concentration. With 0.2% oxygen the discharges grow quite irregularly, due to the limited supply of free electrons around them. With 2% or more oxygen the development is much smoother. An almost spherical ionized region can form around the electrode tip, which increases in size with the electrode voltage. Eventually this inception cloud destabilizes into streamer channels. In our simulations, discharge velocities are almost independent of the oxygen concentration. We discuss the physical mechanisms behind these phenomena and compare our simulations with experimental observations.
Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee
2015-01-01
Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequence of constraining the residual variances on class enumeration (finding the true number of latent classes) and parameter estimates under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions were made. PMID:26139512
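The constraint under study can be made concrete with a toy two-class regression mixture fitted by EM, once with class-specific residual variances and once with the equality constraint; this is an illustrative sketch, not the simulation code used in the study.

```python
# Two-class regression mixture via EM, with and without equal residual variances.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
z = rng.random(n) < 0.5                       # true class labels
y = np.where(z, 1.0 + 2.0 * x + rng.normal(0, 0.5, n),
                1.0 - 1.0 * x + rng.normal(0, 1.5, n))   # unequal variances

def em(x, y, equal_var, iters=200):
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    b = np.array([[0.0, 1.0], [0.0, -1.0]])   # [intercept, slope] per class
    s2, pi = np.array([1.0, 1.0]), np.array([0.5, 0.5])
    for _ in range(iters):
        mu = b @ X.T                                          # 2 x n class means
        dens = pi[:, None] / np.sqrt(2 * np.pi * s2[:, None]) \
               * np.exp(-0.5 * (y - mu) ** 2 / s2[:, None])
        resp = dens / dens.sum(axis=0)                        # E-step
        sse = 0.0
        for k in range(2):                                    # M-step: weighted LS
            w = resp[k]
            b[k] = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)
            r2 = (y - X @ b[k]) ** 2
            s2[k] = (w * r2).sum() / w.sum()
            sse += (w * r2).sum()
        if equal_var:                                         # equality constraint
            s2[:] = sse / n
        pi = resp.mean(axis=1)
    return b, s2, pi

for flag in (False, True):
    b, s2, pi = em(x, y, equal_var=flag)
    print(f"equal_var={flag}: slopes={b[:, 1].round(2)}, "
          f"variances={s2.round(2)}, class sizes={pi.round(2)}")
```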
NASA Astrophysics Data System (ADS)
Akashi, Haruaki; Sasaki, K.; Yoshinaga, T.
2011-10-01
Recently, plasma-assisted combustion has attracted attention as a way of achieving more efficient combustion of fossil fuels, reducing pollutants, and so on. Shinohara et al. have reported that the flame length of a premixed methane-air burner was shortened by irradiation with microwave power without an increase in gas temperature. This suggests that electrons heated by the microwave electric field assist the combustion. They also measured emission from the second positive band system (2nd PBS) of nitrogen during the irradiation. To clarify this mechanism, the electron behavior under microwave power should be examined. To obtain electron transport parameters, electron Monte Carlo simulations in methane-air mixture gas have been performed, and a simple model has been developed to simulate the inside of the flame. To keep this model simple, some assumptions are made: the electrons diffuse from the combustion plasma region, and they quickly reach their equilibrium state. The simulated emission from the 2nd PBS is found to agree with the experimental result. This work was supported by KAKENHI (22340170).
What are hierarchical models and how do we analyze them?
Royle, Andy
2016-01-01
In this chapter we provide a basic definition of hierarchical models and introduce the two canonical hierarchical models in this book: site occupancy and N-mixture models. The former is a hierarchical extension of logistic regression and the latter is a hierarchical extension of Poisson regression. We introduce basic concepts of probability modeling and statistical inference, including likelihood and Bayesian perspectives. We go through the mechanics of maximizing the likelihood and characterizing the posterior distribution by Markov chain Monte Carlo (MCMC) methods. We give a general perspective on topics such as model selection and assessment of model fit, although we demonstrate these topics in practice in later chapters (especially Chapters 5, 6, 7, and 10).
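As a minimal illustration of the site-occupancy model mentioned above, the sketch below simulates detection histories and maximizes the marginal likelihood directly: a site is occupied with probability psi, and an occupied site is detected on each of J visits with probability p. All values and variable names are illustrative.

```python
# Site-occupancy model: simulate data, then maximum likelihood estimation.
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, expit

rng = np.random.default_rng(5)
S, J, psi_true, p_true = 500, 4, 0.6, 0.3
occ = rng.random(S) < psi_true                    # latent occupancy states
y = np.where(occ, rng.binomial(J, p_true, S), 0)  # detections per site

def negloglik(theta):
    psi, p = expit(theta)                         # map to (0, 1)
    # Marginal likelihood: occupied-and-detected term + never-occupied term.
    lik = psi * comb(J, y) * p ** y * (1 - p) ** (J - y) + (1 - psi) * (y == 0)
    return -np.log(lik).sum()

fit = minimize(negloglik, x0=[0.0, 0.0])
print("psi, p estimates:", np.round(expit(fit.x), 3))
```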
NASA Astrophysics Data System (ADS)
Khezripour, S.; Negarestani, A.; Rezaie, M. R.
2017-08-01
The Micromegas detector has recently been used for high-energy neutron (HEN) detection, but the aim of this research is to investigate its response to low-energy neutrons (LEN). For this purpose, a Micromegas detector (with air, P10, BF3, 3He, and an Ar/BF3 mixture) was optimized for the detection of 60 keV neutrons using the MCNP (Monte Carlo N-Particle) code. The simulation results show that the optimum thickness of the cathode is 1 mm and the optimum microgrid location is 100 μm above the anode. The output current of this detector for the Ar (3%) + BF3 (97%) mixture is greater than for the other fillings; this mixture is therefore considered the appropriate gas for the Micromegas neutron detector, providing an output current for 60 keV neutrons at the level of 97.8 nA per neutron. Consequently, this detector can be introduced as a LEN detector.
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance are very close to measurements. Our proposed statistical compact-model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
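The closing idea, representing non-Gaussian parameter statistics with mixtures of Gaussians, is easy to prototype; the sketch below fits a two-component mixture to a hypothetical skewed threshold-voltage sample and then draws statistical samples from it for Monte Carlo circuit simulation.

```python
# Capture a non-Gaussian compact-model parameter with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
vth = np.concatenate([rng.normal(0.45, 0.01, 8000),
                      rng.normal(0.48, 0.02, 2000)])   # assumed bimodal Vth data

gm = GaussianMixture(n_components=2).fit(vth.reshape(-1, 1))
print("weights:", gm.weights_.round(3))
print("means:", gm.means_.ravel().round(3))

# Draw statistical samples for Monte Carlo circuit simulation.
samples, _ = gm.sample(100_000)
```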
Lachet, V; Teuler, J-M; Rousseau, B
2015-01-08
A classical all-atoms force field for molecular simulations of hydrofluorocarbons (HFCs) has been developed. Lennard-Jones force centers plus point charges are used to represent dispersion-repulsion and electrostatic interactions. Parametrization of this force field has been performed iteratively using three target properties of pentafluorobutane: the quantum energy of an isolated molecule, the dielectric constant in the liquid phase, and the compressed liquid density. The accuracy and transferability of this new force field has been demonstrated through the simulation of different thermophysical properties of several fluorinated compounds, showing significant improvements compared to existing models. This new force field has been applied to study solubilities of several gases in poly(vinylidene fluoride) (PVDF) above the melting temperature of this polymer. The solubility of CH4, CO2, H2S, H2, N2, O2, and H2O at infinite dilution has been computed using test particle insertions in the course of a NpT hybrid Monte Carlo simulation. For CH4, CO2, and their mixtures, some calculations beyond the Henry regime have also been performed using hybrid Monte Carlo simulations in the osmotic ensemble, allowing both swelling and solubility determination. An ideal mixing behavior is observed, with identical solubility coefficients in the mixtures and in pure gas systems.
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.
Rabosky, Daniel L
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.
Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees
Rabosky, Daniel L.
2014-01-01
A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression
Weiss, Brandi A.; Dardick, William
2015-01-01
This article introduces an entropy-based measure of data–model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data–model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data–model fit to assess how well logistic regression models classify cases into observed categories. PMID:29795897
An Entropy-Based Measure for Assessing Fuzziness in Logistic Regression.
Weiss, Brandi A; Dardick, William
2016-12-01
This article introduces an entropy-based measure of data-model fit that can be used to assess the quality of logistic regression models. Entropy has previously been used in mixture-modeling to quantify how well individuals are classified into latent classes. The current study proposes the use of entropy for logistic regression models to quantify the quality of classification and separation of group membership. Entropy complements preexisting measures of data-model fit and provides unique information not contained in other measures. Hypothetical data scenarios, an applied example, and Monte Carlo simulation results are used to demonstrate the application of entropy in logistic regression. Entropy should be used in conjunction with other measures of data-model fit to assess how well logistic regression models classify cases into observed categories.
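A sketch of how such an entropy-based measure can be computed from fitted logistic-regression probabilities follows; the rescaling to [0, 1] is one common convention and is not necessarily the authors' exact formula.

```python
# Entropy-style classification-quality measure for a logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
p = LogisticRegression().fit(X, y).predict_proba(X)

# Average Shannon entropy of predicted class memberships, rescaled so
# 1 = perfectly crisp classification and 0 = maximally fuzzy.
h = -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=1).mean()
entropy_fit = 1.0 - h / np.log(p.shape[1])
print("entropy-based fit:", round(float(entropy_fit), 3))
```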
Bayesian mixture analysis for metagenomic community profiling.
Morfopoulou, Sofia; Plagnol, Vincent
2015-09-15
Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Monte Carlo Markov chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix sofia.morfopoulou.10@ucl.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Breakdown and Limit of Continuum Diffusion Velocity for Binary Gas Mixtures from Direct Simulation
NASA Astrophysics Data System (ADS)
Martin, Robert Scott; Najmabadi, Farrokh
2011-05-01
This work investigates the breakdown of the continuum relations for diffusion velocity in inert binary gas mixtures. Values of the relative diffusion velocities of the components of a gas mixture may be calculated using Chapman-Enskog theory and arise not only from concentration gradients, but also from pressure and temperature gradients in the flow, as described by Hirschfelder. Because Chapman-Enskog theory employs a linear perturbation around equilibrium, it is expected to break down when the velocity distribution deviates significantly from equilibrium. Such breakdown of the overall flow has long been an area of interest in rarefied gas dynamics. By comparing the continuum values to results from Bird's DS2V Monte Carlo code, we propose a new limit on the continuum approach specific to binary gases. To remove the confounding influence of an inconsistent molecular model, we also present the application of the variable soft sphere (VSS) model used in DS2V to the continuum diffusion-velocity calculation. Fitting sample asymptotic curves to the breakdown, we propose a limit, Vmax, that is a fraction of an analytically derived limit set by the kinetic temperature of the mixture. With an expected deviation of only 2% between the physical values and continuum calculations within ±Vmax/4, we suggest this as a conservative estimate of the range of applicability of the continuum theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.
In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric-isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains of finite size through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales that are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.
Troitzsch, Raphael Z.; Tulip, Paul R.; Crain, Jason; Martyna, Glenn J.
2008-01-01
Aqueous proline solutions are deceptively simple as they can take on complex roles such as protein chaperones, cryoprotectants, and hydrotropic agents in biological processes. Here, a molecular level picture of proline/water mixtures is developed. Car-Parrinello ab initio molecular dynamics (CPAIMD) simulations of aqueous proline amino acid at the B-LYP level of theory, performed using IBM's Blue Gene/L supercomputer and massively parallel software, reveal hydrogen-bonding propensities that are at odds with the predictions of the CHARMM22 empirical force field but are in better agreement with results of recent neutron diffraction experiments. In general, the CPAIMD (B-LYP) simulations predict a simplified structural model of proline/water mixtures consisting of fewer distinct local motifs. Comparisons of simulation results to experiment are made by direct evaluation of the neutron static structure factor S(Q) from CPAIMD (B-LYP) trajectories as well as to the results of the empirical potential structure refinement reverse Monte Carlo procedure applied to the neutron data. PMID:18790850
Troitzsch, Raphael Z; Tulip, Paul R; Crain, Jason; Martyna, Glenn J
2008-12-01
Aqueous proline solutions are deceptively simple as they can take on complex roles such as protein chaperones, cryoprotectants, and hydrotropic agents in biological processes. Here, a molecular level picture of proline/water mixtures is developed. Car-Parrinello ab initio molecular dynamics (CPAIMD) simulations of aqueous proline amino acid at the B-LYP level of theory, performed using IBM's Blue Gene/L supercomputer and massively parallel software, reveal hydrogen-bonding propensities that are at odds with the predictions of the CHARMM22 empirical force field but are in better agreement with results of recent neutron diffraction experiments. In general, the CPAIMD (B-LYP) simulations predict a simplified structural model of proline/water mixtures consisting of fewer distinct local motifs. Comparisons of simulation results to experiment are made by direct evaluation of the neutron static structure factor S(Q) from CPAIMD (B-LYP) trajectories as well as to the results of the empirical potential structure refinement reverse Monte Carlo procedure applied to the neutron data.
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model often used for volatility estimation in empirical finance. We apply the Bayesian inference performed by the Markov Chain Monte Carlo method to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering" also observed in the real financial markets. Using volatility determined by the GARCH model we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
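The estimation-plus-MDH-check pipeline can be sketched with a compact random-walk Metropolis sampler for GARCH(1,1); synthetic returns stand in for the spin-market series, and the proposal scale and flat constrained prior are assumptions rather than the paper's exact sampler settings.

```python
# Random-walk Metropolis for GARCH(1,1), then check that standardized
# returns look approximately N(0,1), as the MDH suggests.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic GARCH(1,1) returns with true (omega, alpha, beta).
T = 1500
w0, a0, b0 = 0.05, 0.1, 0.85
r, h = np.empty(T), 1.0
for t in range(T):
    h = w0 + a0 * (r[t - 1] ** 2 if t else 0.0) + b0 * h
    r[t] = np.sqrt(h) * rng.normal()

def loglik(theta):
    w, a, b = theta
    if min(w, a, b) <= 0 or a + b >= 1:      # positivity/stationarity prior
        return -np.inf
    h, ll = r.var(), 0.0
    for t in range(T):
        ll -= 0.5 * (np.log(2 * np.pi * h) + r[t] ** 2 / h)
        h = w + a * r[t] ** 2 + b * h
    return ll

theta = np.array([0.1, 0.1, 0.8])
ll = loglik(theta)
for _ in range(2000):                        # Metropolis iterations
    prop = theta + rng.normal(0, 0.01, 3)
    llp = loglik(prop)
    if np.log(rng.random()) < llp - ll:
        theta, ll = prop, llp

# Volatility path under the sampled parameters; standardize the returns.
w, a, b = theta
h, vol = r.var(), np.empty(T)
for t in range(T):
    vol[t] = np.sqrt(h)
    h = w + a * r[t] ** 2 + b * h
z = r / vol
print("std of standardized returns:", float(z.std()))
```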
Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.
Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry
2016-09-01
Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway and it is shown that the multi-compartment model fits better the experimental data. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Jawad, Enas A.
2018-05-01
In this paper, a Monte Carlo simulation program has been used to calculate the electron energy distribution function (EEDF) and electron transport parameters for gas mixtures of trifluoroiodomethane (CF3I), an environment-friendly gas, with the noble gases argon, helium, krypton, neon, and xenon. The electron transport parameters are assessed in the range of E/N (E is the electric field and N is the number density of background gas molecules) between 100 and 2000 Td (1 Townsend = 10-17 V cm2) at room temperature. These parameters are the electron mean energy (ε), the density-normalized longitudinal diffusion coefficient (NDL), and the density-normalized mobility (μN). The impact of CF3I in the noble-gas mixtures is strongly apparent in the values of all three parameters. The calculated results agree well with experimental results.
Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee
2016-06-01
Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, often residual variances are disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted on the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
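The manual hierarchy can be emulated by recursively splitting samples with a two-component mixture; in the sketch below, sklearn's EM-fitted Gaussian mixture stands in for the paper's Bayesian finite mixture estimated by Hamiltonian Monte Carlo, and the synthetic data and stopping rule are illustrative.

```python
# Recursive two-way mixture partitioning of multivariate samples.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(m, 0.5, (200, 3)) for m in (0, 3, 6, 9)])

def split(X, idx, depth, max_depth=2):
    if depth == max_depth or len(idx) < 20:
        print("  " * depth + f"leaf cluster of {len(idx)} samples")
        return
    # Two-component mixture partitions this level into two subclusters.
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X[idx])
    for k in (0, 1):
        split(X, idx[labels == k], depth + 1)

split(X, np.arange(len(X)), 0)
```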
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Poludniowski, Gavin G; Evans, Philip M
2007-06-01
The penetration characteristics of electron beams into x-ray targets are investigated for incident electron kinetic energies in the range 50-150 keV. The frequency densities of electrons penetrating to a depth x in a target, with a fraction of initial kinetic energy, u, are calculated using Monte Carlo methods for beam energies of 50, 80, 100, 120 and 150 keV in a tungsten target. The frequency densities for 100 keV electrons in Al, Mo and Re targets are also calculated. A mixture of simple modeling with equations and interpolation from data is used to generalize the calculations in tungsten. Where possible, parameters derived from the Monte Carlo data are compared to experimental measurements. Previous electron transport approximations in the semiempirical models of other authors are discussed and related to this work. In particular, the crudity of the use of the Thomson-Whiddington law to describe electron penetration and energy loss is highlighted. The results presented here may be used towards calculating the target self-attenuation correction for bremsstrahlung photons emitted within a tungsten target.
Boncina, M; Rescic, J; Kalyuzhnyi, Yu V; Vlachy, V
2007-07-21
The depletion interaction between proteins caused by addition of either uncharged or partially charged oligomers was studied using the canonical Monte Carlo simulation technique and the integral equation theory. A protein molecule was modeled in two different ways: either as (i) a hard sphere of diameter 30.0 Å with net charge 0 or +5, or (ii) a hard sphere of diameter 45.4 Å with discrete charges (depending on the pH of solution). The oligomers were pictured as tangentially jointed, uncharged, or partially charged, hard spheres. The ions of a simple electrolyte present in solution were represented by charged hard spheres distributed in the dielectric continuum. In this study we were particularly interested in changes of the protein-protein pair-distribution function, caused by addition of the oligomer component. In agreement with previous studies we found that addition of a nonadsorbing oligomer reduces the phase stability of solution, which is reflected in the shape of the protein-protein pair-distribution function. The value of this function in protein-protein contact increases with increasing oligomer concentration, and is larger for charged oligomers. The range of the depletion interaction and its strength also depend on the length (number of monomer units) of the oligomer chain. The integral equation theory, based on the Wertheim Ornstein-Zernike approach applied in this study, was found to be in fair agreement with Monte Carlo results only for very short oligomers. The computer simulations for a model mimicking the lysozyme molecule (ii) are in qualitative agreement with small-angle neutron experiments for lysozyme-dextran mixtures.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
Flue gas adsorption by single-wall carbon nanotubes: A Monte Carlo study.
Romero-Hermida, M I; Romero-Enrique, J M; Morales-Flórez, V; Esquivias, L
2016-08-21
Adsorption of flue gases by single-wall carbon nanotubes (SWCNT) has been studied by means of Monte Carlo simulations. The flue gas is modeled as a ternary mixture of N2, CO2, and O2, emulating realistic compositions of the emissions from power plants. The adsorbed flue gas is in equilibrium with a bulk gas characterized by temperature T, pressure p, and mixture composition. We have considered different SWCNTs with different chiralities and diameters in a range between 7 and 20 Å. Our results show that the CO2 adsorption properties depend mainly on the bulk flue gas thermodynamic conditions and the SWCNT diameter. Narrow SWCNTs with diameter around 7 Å show high CO2 adsorption capacity and selectivity, but they decrease abruptly as the SWCNT diameter is increased. For wide SWCNT, CO2 adsorption capacity and selectivity, much smaller in value than for the narrow case, decrease mildly with the SWCNT diameter. In the intermediate range of SWCNT diameters, the CO2 adsorption properties may show a peculiar behavior, which depend strongly on the bulk flue gas conditions. Thus, for high bulk CO2 concentrations and low temperatures, the CO2 adsorption capacity remains high in a wide range of SWCNT diameters, although the corresponding selectivity is moderate. We correlate these findings with the microscopic structure of the adsorbed gas inside the SWCNTs.
NASA Astrophysics Data System (ADS)
Miftahurrohmah, Brina; Iriawan, Nur; Fithriasari, Kartika
2017-06-01
Stocks are financial instruments traded in the capital market that carry a high level of risk: investors must accept uncertainty about the returns they will receive in the future. The higher the risk to be faced, the higher the return that can be gained, so the risk needs to be measured. Value at Risk (VaR), the most popular risk measurement method, frequently fails when the pattern of returns is not unimodal Normal. The calculation of risk using the VaR method with the Normal Mixture Autoregressive (MNAR) approach has been considered previously. This paper proposes a VaR method coupled with the Mixture Laplace Autoregressive (MLAR) model, implemented for analyzing the returns of the three largest-capitalization Islamic stocks in the JII, namely PT. Astra International Tbk (ASII), PT. Telekomunikasi Indonesia Tbk (TLMK), and PT. Unilever Indonesia Tbk (UNVR). Parameter estimation is performed by employing a Bayesian Markov Chain Monte Carlo (MCMC) approach.
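Once a mixture model for returns is in hand, VaR follows by inverting the mixture CDF at the chosen tail level; the sketch below does this for a static two-component Laplace mixture with invented parameters, standing in for the conditional mixture an MLAR fit would supply.

```python
# Value at Risk from a two-component Laplace mixture of daily returns.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import laplace

weights = [0.7, 0.3]
locs, scales = [0.0005, -0.002], [0.005, 0.02]   # assumed component parameters

def mixture_cdf(x):
    return sum(w * laplace.cdf(x, mu, s)
               for w, mu, s in zip(weights, locs, scales))

alpha = 0.05                                     # 95% VaR tail level
var95 = -brentq(lambda x: mixture_cdf(x) - alpha, -1.0, 1.0)
print("95% one-day VaR (fraction of value):", round(var95, 4))
```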
Fast mix table construction for material discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, S. R.
2013-07-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a 'mix table,' which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels x log number of mixtures) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation. (authors)
Fast Mix Table Construction for Material Discretization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Seth R
2013-01-01
An effective hybrid Monte Carlo-deterministic implementation typically requires the approximation of a continuous geometry description with a discretized piecewise-constant material field. The inherent geometry discretization error can be reduced somewhat by using material mixing, where multiple materials inside a discrete mesh voxel are homogenized. Material mixing requires the construction of a "mix table," which stores the volume fractions in every mixture so that multiple voxels with similar compositions can reference the same mixture. Mix table construction is a potentially expensive serial operation for large problems with many materials and voxels. We formulate an efficient algorithm to construct a sparse mix table in O(number of voxels × log(number of mixtures)) time. The new algorithm is implemented in ADVANTG and used to discretize continuous geometries onto a structured Cartesian grid. When applied to an end-of-life MCNP model of the High Flux Isotope Reactor with 270 distinct materials, the new method improves the material mixing time by a factor of 100 compared to a naive mix table implementation.
Quantifying Ab Initio Equation of State Errors for Hydrogen-Helium Mixtures
NASA Astrophysics Data System (ADS)
Clay, Raymond; Morales, Miguel
2017-06-01
In order to produce predictive models of Jovian planets, an accurate equation of state for hydrogen-helium mixtures is needed over pressure and temperature ranges spanning multiple orders of magnitude. While extensive theoretical work has been done in this area, previous controversies regarding the equation of state of pure hydrogen have demonstrated exceptional sensitivity to approximations commonly employed in ab initio calculations. To this end, we present the results of our quantum Monte Carlo based benchmarking studies for several major classes of density functionals. Additionally, we expand upon our published results by quantifying how ionic finite-size effects and density-functional errors translate into errors in the equation of state. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Adsorption of HMF from water/DMSO solutions onto hydrophobic zeolites: experiment and simulation.
Xiong, Ruichang; León, Marta; Nikolakis, Vladimiros; Sandler, Stanley I; Vlachos, Dionisios G
2014-01-01
The adsorption of 5-hydroxymethylfurfural (HMF), DMSO, and water from binary and ternary mixtures in hydrophobic silicalite-1 and dealuminated Y (DAY) zeolites at ambient conditions was studied by experiments and molecular modeling. HMF and DMSO adsorption isotherms were measured and compared to those calculated using a combination of grand canonical Monte Carlo and expanded ensemble (GCMC-EE) simulations. A method based on GCMC-EE simulations for dilute solutions combined with the Redlich-Kister (RK) expansion (GCMC-EE-RK) is introduced to calculate the isotherms over a wide range of concentrations. The simulations, using literature force fields, are in reasonable agreement with experimental data. In HMF/water binary mixtures, large-pore hydrophobic zeolites are much more effective for HMF adsorption but less selective, because their large pores also permit water adsorption driven by H2O-HMF attraction. In ternary HMF/DMSO/water mixtures, HMF loading decreases with increasing DMSO fraction, rendering the separation of HMF from water/DMSO mixtures by adsorption difficult. The ratio of the energetic interaction in the zeolite to the solvation free energy is a key factor in controlling separation from liquid mixtures. Overall, our findings could have an impact on the separation and catalytic conversion of HMF and the rational design of nanoporous adsorbents for liquid-phase separations in biomass processing. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
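For readers unfamiliar with the Redlich-Kister form used above, the following sketch evaluates a binary RK expansion for an excess property; the coefficients are placeholders rather than values fitted to the GCMC-EE data:

```python
import numpy as np

def redlich_kister(x1, coeffs):
    """Excess property G_E(x1) = x1*(1-x1) * sum_k A_k * (2*x1 - 1)**k."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (2.0 * x1 - 1.0) ** k for k, a in enumerate(coeffs))

# Placeholder coefficients A_k (these would be fitted to simulation data)
A = [1.2, -0.4, 0.1]
for x1 in np.linspace(0.0, 1.0, 6):
    print(f"x1 = {x1:.1f}  G_E = {redlich_kister(x1, A): .4f}")
```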
A smooth mixture of Tobits model for healthcare expenditure.
Keane, Michael; Stavrunova, Olena
2011-09-01
This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. Here we explain the issues related to estimating Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions whose component parameters are governed by an m-state Markov chain with an unknown transition probability matrix. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated, and the 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
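A compact log-space Viterbi decoder for a Poisson hidden Markov model is sketched below; the transition matrix and initial distribution are hypothetical stand-ins, with only the state means echoing the 'Low/Moderate/High' values quoted above:

```python
import numpy as np
from scipy.stats import poisson

def viterbi_poisson(counts, log_pi, log_A, lambdas):
    """Most likely state sequence for a Poisson HMM (log-space Viterbi)."""
    T, m = len(counts), len(lambdas)
    log_emis = poisson.logpmf(np.asarray(counts)[:, None], lambdas)  # T x m
    delta = log_pi + log_emis[0]
    back = np.zeros((T, m), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A      # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Hypothetical 3-state chain echoing the 'Low/Moderate/High' means
lambdas = np.array([1.4, 6.6, 20.2])
A = np.log(np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]))
pi = np.log(np.array([0.5, 0.3, 0.2]))
print(viterbi_poisson([0, 2, 7, 25, 18, 5, 1], pi, A, lambdas))
```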
A model for the accurate computation of the lateral scattering of protons in water
NASA Astrophysics Data System (ADS)
Bellinzona, E. V.; Ciocca, M.; Embriaco, A.; Ferrari, A.; Fontana, A.; Mairani, A.; Parodi, K.; Rotondi, A.; Sala, P.; Tessonnier, T.
2016-02-01
A pencil beam model for the calculation of the lateral scattering in water of protons of any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
A model for the accurate computation of the lateral scattering of protons in water.
Bellinzona, E V; Ciocca, M; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-02-21
A pencil beam model for the calculation of the lateral scattering in water of protons of any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function, adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
Quantum Phase Transitions in the Bose Hubbard Model and in a Bose-Fermi Mixture
NASA Astrophysics Data System (ADS)
Duchon, Eric Nicholas
Ultracold atomic gases may be the ultimate quantum simulator. These isolated systems have the lowest temperatures in the observable universe, and their properties and interactions can be precisely and accurately tuned across a full spectrum of behaviors, from few-body physics to highly correlated many-body effects. The ability to impose potentials on and tune interactions within ultracold gases to mimic complex systems means they could become a theorist's playground. One of their great strengths, however, is also one of the largest obstacles to this dream: isolation. This thesis touches on both of these themes. First, methods to characterize phases and quantum critical points, and to construct finite temperature phase diagrams using experimentally accessible observables in the Bose Hubbard model are discussed. Then, the transition from a weakly to a strongly interacting Bose-Fermi mixture in the continuum is analyzed using zero temperature numerical techniques. Real materials can be emulated by ultracold atomic gases loaded into optical lattice potentials. We discuss the characteristics of a single boson species trapped in an optical lattice (described by the Bose Hubbard model) and the hallmarks of the quantum critical region that separates the superfluid and the Mott insulator ground states. We propose a method to map the quantum critical region using the single, experimentally accessible, local quantity R, the ratio of compressibility to local number fluctuations. The procedure to map a phase diagram with R is easily generalized to inhomogeneous systems and generic many-body Hamiltonians. We illustrate it here using quantum Monte Carlo simulations of the 2D Bose Hubbard model. Secondly, we investigate the transition from a degenerate Fermi gas weakly coupled to a Bose-Einstein condensate to the strong coupling limit of composite boson-fermion molecules. We propose a variational wave function to investigate the ground state properties of such a Bose-Fermi mixture with equal population, as a function of increasing attraction between bosons and fermions. The variational wave function captures the weak and the strong coupling limits, and at intermediate coupling we make two predictions using zero temperature quantum Monte Carlo methods: (I) a complete destruction of the atomic Fermi surface and emergence of a molecular Fermi sea that coexists with a remnant of the Bose-Einstein condensate, and (II) evidence for enhanced short-ranged fermion-fermion correlations mediated by bosons.
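The ratio R described above is straightforward to accumulate from simulation output. The sketch below shows one way to estimate it from synthetic samples of site occupations; the compressibility proxy used here (total-number fluctuations per site) is an assumption for illustration, not necessarily the thesis's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def characteristic_ratio(occupations):
    """R: ratio of a compressibility estimator to local number fluctuations.

    occupations: (samples, sites) array of site occupations n_i, as would
    be accumulated from quantum Monte Carlo configurations of the lattice.
    """
    n_total = occupations.sum(axis=1)
    kappa_proxy = n_total.var() / occupations.shape[1]  # total-N fluctuations per site
    local = occupations.var(axis=0).mean()              # <n_i^2> - <n_i>^2, site-averaged
    return kappa_proxy / local

# Synthetic stand-in for QMC output (independent sites, so R ~ 1 here; in
# the real model, inter-site correlations make R a sensitive probe of the
# quantum critical region).
samples = rng.poisson(1.0, size=(5000, 64))
print(characteristic_ratio(samples))
```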
Zhao, Yongliang; Feng, Yanhui; Zhang, Xinxin
2016-09-06
The adsorption and diffusion of the CO2-CH4 mixture in coal and the underlying mechanisms significantly affect the design and operation of any CO2-enhanced coal-bed methane recovery (CO2-ECBM) project. In this study, a bituminous coal model was constructed based on the Wiser molecular model and its ultramicroporous parameters were evaluated; molecular simulations were established through Grand Canonical Monte Carlo (GCMC) and Molecular Dynamics (MD) methods to study the effects of temperature, pressure, and species bulk mole fraction on the adsorption isotherms, adsorption selectivity, three distinct diffusion coefficients, and diffusivity selectivity of the binary mixture in the coal ultramicropores. It turns out that the absolute adsorption amount of each species in the mixture decreases as temperature increases, but increases as its own bulk mole fraction increases. The self-, corrected, and transport diffusion coefficients of pure CO2 and pure CH4 all increase as temperature and/or their own bulk mole fractions increase. Compared to CH4, the adsorption and diffusion of CO2 are preferential in the coal ultramicropores. Adsorption selectivity and diffusivity selectivity were simultaneously employed to reveal that the optimal injection depth for CO2-ECBM is 800-1000 m at 308-323 K temperature and 8.0-10.0 MPa.
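Adsorption selectivity in such binary GCMC studies is conventionally defined from the adsorbed- and bulk-phase mole fractions. A minimal sketch with illustrative numbers (not values from this study):

```python
def adsorption_selectivity(x_co2, x_ch4, y_co2, y_ch4):
    """S(CO2/CH4) = (x_CO2 / x_CH4) / (y_CO2 / y_CH4),
    with x the adsorbed-phase and y the bulk-phase mole fractions."""
    return (x_co2 / x_ch4) / (y_co2 / y_ch4)

# Illustrative loadings from a hypothetical GCMC run at fixed T, p
x_co2, x_ch4 = 0.82, 0.18   # adsorbed-phase mole fractions
y_co2, y_ch4 = 0.50, 0.50   # bulk mole fractions
print(adsorption_selectivity(x_co2, x_ch4, y_co2, y_ch4))  # -> ~4.56
```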
Rahimi, Mahshid; Singh, Jayant K; Müller-Plathe, Florian
2016-02-07
The adsorption and separation behavior of SO2-CO2, SO2-N2 and CO2-N2 binary mixtures in bundles of aligned double-walled carbon nanotubes is investigated using the grand-canonical Monte Carlo (GCMC) method and ideal adsorbed solution theory. Simulations were performed at 303 K with nanotubes of 3 nm inner diameter and various intertube distances. The results showed that the packing with an intertube distance d = 0 has the highest selectivity for the SO2-N2 and CO2-N2 binary mixtures. For the SO2-CO2 case, the optimal intertube distance for maximum selectivity depends on the applied pressure: at p < 0.8 bar, d = 0 shows the highest selectivity, while for 0.8 bar < p < 2.5 bar the highest selectivity belongs to d = 0.5 nm. Ideal adsorbed solution theory cannot predict the adsorption of the binary systems containing SO2, especially when d = 0. As the intertube distance is increased, the ideal adsorbed solution theory based predictions become closer to those of GCMC simulations. Only in the case of CO2-N2 is ideal adsorbed solution theory everywhere in good agreement with simulations. In a ternary mixture of all three gases, the behavior of SO2 and CO2 remains similar to that in a SO2-CO2 binary mixture because of the weak interaction between N2 molecules and CNTs.
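As a sketch of how ideal adsorbed solution theory predictions like those above are generated, the snippet below solves binary IAST under the assumption of single-site Langmuir pure-component isotherms; the parameters are placeholders, not fitted nanotube isotherms:

```python
import numpy as np
from scipy.optimize import brentq

def iast_binary_langmuir(P, y1, qsat, b):
    """Adsorbed-phase mole fraction x1 from IAST with Langmuir isotherms.

    The spreading pressure of component i at its fictitious pure pressure
    P_i0 = P*y_i/x_i is pi_i = qsat_i * ln(1 + b_i * P_i0); IAST requires
    pi_1 == pi_2 together with x1 + x2 = 1.
    """
    y2 = 1.0 - y1
    def mismatch(x1):
        x2 = 1.0 - x1
        pi1 = qsat[0] * np.log(1.0 + b[0] * P * y1 / x1)
        pi2 = qsat[1] * np.log(1.0 + b[1] * P * y2 / x2)
        return pi1 - pi2
    return brentq(mismatch, 1e-9, 1.0 - 1e-9)

# Placeholder Langmuir parameters (mol/kg, 1/bar) for a CO2/N2-like pair
x1 = iast_binary_langmuir(P=1.0, y1=0.15, qsat=(5.0, 3.0), b=(2.0, 0.1))
print(f"adsorbed-phase CO2 fraction: {x1:.3f}")
```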
NASA Astrophysics Data System (ADS)
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-07-01
The study focuses on vibration-response-based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model-based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the continually evolving, due to blade rotation, inertial properties, as well as the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, with each one of them characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model-based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.
The Manhattan Frame Model-Manhattan World Inference in the Space of Surface Normals.
Straub, Julian; Freifeld, Oren; Rosman, Guy; Leonard, John J; Fisher, John W
2018-01-01
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches utilize these regularities via the restrictive, and rather local, Manhattan World (MW) assumption which posits that every plane is perpendicular to one of the axes of a single coordinate system. The aforementioned regularities are especially evident in the surface normal distribution of a scene where they manifest as orthogonally-coupled clusters. This motivates the introduction of the Manhattan-Frame (MF) model which captures the notion of an MW in the surface normals space, the unit sphere, and two probabilistic MF models over this space. First, for a single MF we propose novel real-time MAP inference algorithms, evaluate their performance and their use in drift-free rotation estimation. Second, to capture the complexity of real-world scenes at a global scale, we extend the MF model to a probabilistic mixture of Manhattan Frames (MMF). For MMF inference we propose a simple MAP inference algorithm and an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that let us infer the unknown number of mixture components. We demonstrate the versatility of the MMF model and inference algorithm across several scales of man-made environments.
The virial coefficients of hard hypersphere binary mixtures
NASA Astrophysics Data System (ADS)
Enciso, E.; Almarza, N. G.; Gonzalez, M. A.; Bermejo, F. J.
The third, fourth and fifth virial coefficients of hard hypersphere binary mixtures with dimensionality d = 4, 5 have been calculated for size ratios R ≥ 0.1, where R ≡ σ22/σ11 and σii is the diameter of component i. The composition-independent partial virial coefficients have been evaluated by Monte Carlo integration of the corresponding Mayer modified star diagrams. The results are compared with the predictions of Santos, A., Yuste, S. B., and López de Haro, M., 1999, Molec. Phys., 96, 1 for the equation of state of a multicomponent mixture of hard hyperspheres, and the good agreement gives strong support to the validity of that recipe.
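The Monte Carlo integration strategy is easiest to see in the simplest case. The sketch below estimates the second virial coefficient of hard spheres in d = 3 from the Mayer f-function (the paper's calculations involve higher-order star diagrams in d = 4, 5, but the sampling idea is the same):

```python
import numpy as np

rng = np.random.default_rng(2)

def b2_hard_spheres(sigma=1.0, n_samples=200_000, box=2.0):
    """Monte Carlo estimate of B2 = -(1/2) * integral of the Mayer f-function.

    For hard spheres f(r) = -1 for r < sigma and 0 otherwise, so B2 reduces
    to half the excluded volume, (2*pi/3)*sigma**3. The second particle is
    sampled uniformly in a cube of side 2*box around the first.
    """
    pts = rng.uniform(-box, box, size=(n_samples, 3))
    r = np.linalg.norm(pts, axis=1)
    f = np.where(r < sigma, -1.0, 0.0)
    volume = (2.0 * box) ** 3
    return -0.5 * volume * f.mean()

print(b2_hard_spheres())        # ~ 2.094 for sigma = 1
print(2.0 * np.pi / 3.0)        # exact value for comparison
```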
Structure of ternary additive hard-sphere fluid mixtures.
Malijevský, Alexander; Malijevský, Anatol; Yuste, Santos B; Santos, Andrés; López de Haro, Mariano
2002-12-01
Monte Carlo simulations on the structural properties of ternary fluid mixtures of additive hard spheres are reported. The results are compared with those obtained from a recent analytical approximation [S. B. Yuste, A. Santos, and M. López de Haro, J. Chem. Phys. 108, 3683 (1998)] to the radial distribution functions of hard-sphere mixtures and with the results derived from the solution of the Ornstein-Zernike integral equation with both the Martynov-Sarkisov and the Percus-Yevick closures. Very good agreement between the results of the first two approaches and simulation is observed, with a noticeable improvement over the Percus-Yevick predictions especially near contact.
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A fully quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
Response properties in the adsorption-desorption model on a triangular lattice
NASA Astrophysics Data System (ADS)
Šćepanović, J. R.; Stojiljković, D.; Jakšić, Z. M.; Budinski-Petković, Lj.; Vrhovac, S. B.
2016-06-01
The out-of-equilibrium dynamical processes during the reversible random sequential adsorption (RSA) of objects of various shapes on a two-dimensional triangular lattice are studied numerically by means of Monte Carlo simulations. We focused on the influence of the order of the symmetry axis of the shape on the response of the reversible RSA model to sudden perturbations of the desorption probability Pd. We provide a detailed discussion of the significance of collective events for governing the time coverage behavior of shapes with different rotational symmetries. We calculate the two-time density-density correlation function C(t, tw) for various waiting times tw and show that longer memory of the initial state persists for the more symmetrical shapes. Our model displays nonequilibrium dynamical effects such as aging. We find that the correlation function C(t, tw) for all objects scales as a function of the single variable ln(tw)/ln(t). We also study the short-term memory effects in two-component mixtures of extended objects and give a detailed analysis of the contribution to the densification kinetics coming from each mixture component. We observe the weakening of correlation features for the deposition processes in multicomponent systems.
Pycnonuclear reaction rates for binary ionic mixtures
NASA Technical Reports Server (NTRS)
Ichimaru, S.; Ogata, S.; Van Horn, H. M.
1992-01-01
Through a combination of compositional scaling arguments and examinations of Monte Carlo simulation results for the interparticle separations in binary-ionic mixture (BIM) solids, we have derived parameterized expressions for the BIM pycnonuclear rates as generalizations of those in one-component solids obtained previously by Salpeter and Van Horn and by Ogata et al. We have thereby discovered a catalyzing effect of the heavier elements, which enhances the rates of reactions among the lighter elements when the charge ratio exceeds a critical value of approximately 2.3.
Flue gas adsorption by single-wall carbon nanotubes: A Monte Carlo study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero-Hermida, M. I.; Departamento de Física Condensada, Universidad de Sevilla, Av. Reina Mercedes s/n, 41012 Sevilla; Romero-Enrique, J. M.
Adsorption of flue gases by single-wall carbon nanotubes (SWCNTs) has been studied by means of Monte Carlo simulations. The flue gas is modeled as a ternary mixture of N2, CO2, and O2, emulating realistic compositions of the emissions from power plants. The adsorbed flue gas is in equilibrium with a bulk gas characterized by temperature T, pressure p, and mixture composition. We have considered SWCNTs with different chiralities and diameters in a range between 7 and 20 Å. Our results show that the CO2 adsorption properties depend mainly on the bulk flue gas thermodynamic conditions and the SWCNT diameter. Narrow SWCNTs with diameters around 7 Å show high CO2 adsorption capacity and selectivity, but both decrease abruptly as the SWCNT diameter is increased. For wide SWCNTs, the CO2 adsorption capacity and selectivity, much smaller in value than for the narrow case, decrease mildly with the SWCNT diameter. In the intermediate range of SWCNT diameters, the CO2 adsorption properties may show a peculiar behavior that depends strongly on the bulk flue gas conditions. Thus, for high bulk CO2 concentrations and low temperatures, the CO2 adsorption capacity remains high over a wide range of SWCNT diameters, although the corresponding selectivity is moderate. We correlate these findings with the microscopic structure of the adsorbed gas inside the SWCNTs.
CLUSTERING SOUTH AFRICAN HOUSEHOLDS BASED ON THEIR ASSET STATUS USING LATENT VARIABLE MODELS
McParland, Damien; Gormley, Isobel Claire; McCormick, Tyler H.; Clark, Samuel J.; Kabudula, Chodziwadziwa Whiteson; Collinson, Mark A.
2014-01-01
The Agincourt Health and Demographic Surveillance System has since 2001 conducted a biannual household asset survey in order to quantify household socio-economic status (SES) in a rural population living in northeast South Africa. The survey contains binary, ordinal and nominal items. In the absence of income or expenditure data, the SES landscape in the study population is explored and described by clustering the households into homogeneous groups based on their asset status. A model-based approach to clustering the Agincourt households, based on latent variable models, is proposed. In the case of modeling binary or ordinal items, item response theory models are employed. For nominal survey items, a factor analysis model, similar in nature to a multinomial probit model, is used. Both model types have an underlying latent variable structure—this similarity is exploited and the models are combined to produce a hybrid model capable of handling mixed data types. Further, a mixture of the hybrid models is considered to provide clustering capabilities within the context of mixed binary, ordinal and nominal response data. The proposed model is termed a mixture of factor analyzers for mixed data (MFA-MD). The MFA-MD model is applied to the survey data to cluster the Agincourt households into homogeneous groups. The model is estimated within the Bayesian paradigm, using a Markov chain Monte Carlo algorithm. Intuitive groupings result, providing insight to the different socio-economic strata within the Agincourt region. PMID:25485026
Tang, Yongqiang
2018-04-30
The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.
Favard, Cyril; Wenger, Jérôme; Lenne, Pierre-François; Rigneault, Hervé
2011-03-02
Many efforts have been undertaken over the last few decades to characterize the diffusion process in model and cellular lipid membranes. One of the techniques developed for this purpose, fluorescence correlation spectroscopy (FCS), has proved to be a very efficient approach, especially if the analysis is extended to measurements on different spatial scales (referred to as FCS diffusion laws). In this work, we examine the relevance of FCS diffusion laws for probing the behavior of a pure lipid and a lipid mixture at temperatures below, within and above the phase transitions, both experimentally and numerically. The accuracy of the microscopic description of the lipid mixtures found here extends previous work to a more complex model in which the geometry is unknown and the molecular motion is driven only by the thermodynamic parameters of the system itself. For multilamellar vesicles of both pure lipid and lipid mixtures, the FCS diffusion laws recorded at different temperatures exhibit large deviations from pure Brownian motion and reveal the existence of nanodomains. The variation of the mean size of these domains with temperature is in perfect correlation with the enthalpy fluctuation. This study highlights the advantages of using FCS diffusion laws in complex lipid systems to describe their temporal and spatial structure. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Peterson, L. E.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)
1999-01-01
Estimating uncertainty in lifetime cancer risk for human exposure to space radiation is a unique challenge. Conventional risk assessment with low-linear-energy-transfer (LET)-based risk from Japanese atomic bomb survivor studies may be inappropriate for relativistic protons and nuclei in space due to track structure effects. This paper develops a Monte Carlo mixture model (MCMM) for transferring additive, National Institutes of Health multiplicative, and multiplicative excess cancer incidence risks based on Japanese atomic bomb survivor data to determine excess incidence risk for various US astronaut exposure profiles. The MCMM serves as an anchor point for future risk projection methods involving biophysical models of DNA damage from space radiation. Lifetime incidence risks of radiation-induced cancer for the MCMM based on low-LET Japanese data for nonleukemia (all cancers except leukemia) were 2.77 (90% confidence limit, 0.75-11.34) for males exposed to 1 Sv at age 45 and 2.20 (90% confidence limit, 0.59-10.12) for males exposed at age 55. For females, mixture model risks for nonleukemia exposed separately to 1 Sv at ages of 45 and 55 were 2.98 (90% confidence limit, 0.90-11.70) and 2.44 (90% confidence limit, 0.70-10.30), respectively. Risks for high-LET 200 MeV protons (LET=0.45 keV/micrometer), 1 MeV alpha-particles (LET=100 keV/micrometer), and 600 MeV iron particles (LET=180 keV/micrometer) were scored on a per particle basis by determining the particle fluence required for an average of one particle per cell nucleus of area 100 micrometer(2). Lifetime risk per proton was 2.68x10(-2)% (90% confidence limit, 0.79x10(-3)%-0.514x10(-2)%). For alpha-particles, lifetime risk was 14.2% (90% confidence limit, 2.5%-31.2%). In turn, lifetime risk per iron particle was 23.7% (90% confidence limit, 4.5%-53.0%). Uncertainty in the DDREF for high-LET particles may be less than that for low-LET radiation because typically there is very little dose-rate dependence. Probability density functions for high-LET radiation quality and dose-rate may be preferable to conventional risk assessment approaches. Nuclear reactions and track structure effects in tissue may not be properly estimated by existing data using in vitro models for estimating RBEs. The method used here is being extended to estimate uncertainty in spacecraft shielding effectiveness in various space radiation environments.
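The mixture idea behind the MCMM can be caricatured in a few lines: each Monte Carlo draw selects one risk-transfer model and then samples that model's uncertain excess risk. All numbers below are invented for illustration and are not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_mixture_risk(n_draws=100_000):
    """Toy Monte Carlo mixture model: each draw picks a transfer model
    (additive / NIH multiplicative / multiplicative) with fixed weights,
    then samples that model's excess-risk estimate with lognormal
    uncertainty. All numbers are illustrative, not the paper's values."""
    weights = [1 / 3, 1 / 3, 1 / 3]
    medians = [1.8, 2.5, 3.2]     # hypothetical excess relative risks
    gsd = 1.6                     # hypothetical geometric std. deviation
    model = rng.choice(3, size=n_draws, p=weights)
    draws = np.exp(rng.normal(np.log(np.take(medians, model)), np.log(gsd)))
    lo, med, hi = np.percentile(draws, [5, 50, 95])
    return med, (lo, hi)

med, (lo, hi) = mc_mixture_risk()
print(f"median {med:.2f}, 90% interval ({lo:.2f}, {hi:.2f})")
```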
Molecular simulation of fluid mixtures in bulk and at solid-liquid interfaces
NASA Astrophysics Data System (ADS)
Kern, Jesse L.
The properties of a diverse range of mixture systems at interfaces are investigated using a variety of computational techniques. Molecular simulation is used to examine the thermodynamic, structural, and transport properties of heterogeneous systems of theoretical and practical importance. The study of binary hard-sphere mixtures at a hard wall demonstrates the high accuracy of recently developed classical density functionals. The study of aluminum-gallium solid-liquid heterogeneous interfaces predicts significant prefreezing of the liquid, which adopts the structure of the solid surface. The study of ethylene-expanded methanol within model silica mesopores shows the effect of confinement and surface functionalization on the mixture composition and transport inside the pores. From our molecular-dynamics study of binary hard-sphere fluid mixtures at a hard wall, we obtained high-precision calculations of the wall-fluid interfacial free energies, gamma. We have considered mixtures of varying diameter ratio, alpha = 0.7, 0.8, 0.9; mole fraction, x1 = 0.25, 0.50, 0.75; and packing fraction, eta < 0.50. Using Gibbs-Cahn integration, gamma is calculated from the system pressure, chemical potentials, and density profiles. Recent classical density-functional theory predictions agree very well with our results. Structural, thermodynamic, and transport properties of the aluminum-gallium solid-liquid interface at 368 K are obtained for the (100), (110), and (111) orientations using molecular dynamics. Density, potential energy, stress, and diffusion profiles perpendicular to the interface are calculated. The layers of Ga that form on the Al surface are strongly adsorbed and take the in-plane structure of the underlying crystal layers for all orientations, which results in significant compressive stress on the Ga atoms. Bulk methanol-ethylene mixtures under vapor-liquid equilibrium conditions have been characterized using Monte Carlo and molecular dynamics. The simulated vapor-liquid coexistence curves for the pure-component and binary mixtures agree well with experiment, as do the mixture volumetric expansion results. Using chemical potentials obtained from the bulk simulations, the filling of a number of model silica mesopores with ethylene and methanol is simulated. We report the compositions of the confined fluid mixtures over a range of pressures and for three degrees of nominal pore hydrophobicity.
Monte Carlo treatment of resonance-radiation imprisonment in fluorescent lamps—revisited
NASA Astrophysics Data System (ADS)
Anderson, James B.
2016-12-01
We reported in 1985 a Monte Carlo treatment of the imprisonment of the 253.7 nm resonance radiation from mercury in the mercury-argon discharge of fluorescent lamps. The calculated spectra of the emitted radiation were found to be in good agreement with measured spectra. The addition of the isotope mercury-196 to natural mercury was found, also in agreement with experiments, to increase lamp efficiency. In this paper we report the extension of the earlier work with increased accuracy, analysis of photon exit-time distributions, recycling of energy released in quenching, analysis of dynamic similarity for different lamp sizes, variation of Mrozowski transfer rates, prediction and analysis of the hyperfine ultra-violet spectra, and optimization of tailored mercury isotope mixtures for increased lamp efficiency. The spectra were found to be insensitive to the extent of quenching and recycling. The optimized mixtures were found to increase efficiencies by as much as 5% for several lamp configurations. Optimization without increasing the mercury-196 fraction was found to increase efficiencies by nearly 1% for several configurations.
Microphase Separation in Oil-Water Mixtures Containing Hydrophilic and Hydrophobic Ions
NASA Astrophysics Data System (ADS)
Tasios, Nikos; Samin, Sela; van Roij, René; Dijkstra, Marjolein
2017-11-01
We develop a lattice-based Monte Carlo simulation method for charged mixtures capable of treating dielectric heterogeneities. Using this method, we study oil-water mixtures containing an antagonistic salt, with hydrophilic cations and hydrophobic anions. Our simulations reveal several phases with a spatially modulated solvent composition, in which the ions partition between water-rich and water-poor regions according to their affinity. In addition to the recently observed lamellar phase, we find tubular and droplet phases, reminiscent of those found in block copolymers and surfactant systems. Interestingly, these structures stem from ion-mediated interactions, which allows for tuning of the phase behavior via the concentrations, the ionic properties, and the temperature.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
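The coupling of a quadratic response surface with Monte Carlo input dispersion can be sketched as follows; the surface coefficients and the 3% input uncertainty are assumed values for illustration, not the study's fitted DOE results:

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder quadratic response surface for regression rate r(x1, x2):
# r = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
beta = dict(b0=0.50, b1=0.30, b2=0.15, b11=-0.20, b22=-0.05, b12=0.10)

def regression_rate(x1, x2, b=beta):
    return (b["b0"] + b["b1"] * x1 + b["b2"] * x2
            + b["b11"] * x1**2 + b["b22"] * x2**2 + b["b12"] * x1 * x2)

def dispersed_rate(x1, x2, n=50_000, cov=0.03):
    """Monte Carlo dispersion: perturb the mixture fractions with Gaussian
    uncertainty (3% coefficient of variation, an assumed value) and report
    the response mean and a 90% interval."""
    r = regression_rate(rng.normal(x1, cov * x1, n), rng.normal(x2, cov * x2, n))
    return r.mean(), np.percentile(r, [5, 95])

mean, (lo, hi) = dispersed_rate(0.6, 0.4)
print(f"mean {mean:.3f}, 90% bounds [{lo:.3f}, {hi:.3f}]")
```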
Mountris, K A; Bert, J; Noailly, J; Aguilera, A Rodriguez; Valeri, A; Pradier, O; Schick, U; Promayon, E; Ballester, M A Gonzalez; Troccaz, J; Visvikis, D
2017-03-21
Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. Introducing the edema dynamics into the Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
NASA Astrophysics Data System (ADS)
Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.
2017-03-01
Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. Introducing the edema dynamics into the Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
Estimating statistical power for open-enrollment group treatment trials.
Morgan-Lopez, Antonio A; Saavedra, Lissette M; Hien, Denise A; Fals-Stewart, William
2011-01-01
Modeling turnover in group membership has been identified as a key barrier contributing to a disconnect between the manner in which behavioral treatment is conducted (open-enrollment groups) and the designs of substance abuse treatment trials (closed-enrollment groups, individual therapy). Latent class pattern mixture models (LCPMMs) are emerging tools for modeling data from open-enrollment groups with membership turnover in recently proposed treatment trials. The current article illustrates an approach to conducting power analyses for open-enrollment designs based on the Monte Carlo simulation of LCPMM models using parameters derived from published data from a randomized controlled trial comparing Seeking Safety to a Community Care condition for women presenting with comorbid posttraumatic stress disorder and substance use disorders. The example addresses discrepancies between the analysis framework assumed in power analyses of many recently proposed open-enrollment trials and the proposed use of LCPMM for data analysis. Copyright © 2011 Elsevier Inc. All rights reserved.
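The general recipe of simulation-based power analysis (simulate the trial many times, count rejections) is sketched below, with a plain two-sample t-test standing in for the much richer LCPMM analysis described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def mc_power(n_per_arm, effect=0.4, alpha=0.05, n_sims=2000):
    """Estimate power by simulating the trial many times and counting
    rejections. The effect size and outcome model are illustrative."""
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

for n in (50, 100, 150):
    print(n, mc_power(n))
```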
Spatial generalised linear mixed models based on distances.
Melo, Oscar O; Mateu, Jorge; Melo, Carlos E
2016-10-01
Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture among them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.
Electron transport in solid targets and in the active mixture of a CO2 laser amplifier
NASA Astrophysics Data System (ADS)
Galkowski, A.
The paper examines the use of the NIKE code for the Monte Carlo computation of the deposited energy profile and other characteristics of the absorption process of an electron beam in a solid target and the spatial distribution of primary ionization in the active mixture of a CO2 laser amplifier. The problem is considered in connection with the generation of intense electron beams and the acceleration of thin metal foils, as well as in connection with the electric discharge pumping of a CO2 laser amplifier.
Serang, Oliver; Noble, William Stafford
2012-01-01
The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862
Phase diagrams of Janus fluids with up-down constrained orientations
NASA Astrophysics Data System (ADS)
Fantoni, Riccardo; Giacometti, Achille; Maestre, Miguel Ángel G.; Santos, Andrés
2013-11-01
A class of binary mixtures of Janus fluids formed by colloidal spheres with the hydrophobic hemispheres constrained to point either up or down are studied by means of Gibbs ensemble Monte Carlo simulations and simple analytical approximations. These fluids can be experimentally realized by the application of an external static electrical field. The gas-liquid and demixing phase transitions in five specific models with different patch-patch affinities are analyzed. It is found that a gas-liquid transition is present in all the models, even if only one of the four possible patch-patch interactions is attractive. Moreover, provided the attraction between like particles is stronger than between unlike particles, the system demixes into two subsystems with different composition at sufficiently low temperatures and high densities.
A kinetic Monte Carlo approach to study fluid transport in pore networks
NASA Astrophysics Data System (ADS)
Apostolopoulou, M.; Day, R.; Hull, R.; Stamatakis, M.; Striolo, A.
2017-10-01
The mechanism of fluid migration in porous networks continues to attract great interest. Darcy's law (phenomenological continuum theory), which is often used to describe macroscopic fluid flow through a porous material, is thought to fail in nano-channels. Transport through heterogeneous and anisotropic systems, characterized by a broad distribution of pores, occurs via a contribution of different transport mechanisms, all of which need to be accounted for. The situation is likely more complicated when immiscible fluid mixtures are present. To generalize the study of fluid transport through a porous network, we developed a stochastic kinetic Monte Carlo (KMC) model. In our lattice model, the pore network is represented as a set of connected finite volumes (voxels), and transport is simulated as a random walk of molecules, which "hop" from voxel to voxel. We simulated fluid transport along an effectively 1D pore and compared the results to those expected from solving the diffusion equation analytically. The KMC model was then implemented to quantify the transport of methane through hydrated micropores, in which case atomistic molecular dynamics simulation results were reproduced. The model was then used to study flow through pore networks, where it was able to quantify the effects of pore length and network connectivity. The results are consistent with experiments but also provide additional physical insights. Extension of the model will be useful to better understand fluid transport in shale rocks.
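A minimal version of such a lattice KMC random walk, validated against the analytic diffusion result as described above, might look like this (1D, reflecting boundaries, unit lattice spacing; a simplification of the paper's network model):

```python
import numpy as np

rng = np.random.default_rng(6)

def kmc_1d(n_mol=20_000, n_vox=101, rate=1.0, t_end=50.0):
    """Molecules hop left/right between voxels at a fixed rate per
    direction; time advances with exponential waiting times (standard
    KMC). Reflecting boundaries; all molecules start in the center."""
    pos = np.full(n_mol, n_vox // 2)
    t = np.zeros(n_mol)
    active = np.ones(n_mol, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        t[idx] += rng.exponential(1.0 / (2.0 * rate), idx.size)
        still = t[idx] < t_end                 # hops past t_end are discarded
        hop = idx[still]
        pos[hop] = np.clip(pos[hop] + rng.choice([-1, 1], hop.size), 0, n_vox - 1)
        active[idx[~still]] = False
    return pos

pos = kmc_1d()
msd = ((pos - 50.0) ** 2).mean()
print(msd, 2.0 * 1.0 * 50.0)   # simulated vs analytic 2*D*t (lattice units)
```

With hop rate k per direction and lattice spacing a, the lattice diffusion coefficient is D = k a^2, so the mean-square displacement should approach 2Dt, which is what the final line checks.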
NASA Astrophysics Data System (ADS)
Gómez-Álvarez, Paula; Romaní, Luis; González-Salgado, Diego
2013-05-01
Mixtures containing associated substances show a singular thermodynamic behaviour that has attracted the scientific community's attention over the last century. Particularly, binary systems composed of an associating fluid and an inert solvent, where association occurs only between molecules of the same kind, have been extensively studied. A number of theoretical approaches have been used in order to gain insights into the effect of the association on the macroscopic behaviour, especially on the second-order thermodynamic derivatives (or response functions). Curiously, to our knowledge, molecular simulations have not been used to that end, despite describing the molecules and their interactions in a more complete and realistic way than theoretical models. With this in mind, a simple methodology developed in the framework of Monte Carlo molecular simulation is used in this work to quantify the association contribution to a wide set of thermodynamic properties for the specific {methanol + Lennard-Jones} system under room conditions and throughout the composition range. Special attention was paid to the response functions and their respective excess properties, for which a detailed comparison with selected previous works in the field has been established.
DSMC Shock Simulation of Saturn Entry Probe Conditions
NASA Technical Reports Server (NTRS)
Higdon, Kyle J.; Cruden, Brett A.; Brandis, Aaron; Liechty, Derek S.; Goldstein, David B.; Varghese, Philip L.
2016-01-01
This work describes the direct simulation Monte Carlo (DSMC) investigation of Saturn entry probe scenarios and the influence of non-equilibrium phenomena on Saturn entry conditions. The DSMC simulations coincide with rarefied hypersonic shock tube experiments of a hydrogen-helium mixture performed in the Electric Arc Shock Tube (EAST) at NASA Ames Research Center. The DSMC simulations are post-processed through the NEQAIR line-by-line radiation code to compare directly to the experimental results. Improved collision cross-sections, inelastic collision parameters, and reaction rates are determined for a high temperature DSMC simulation of a 7-species H2-He mixture, and an electronic excitation model is implemented in the DSMC code. Simulation results for 27.8 and 27.4 km/s shock waves are obtained at 0.2 and 0.1 Torr, respectively, and compared to measured spectra in the VUV, UV, visible, and IR ranges. These results confirm the persistence of non-equilibrium for several centimeters behind the shock and the diffusion of atomic hydrogen upstream of the shock wave. Although the magnitude of the radiance did not match experiments and an ionization inductance period was not observed in the simulations, the discrepancies indicated where improvements are needed in the DSMC and NEQAIR models.
DSMC Shock Simulation of Saturn Entry Probe Conditions
NASA Technical Reports Server (NTRS)
Higdon, Kyle J.; Cruden, Brett A.; Brandis, Aaron M.; Liechty, Derek S.; Goldstein, David B.; Varghese, Philip L.
2016-01-01
This work describes the direct simulation Monte Carlo (DSMC) investigation of Saturn entry probe scenarios and the influence of non-equilibrium phenomena on Saturn entry conditions. The DSMC simulations coincide with rarefied hypersonic shock tube experiments of a hydrogen-helium mixture performed in the Electric Arc Shock Tube (EAST) at the NASA Ames Research Center. The DSMC simulations are post-processed through the NEQAIR line-by-line radiation code to compare directly to the experimental results. Improved collision cross-sections, inelastic collision parameters, and reaction rates are determined for a high temperature DSMC simulation of a 7-species H2-He mixture and an electronic excitation model is implemented in the DSMC code. Simulation results for 27.8 and 27.4 km/s shock waves are obtained at 0.2 and 0.1 Torr, respectively, and compared to measured spectra in the VUV, UV, visible, and IR ranges. These results confirm the persistence of non-equilibrium for several centimeters behind the shock and the diffusion of atomic hydrogen upstream of the shock wave. Although the magnitude of the radiance did not match experiments and an ionization inductance period was not observed in the simulations, the discrepancies indicated where improvements are needed in the DSMC and NEQAIR models.
Polymer Crowding in Confined Polymer-Nanoparticle Mixtures
NASA Astrophysics Data System (ADS)
Davis, Wyatt J.; Denton, Alan R.
Crowding can influence the conformations and thus functionality of macromolecules in quasi-two-dimensional environments, such as DNA or proteins confined to a cell membrane. We explore such crowding within a model of polymers as penetrable ellipses, whose shapes are governed by the statistics of a 2D random walk. The principal radii of the polymers fluctuate according to probability distributions of the eigenvalues of the gyration tensor. Within this coarse-grained model, we perform Monte Carlo simulations of mixtures of polymers and hard nanodisks, including trial changes in polymer conformation (shape and orientation). Penetration of polymers by nanodisks is incorporated with a free energy cost predicted by polymer field theory. Over ranges of size ratio and nanodisk density, we analyze the influence of crowding on polymer shape by computing eigenvalue distributions, mean radius of gyration, and mean asphericity of the polymer. We compare results with predictions of free-volume theory and with corresponding results in three dimensions. Our approach may help to interpret recent (and motivate future) experimental studies of biopolymers interacting with cell membranes, with relevance for drug delivery and gene therapy. This work was supported by the National Science Foundation under Grant No. DMR-1106331.
Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J
2016-09-21
Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an Expectation-Maximisation algorithm followed by a limited number of MCMC iterations without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that was not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC; however, computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300,000 SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic predictions and genetic architecture inference, while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR. The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform as well as Bayesian mixture models implemented with full MCMC in terms of prediction accuracy, but up to 17 times faster than the full MCMC implementations. The HyB_BR algorithm makes simultaneous genomic prediction, QTL mapping and inference of genetic architecture feasible in large genomic data sets.
First-Principles Monte Carlo Simulations of Reaction Equilibria in Compressed Vapors
2016-01-01
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. The equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data. PMID:27413785
NASA Astrophysics Data System (ADS)
Ardila, L. A. Peña; Giorgini, S.
2015-09-01
We investigate the properties of an impurity immersed in a dilute Bose gas at zero temperature using quantum Monte Carlo methods. The interactions between bosons are modeled by a hard-sphere potential with scattering length a, whereas the interactions between the impurity and the bosons are modeled by a short-range, square-well potential where both the sign and the strength of the scattering length b can be varied by adjusting the well depth. We characterize the attractive and the repulsive polaron branch by calculating the binding energy and the effective mass of the impurity. Furthermore, we investigate the structural properties of the bath, such as the impurity-boson contact parameter and the change of the density profile around the impurity. At the unitary limit of the impurity-boson interaction, we find that the effective mass of the impurity remains smaller than twice its bare mass, while the binding energy scales as ℏ^2 n^(2/3)/m, where n is the density of the bath and m is the common mass of the impurity and the bosons in the bath. The implications for the phase diagram of binary Bose-Bose mixtures at low concentrations are also discussed.
Simulation Analysis of Computer-Controlled Pressurization for Mixture Ratio Control
NASA Technical Reports Server (NTRS)
Alexander, Leslie A.; Bishop-Behel, Karen; Benfield, Michael P. J.; Kelley, Anthony; Woodcock, Gordon R.
2005-01-01
A procedural code (C++) simulation was developed to investigate the potential for mixture ratio control of pressure-fed spacecraft rocket propulsion systems by measuring propellant flows, tank liquid quantities, or both, and using feedback from these measurements to adjust propellant tank pressures to set the correct operating mixture ratio for minimum propellant residuals. The pressurization system eliminated mechanical regulators in favor of a computer-controlled, servo-driven throttling valve. We found that a quasi-steady state simulation (pressure and flow transients in the pressurization systems resulting from changes in flow control valve position are ignored) is adequate for this purpose. Monte-Carlo methods are used to obtain simulated statistics on propellant depletion. Mixture ratio control algorithms based on proportional-integral-differential (PID) controller methods were developed. These algorithms actually set target tank pressures; the tank pressures are controlled by another PID controller. Simulation indicates this approach can provide reductions in residual propellants.
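The control idea can be sketched in a few lines. The following toy loop collapses the described cascade into a single PID that moves the fuel-tank pressure directly; the quasi-steady injector model (flow proportional to the square root of the pressure drop), gains, and pressures are hypothetical placeholders chosen only so the loop runs, not values from the study:

```python
import math

# Quasi-steady injector model: mdot = k * sqrt(tank_pressure - chamber_pressure).
# All constants below are hypothetical, chosen only to make the sketch run.
P_CHAMBER, K_OX, K_FUEL = 2.0e6, 1.2e-3, 0.8e-3   # Pa, flow coefficients
MR_TARGET = 1.8                                    # desired oxidizer/fuel ratio

def flow(k, p_tank):
    return k * math.sqrt(max(p_tank - P_CHAMBER, 0.0))

kp, ki, kd = 2e5, 2e4, 1e4          # PID gains on mixture-ratio error
p_ox, p_fuel = 3.0e6, 3.0e6         # tank pressures, Pa
integral, prev_err, dt = 0.0, 0.0, 0.1

for step in range(200):
    mr = flow(K_OX, p_ox) / flow(K_FUEL, p_fuel)
    err = mr - MR_TARGET
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    # Raise fuel-tank pressure when MR is too high (too much oxidizer).
    p_fuel += (kp * err + ki * integral + kd * deriv) * dt

print(f"final mixture ratio: {flow(K_OX, p_ox) / flow(K_FUEL, p_fuel):.3f}")
```

In the abstract's scheme this outer loop would instead emit a target tank pressure that an inner PID tracks; Monte Carlo runs over dispersed flow coefficients and initial loads would then yield the residual-propellant statistics.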
Structure, thermodynamics, and solubility in tetromino fluids.
Barnes, Brian C; Siderius, Daniel W; Gelb, Lev D
2009-06-16
To better understand the self-assembly of small molecules and nanoparticles adsorbed at interfaces, we have performed extensive Monte Carlo simulations of a simple lattice model based on the seven hard "tetrominoes", connected shapes that occupy four lattice sites. The equations of state of the pure fluids and all of the binary mixtures are determined over a wide range of density, and a large selection of multicomponent mixtures are also studied at selected conditions. Calculations are performed in the grand canonical ensemble and are analogous to real systems in which molecules or nanoparticles reversibly adsorb to a surface or interface from a bulk reservoir. The model studied is athermal; objects in these simulations avoid overlap but otherwise do not interact. As a result, all of the behavior observed is entropically driven. The one-component fluids all exhibit marked self-ordering tendencies at higher densities, with quite complex structures formed in some cases. Significant clustering of objects with the same rotational state (orientation) is also observed in some of the pure fluids. In all of the binary mixtures, the two species are fully miscible at large scales, but exhibit strong species-specific clustering (segregation) at small scales. This behavior persists in multicomponent mixtures; even in seven-component mixtures of all the shapes there is significant association between objects of the same shape. To better understand these phenomena, we calculate the second virial coefficients of the tetrominoes and related quantities, extract thermodynamic volume of mixing data from the simulations of binary mixtures, and determine Henry's law solubilities for each shape in a variety of solvents. The overall picture obtained is one in which complementarity of both the shapes of individual objects and the characteristic structures of different fluids are important in determining the overall behavior of a fluid of a given composition, with sometimes counterintuitive results. Finally, we note that no sharp phase transitions are observed but that this appears to be due to the small size of the objects considered. It is likely that complex phase behavior may be found in systems of larger polyominoes.
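A minimal grand canonical lattice sketch of this kind of athermal sampling is shown below, restricted for brevity to the fully symmetric square ("O") tetromino so that no orientation moves are needed; the lattice size and activity are arbitrary illustrative choices, not the paper's conditions:

```python
import random

L, Z, SWEEPS = 32, 0.5, 20000              # lattice size, activity exp(beta*mu), moves
SHAPE = [(0, 0), (1, 0), (0, 1), (1, 1)]   # the square "O" tetromino
occ = {}                                   # lattice site -> object id
objects = {}                               # object id -> anchor site
next_id = 0
M = L * L                                  # number of candidate anchor positions

def sites(anchor):
    x, y = anchor
    return [((x + dx) % L, (y + dy) % L) for dx, dy in SHAPE]  # periodic lattice

random.seed(1)
for _ in range(SWEEPS):
    if random.random() < 0.5:                       # attempt insertion
        anchor = (random.randrange(L), random.randrange(L))
        cells = sites(anchor)
        if any(c in occ for c in cells):
            continue                                # hard-core overlap: reject
        if random.random() < min(1.0, Z * M / (len(objects) + 1)):
            for c in cells:
                occ[c] = next_id
            objects[next_id] = anchor
            next_id += 1
    elif objects:                                   # attempt deletion
        oid = random.choice(list(objects))
        if random.random() < min(1.0, len(objects) / (Z * M)):
            for c in sites(objects.pop(oid)):
                del occ[c]

print("packing fraction:", 4 * len(objects) / (L * L))
```

The full study additionally needs orientation and identity moves for the seven asymmetric shapes; the acceptance rules above are the standard lattice grand canonical ones for hard particles.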
Excess thermodynamics of mixtures involving xenon and light linear alkanes by computer simulation.
Carvalho, A J Palace; Ramalho, J P Prates; Martins, Luís F G
2007-06-14
Excess molar enthalpies and excess molar volumes as a function of composition for liquid mixtures of xenon + ethane (at 161.40 K), xenon + propane (at 161.40 K) and xenon + n-butane (at 182.34 K) have been obtained by Monte Carlo computer simulations and compared with available experimental data. Simulation conditions were chosen to closely match those of the corresponding experimental results. The TraPPE-UA force field was selected among other force fields to model all the alkanes studied, whereas the one-center Lennard-Jones potential from Bohn et al. was used for xenon. The calculated H(m)(E) and V(m)(E) for all systems are negative, increasing in magnitude as the alkane chain length increases. The results for these systems were compared with experimental data and with other theoretical calculations using the SAFT approach. Excellent agreement between simulation and experimental results was found for the xenon + ethane system, whereas for the remaining two systems deviations were observed that become progressively more significant as the alkane chain length increases.
Monte-Carlo computation of turbulent premixed methane/air ignition
NASA Astrophysics Data System (ADS)
Carmen, Christina Lieselotte
The present work describes the results obtained by a time dependent numerical technique that simulates the early flame development of a spark-ignited premixed, lean, gaseous methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented, including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, show excellent agreement with results obtained by other experimental and numerical researchers.
Irreversible opinion spreading on scale-free networks
NASA Astrophysics Data System (ADS)
Candia, Julián
2007-02-01
We study the dynamical and critical behavior of a model for irreversible opinion spreading on Barabási-Albert (BA) scale-free networks by performing extensive Monte Carlo simulations. The opinion spreading within an inhomogeneous society is investigated by means of the magnetic Eden model, a nonequilibrium kinetic model for the growth of binary mixtures in contact with a thermal bath. The deposition dynamics, which is studied as a function of the degree of the occupied sites, shows evidence for the leading role played by hubs in the growth process. Systems of finite size grow either ordered or disordered, depending on the temperature. By means of standard finite-size scaling procedures, the effective order-disorder phase transitions are found to persist in the thermodynamic limit. This critical behavior, however, is absent in related equilibrium spin systems such as the Ising model on BA scale-free networks, which in the thermodynamic limit only displays a ferromagnetic phase. The dependence of these results on the degree exponent is also discussed for the case of uncorrelated scale-free networks.
Composition inversion in mixtures of binary colloids and polymer
NASA Astrophysics Data System (ADS)
Zhang, Isla; Pinchaipat, Rattachai; Wilding, Nigel B.; Faers, Malcolm A.; Bartlett, Paul; Evans, Robert; Royall, C. Patrick
2018-05-01
Understanding the phase behaviour of mixtures continues to pose challenges, even for systems that might be considered "simple." Here, we consider a very simple mixture of two colloidal and one non-adsorbing polymer species, which can be simplified even further to a size-asymmetrical binary mixture, in which the effective colloid-colloid interactions depend on the polymer concentration. We show that this basic system exhibits surprisingly rich phase behaviour. In particular, we enquire whether such a system features only a liquid-vapor phase separation (as in one-component colloid-polymer mixtures) or whether, additionally, liquid-liquid demixing of two colloidal phases can occur. Particle-resolved experiments show demixing-like behaviour, but when combined with bespoke Monte Carlo simulations, this proves illusory, and we reveal that only a single liquid-vapor transition occurs. Progressive migration of the small particles to the liquid phase as the polymer concentration increases gives rise to composition inversion—a maximum in the large particle concentration in the liquid phase. Close to criticality, the density fluctuations are found to be dominated by the larger colloids.
The electroluminescence of Xe-Ne gas mixtures: A Monte Carlo simulation study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, F.P.; Dias, T.H.V.T.; Rachinhas, P.J.B.M.
1998-04-01
The authors have performed a Monte Carlo simulation of the drift of electrons through a mixture of gaseous xenon with the lighter noble gas neon at a total pressure of 1 atm. The electroluminescence characteristics and other transport parameters are investigated as a function of the reduced electric field and composition of the mixture. For Xe-Ne mixtures with 5, 10, 20, 40, 70, 90, and 100% of Xe, they present results for electroluminescence yield and excitation efficiency, average electron energy, electron drift velocity, reduced mobility, reduced diffusion coefficients, and characteristic energies over a range of reduced electric fields which exclude electron multiplication. For the 5% Xe mixture, they also assess the influence of electron multiplication on the electroluminescence yield. The present study of Xe-Ne mixtures was motivated by an interest in using them as a filling for gas proportional scintillation counters in low-energy X-ray applications. In this energy range, the X rays will penetrate further into the detector due to the presence of Ne, and this will lead to an improvement in the collection of primary electrons originating near the detector window and may represent an advantage over the use of pure Xe.
Molecular simulations of a CO2/CO mixture in MIL-127
NASA Astrophysics Data System (ADS)
Chokbunpiam, Tatiya; Fritzsche, Siegfried; Parasuk, Vudhichai; Caro, Jürgen; Assabumrungrat, Suttichai
2018-03-01
Adsorption and diffusion of an equimolar feed mixture of CO2 and CO in MIL-127 at three different temperatures and pressures up to 12 bar were investigated by molecular simulations. The adsorption was simulated using Gibbs-Ensemble Monte Carlo (GEMC). The structure of the adsorbed phase and the diffusion in the MIL were investigated using Molecular Dynamics (MD) simulations. The adsorption selectivity of MIL-127 for CO2 over CO at 233 K was about 15. When combining adsorption and diffusion selectivities, a membrane selectivity of about 12 is predicted. For higher temperatures, both adsorption and diffusion selectivity are found to be smaller.
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case of study. In addition, the 89 groups WIMS-AECL library for slightly enriched uranium and the 172 groups WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)
Henderson, Douglas; Silvestre-Alcantara, Whasington; Kaja, Monika; ...
2016-08-18
Here, density functional theory is applied to a study of the structure and differential capacitance of a planar electric double layer formed by a valency-asymmetric mixture of charged dimers and monomers. The dimer consists of two tangentially tethered hard spheres of equal diameters, of which one is charged and the other is neutral, while the monomer is a charged hard sphere of the same size. The dimer electrolyte is next to a uniformly charged, smooth planar electrode. The electrode-particle singlet distributions, the mean electrostatic potential, and the differential capacitance for the model double layer are evaluated for a 2:1/1:2 valency electrolyte at a given concentration. Important consequences of asymmetry in charges and in ion shapes are (i) a finite, non-zero potential of zero charge, and (ii) asymmetric shaped 2:1 and 1:2 capacitance curves which are not mirror images of each other. Comparisons of the density functional results with the corresponding Monte Carlo simulations show the theoretical predictions to be in good agreement with the simulations overall, except near zero surface charge.
Complexation behavior of oppositely charged polyelectrolytes: Effect of charge distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Mingtian; Li, Baohui, E-mail: dliang@pku.edu.cn, E-mail: baohui@nankai.edu.cn; Zhou, Jihan
Complexation behavior of oppositely charged polyelectrolytes in a solution is investigated using a combination of computer simulations and experiments, focusing on the influence of polyelectrolyte charge distributions along the chains on the structure of the polyelectrolyte complexes. The simulations are performed using Monte Carlo with the replica-exchange algorithm for three model systems, where each system is composed of a mixture of two types of oppositely charged model polyelectrolyte chains, (EGEG)5/(KGKG)5, (EEGG)5/(KKGG)5, and (EEGG)5/(KGKG)5, in a solution including explicit solvent molecules. Among the three model systems, only the charge distributions along the chains differ. Thermodynamic quantities are calculated as a function of temperature (or ionic strength), and the microscopic structures of complexes are examined. It is found that the three systems have different transition temperatures, and form complexes with different sizes, structures, and densities at a given temperature. Complex microscopic structures with an alternating arrangement of one monolayer of E/K monomers and one monolayer of G monomers, with one bilayer of E and K monomers and one bilayer of G monomers, and with a mixture of monolayer and bilayer of E/K monomers in a box shape and a trilayer of G monomers inside the box are obtained for the three mixture systems, respectively. The experiments are carried out for three systems where each is composed of a mixture of two types of oppositely charged peptide chains. Each peptide chain is composed of lysine (K) and glycine (G) or glutamate (E) and G, in solution, and the chain length and amino acid sequences, and hence the charge distribution, are precisely controlled, and all of them are identical with those for the corresponding model chain. The complexation behavior and complex structures are characterized through laser light scattering and atomic force microscopy measurements. The order of the apparent weight-averaged molar mass and the order of density of complexes observed from the three experimental systems are qualitatively in agreement with those predicted from the simulations.
Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET
NASA Astrophysics Data System (ADS)
Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.
2018-06-01
A challenge in obtaining quantitative positron emission tomography (PET) images is to provide an accurate and patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations have made attenuation map extraction a real challenge. Except for a constant factor, the activity and attenuation maps can be determined from emission data on a TOF-PET system by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture model prior based on the attenuation map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated using a mixture decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and is capable of assigning predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similar to segmentation-based attenuation correction approaches. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment, and is compared with the MLAA algorithm with and without the smoothing prior. The results demonstrate that the proposed algorithm is capable of producing cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.
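The mixture-decomposition step can be illustrated with a plain EM fit of a Gaussian mixture to a 1D histogram of attenuation values; the three synthetic "tissue" peaks below are hypothetical stand-ins for a real attenuation map, and the fit is a generic EM, not the authors' exact hyperparameter estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "attenuation values": air, soft-tissue, bone peaks (cm^-1, illustrative).
mu_true = np.array([0.0, 0.096, 0.13])
x = np.concatenate([rng.normal(m, 0.005, n)
                    for m, n in zip(mu_true, (3000, 12000, 2000))])

K = 3
w = np.full(K, 1.0 / K)                    # mixture weights
mu = np.quantile(x, [0.1, 0.5, 0.9])       # crude initial means
var = np.full(K, x.var())

for _ in range(100):                       # EM iterations
    # E-step: responsibility of each component for each voxel value
    d = x[:, None] - mu[None, :]
    log_p = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(w)
    log_p -= log_p.max(axis=1, keepdims=True)
    r = np.exp(log_p)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, variances
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu)**2).sum(axis=0) / n_k + 1e-12

print("weights:", np.round(w, 3))
print("means  :", np.round(mu, 4))
```

The fitted component means and weights play the role of the histogram-based prior that pulls the jointly reconstructed attenuation map toward predefined regional coefficients.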
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, Peter; Varghese, Philip; Goldstein, David
We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris; ...
2016-06-13
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
NASA Astrophysics Data System (ADS)
Cannavacciuolo, Luigi; Skov Pedersen, Jan; Schurtenberger, Peter
2002-03-01
Results of an extensive Monte Carlo (MC) study on both single and many semiflexible charged chains with excluded volume (EV) are summarized. The model employed has been tailored to mimic wormlike micelles in solution. Simulations have been performed at different ionic strengths of added salt, charge densities, chain lengths and volume fractions Φ, covering the dilute to concentrated regime. At infinite dilution the scattering functions can be fitted by the same fitting functions as for uncharged semiflexible chains with EV, provided that an electrostatic contribution b_el is added to the bare Kuhn length. The scaling of b_el is found to be more complex than the Odijk-Skolnick-Fixman predictions, and qualitatively compatible with more recent variational calculations. Universality in the scaling of the radius of gyration is found if all lengths are rescaled by the total Kuhn length. At finite concentrations, the simple model used is able to reproduce the structural peak in the scattering function S(q) observed in many experiments, as well as other properties of polyelectrolytes (PELs) in solution. Universal behaviour of the forward scattering S(0) is established after a rescaling of Φ. MC data are found to be in very good agreement with experimental scattering measurements with equilibrium PELs, which are giant wormlike micelles formed in mixtures of nonionic and ionic surfactants in dilute aqueous solution, with added salt.
A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.
2003-01-01
The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research. This is the Final Report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES is conducted of the Sandia Flame D, which is a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is basically the mass weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.
Powder agglomeration in a microgravity environment
NASA Technical Reports Server (NTRS)
Cawley, James D.
1994-01-01
This is the final report for NASA Grant NAG3-755, entitled 'Powder Agglomeration in a Microgravity Environment.' The research program included two types of numerical models and two types of experiments. The numerical modeling included the use of Monte Carlo type simulations of agglomerate growth including hydrodynamic screening, and molecular dynamics type simulations of the rearrangement of particles within an agglomerate under a gravitational field. Experiments included direct observation of the agglomeration of submicron alumina and indirect observation, using small angle light scattering, of the agglomeration of colloidal silica and aluminum monohydroxide. In the former class of experiments, the powders were constrained to move on a two-dimensional surface oriented to minimize the effect of gravity. In the latter, some experiments involved mixtures of suspensions containing particles of opposite charge, which resulted in agglomeration on a very short time scale relative to settling under gravity.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
Optimization of intermolecular potential parameters for the CO2/H2O mixture.
Orozco, Gustavo A; Economou, Ioannis G; Panagiotopoulos, Athanassios Z
2014-10-02
Monte Carlo simulations in the Gibbs ensemble were used to obtain optimized intermolecular potential parameters to describe the phase behavior of the mixture CO2/H2O, over a range of temperatures and pressures relevant for carbon capture and sequestration processes. Commonly used fixed-point-charge force fields that include Lennard-Jones 12-6 (LJ) or exponential-6 (Exp-6) terms were used to describe CO2 and H2O intermolecular interactions. For force fields based on the LJ functional form, changes of the unlike interactions produced higher variations in the H2O-rich phase than in the CO2-rich phase. A major finding of the present study is that for these potentials, no combination of unlike interaction parameters is able to adequately represent properties of both phases. Changes to the partial charges of H2O were found to produce significant variations in both phases and are able to fit experimental data in both phases, at the cost of inaccuracies for the pure H2O properties. By contrast, for the Exp-6 case, optimization of a single parameter, the oxygen-oxygen unlike-pair interaction, was found sufficient to give accurate predictions of the solubilities in both phases while preserving accuracy in the pure component properties. These models are thus recommended for future molecular simulation studies of CO2/H2O mixtures.
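The kind of single-parameter unlike-pair adjustment found sufficient for the Exp-6 model is commonly expressed, for LJ-type models, through a binary correction to the Lorentz-Berthelot combining rules. A sketch with hypothetical site parameters (illustrative numbers, not the force-field values optimized in the paper):

```python
import math

def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j, k_ij=0.0):
    """Unlike-pair LJ parameters with an adjustable binary correction k_ij."""
    sig_ij = 0.5 * (sig_i + sig_j)                 # arithmetic mean (Lorentz)
    eps_ij = (1.0 - k_ij) * math.sqrt(eps_i * eps_j)  # corrected geometric mean
    return eps_ij, sig_ij

# Hypothetical CO2-site vs H2O-site parameters (epsilon in K, sigma in angstrom).
eps, sig = lorentz_berthelot(eps_i=28.1, sig_i=2.76, eps_j=78.2, sig_j=3.17,
                             k_ij=-0.05)   # hypothetical 5% strengthening
print(f"eps_ij = {eps:.2f} K, sigma_ij = {sig:.3f} A")
```

A negative k_ij strengthens the cross interaction, which is the kind of knob tuned in Gibbs ensemble simulations until both phase compositions match experiment.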
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
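The core of such a dispersion analysis is sampling uncertain model parameters and propagating each draw through the simulation to obtain statistical bounds. The toy 1-DOF deceleration model and dispersion values below are purely illustrative and unrelated to the actual X-43A models:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 2000                                  # Monte Carlo cases

# Hypothetical dispersions for a toy 1-DOF deceleration problem (not X-43A data).
mass = rng.normal(1300.0, 30.0, N)        # kg
cd_a = rng.normal(0.35, 0.05, N)          # drag coefficient * area, m^2
v0   = rng.normal(2000.0, 50.0, N)        # m/s
rho  = 0.05                               # kg/m^3, fixed thin-air density

dt, t_end = 0.1, 30.0
v = v0.copy()
for _ in range(int(t_end / dt)):          # vectorized over all MC cases at once
    v -= 0.5 * rho * cd_a * v**2 / mass * dt

lo, med, hi = np.percentile(v, [0.5, 50, 99.5])
print(f"velocity at t={t_end:.0f}s: median {med:.0f} m/s, "
      f"99% Monte Carlo bounds [{lo:.0f}, {hi:.0f}] m/s")
```

Flight data falling outside such percentile bounds, as happened for the X-43A, is the signal that some parameter dispersion or model structure was misjudged.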
Molecular simulation of thermodynamic and transport properties for the H{sub 2}O+NaCl system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orozco, Gustavo A.; Jiang, Hao; Panagiotopoulos, Athanassios Z., E-mail: azp@princeton.edu
Molecular dynamics and Monte Carlo simulations have been carried out to obtain thermodynamic and transport properties of the binary mixture H2O+NaCl at temperatures from T = 298 to 473 K. In particular, vapor pressures, liquid densities, viscosities, and vapor-liquid interfacial tensions have been obtained as functions of pressure and salt concentration. Several previously proposed fixed-point-charge models that include either Lennard-Jones (LJ) 12-6 or exponential-6 (Exp6) functional forms to describe non-Coulombic interactions were studied. In particular, for water we used the SPC and SPC/E (LJ) models in their rigid forms, a semiflexible version of the SPC/E (LJ) model, and the Errington-Panagiotopoulos Exp6 model; for NaCl, we used the Smith-Dang and Joung-Cheatham (LJ) parameterizations as well as the Tosi-Fumi (Exp6) model. While none of the model combinations are able to reproduce simultaneously all target properties, vapor pressures are well represented using the SPC plus Joung-Cheatham model combination, and all LJ models do well for the liquid density, with the semiflexible SPC/E plus Joung-Cheatham combination being the most accurate. For viscosities, the combination of rigid SPC/E plus Smith-Dang is the best alternative. For interfacial tensions, the combination of the semiflexible SPC/E plus Smith-Dang or Joung-Cheatham gives the best results. Inclusion of water flexibility improves the mixture densities and interfacial tensions, at the cost of larger deviations for the vapor pressures and viscosities. The Exp6 water plus Tosi-Fumi salt model combination was found to perform poorly for most of the properties of interest, in particular being unable to describe the experimental trend for the vapor pressure as a function of salt concentration.
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
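A minimal version of the iterated-proposal idea is moment matching for a single Gaussian proposal, in the style of population Monte Carlo; the banana-shaped target below is an arbitrary stand-in for a real posterior, and the iteration count and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-posterior: a banana-shaped 2D density (illustrative)."""
    return -0.5 * (x[:, 0]**2 / 4.0 + (x[:, 1] - 0.5 * x[:, 0]**2)**2)

mu, cov = np.zeros(2), 4.0 * np.eye(2)      # initial Gaussian proposal
N = 5000                                    # independent draws per iteration

for it in range(10):                        # iterate the mean and covariance
    x = rng.multivariate_normal(mu, cov, N)
    diff = x - mu
    # Log proposal density up to a constant (constants cancel after normalization).
    log_q = -0.5 * np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov), diff) \
            - 0.5 * np.log(np.linalg.det(cov))
    lw = log_target(x) - log_q
    w = np.exp(lw - lw.max())
    w /= w.sum()                            # self-normalized importance weights
    mu = w @ x                              # weighted mean
    d = x - mu
    cov = (w[:, None] * d).T @ d + 1e-6 * np.eye(2)  # weighted covariance + ridge
    ess = 1.0 / np.sum(w**2)
    print(f"iter {it}: effective sample size = {ess:.0f} / {N}")
```

Because each batch of samples is drawn independently, the expensive model evaluations inside log_target parallelize trivially, which is the scaling advantage over MCMC noted in the abstract.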
NASA Astrophysics Data System (ADS)
Wu, Liang; Malijevský, Alexandr; Avendaño, Carlos; Müller, Erich A.; Jackson, George
2018-04-01
A molecular simulation study of binary mixtures of hard spherocylinders (HSCs) and hard spheres (HSs) confined between two structureless hard walls is presented. The principal aim of the work is to understand the effect of the presence of hard spheres on the entropically driven surface nematization of hard rod-like particles at surfaces. The mixtures are studied using a constant normal-pressure Monte Carlo algorithm. The surface adsorption at different compositions is examined in detail. At moderate hard-sphere concentrations, preferential adsorption of the spheres at the wall is found. However, at moderate to high pressure (density), we observe a crossover in the adsorption behavior with nematic layers of the rods forming at the walls leading to local demixing of the system. The presence of the spherical particles is seen to destabilize the surface nematization of the rods, and the degree of demixing increases on increasing the hard-sphere concentration.
Wu, Liang; Malijevský, Alexandr; Avendaño, Carlos; Müller, Erich A; Jackson, George
2018-04-28
A molecular simulation study of binary mixtures of hard spherocylinders (HSCs) and hard spheres (HSs) confined between two structureless hard walls is presented. The principal aim of the work is to understand the effect of the presence of hard spheres on the entropically driven surface nematization of hard rod-like particles at surfaces. The mixtures are studied using a constant normal-pressure Monte Carlo algorithm. The surface adsorption at different compositions is examined in detail. At moderate hard-sphere concentrations, preferential adsorption of the spheres at the wall is found. However, at moderate to high pressure (density), we observe a crossover in the adsorption behavior with nematic layers of the rods forming at the walls leading to local demixing of the system. The presence of the spherical particles is seen to destabilize the surface nematization of the rods, and the degree of demixing increases on increasing the hard-sphere concentration.
A Small Aircraft Transportation System (SATS) Demand Model
NASA Technical Reports Server (NTRS)
Long, Dou; Lee, David; Johnson, Jesse; Kostiuk, Peter; Yackovetsky, Robert (Technical Monitor)
2001-01-01
The Small Aircraft Transportation System (SATS) demand model is a tool that will be useful for decision-makers to analyze SATS demands on both airports and airspace. We constructed a series of models following the general top-down, modular principles in systems engineering. There are three principal models: the SATS Airport Demand Model (SATS-ADM), the SATS Flight Demand Model (SATS-FDM), and LMINET-SATS. SATS-ADM models SATS operations, by aircraft type, from forecasts of fleet, configuration and performance, utilization, and traffic mixture. Given SATS airport operations such as the ones generated by SATS-ADM, SATS-FDM constructs the SATS origin and destination (O&D) traffic flow based on the solution of the gravity model, from which it then generates SATS flights by Monte Carlo simulation based on the departure time-of-day profile. LMINET-SATS, an extension of LMINET, models SATS demands on airspace and airports from all aircraft operations in the US. The models use parameters to provide the user with flexibility and ease of use to generate SATS demand for different scenarios. Several case studies are included to illustrate the use of the models, which are useful to identify the need for a new air traffic management system to cope with SATS.
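A toy version of the SATS-FDM pipeline, a gravity-model O&D allocation followed by Monte Carlo sampling of departure times, might look like the following; the airports, distances, trip scaling, and bimodal time-of-day profile are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-airport example: annual operations and pairwise distances (miles).
ops = np.array([400.0, 250.0, 150.0, 100.0])
dist = np.array([[0, 120, 300, 450],
                 [120, 0, 200, 380],
                 [300, 200, 0, 150],
                 [450, 380, 150, 0]], dtype=float)

# Gravity model: T_ij proportional to O_i * O_j / d_ij^2 (off-diagonal only).
with np.errstate(divide='ignore'):
    g = np.outer(ops, ops) / dist**2
np.fill_diagonal(g, 0.0)
trips = g / g.sum() * ops.sum()            # scale allocation to total demand

# Monte Carlo departure times from a bimodal time-of-day profile (hypothetical).
def sample_departures(n):
    morning = rng.normal(8.5, 1.0, n)      # ~08:30 peak
    evening = rng.normal(17.0, 1.5, n)     # ~17:00 peak
    return np.where(rng.random(n) < 0.55, morning, evening) % 24

n_01 = rng.poisson(trips[0, 1])            # flights on O&D pair 0 -> 1
print(f"pair 0->1: {n_01} flights; first departures (h):",
      np.round(np.sort(sample_departures(n_01))[:5], 2))
```

The real models add aircraft types, fleet forecasts, and feed the resulting flight lists into the airspace model, but the allocate-then-sample structure is the same.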
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. © 2010 Wiley-Liss, Inc. and the American Pharmacists Association
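The simulation strategy can be reproduced in a few lines: perturb both the input settings and the responses of a small synthetic DOE, refit the model many times, and read the coefficient uncertainty off the Monte Carlo ensemble. The design, true coefficients, and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 2-factor DOE: y = b0 + b1*x1 + b2*x2 + noise, true b = (5, 2, -1).
X_design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], float)
b_true = np.array([5.0, 2.0, -1.0])

def fit(X, y):
    A = np.column_stack([np.ones(len(X)), X])       # intercept + linear terms
    return np.linalg.lstsq(A, y, rcond=None)[0]

sd_x, sd_y = 0.05, 0.1   # assumed input-setting and response-measurement noise
coefs = []
for _ in range(5000):    # Monte Carlo replicates of the whole experiment
    X_act = X_design + rng.normal(0, sd_x, X_design.shape)   # actual settings
    y = np.column_stack([np.ones(len(X_act)), X_act]) @ b_true \
        + rng.normal(0, sd_y, len(X_act))
    coefs.append(fit(X_design, y))                  # fit uses *nominal* settings

coefs = np.array(coefs)
print("Monte Carlo coefficient std devs:", np.round(coefs.std(axis=0), 4))
```

Comparing these Monte Carlo standard deviations with the standard errors reported by an ordinary regression fit is exactly the comparison the abstract describes.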
A method of using cluster analysis to study statistical dependence in multivariate data
NASA Technical Reports Server (NTRS)
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
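A sketch of the idea with a basic hand-rolled k-means and a Monte Carlo null distribution drawn uniformly over the data's bounding box; this is our own minimal rendition of a cluster significance test, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans_wss(x, k, iters=50):
    """Within-cluster sum of squares after a basic k-means run."""
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return ((x - centers[labels]) ** 2).sum()

# Three well-separated 2D blobs as the "observed" data (illustrative).
x = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
wss_obs = kmeans_wss(x, k=3)

# Monte Carlo null: same-size samples drawn uniformly over the bounding box.
lo, hi = x.min(axis=0), x.max(axis=0)
wss_null = [kmeans_wss(rng.uniform(lo, hi, x.shape), k=3) for _ in range(200)]
p = np.mean(np.array(wss_null) <= wss_obs)
print(f"observed WSS = {wss_obs:.1f}, Monte Carlo p-value = {p:.3f}")
```

A small p-value indicates clusters tighter than random structure would produce, which is the kind of association between variables the technique is designed to flag.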
NASA Astrophysics Data System (ADS)
Le Foll, S.; André, F.; Delmas, A.; Bouilly, J. M.; Aspa, Y.
2012-06-01
A backward Monte Carlo method for modelling the spectral directional emittance of fibrous media has been developed. It uses Mie theory to calculate the radiative properties of single fibres, modelled as infinite cylinders, and the complex refractive index is computed by a Drude-Lorenz model for the dielectric function. The absorption and scattering coefficients are homogenised over several fibres, but the scattering phase function of a single fibre is used to determine the scattering direction of energy inside the medium. Sensitivity analysis based on several Monte Carlo results has been performed to estimate coefficients for a Multi-Linear Model (MLM) specifically developed for inverse analysis of experimental data. This model agrees well with the Monte Carlo method and is highly computationally efficient. In contrast, the surface emissivity model, which assumes an opaque medium, shows poor agreement with the reference Monte Carlo calculations.
Maurya, Manish; Singh, Jayant K
2017-01-28
Grand canonical Monte Carlo (GCMC) simulation is used to study the adsorption of pure SO2 using a functionalized bilayer graphene nanoribbon (GNR) at 303 K. The functional groups considered in this work are OH, COOH, NH2, NO2, and CH3. The mole percent of functionalization considered in this work is in the range of 3.125%-6.25%. GCMC simulation is further used to study the selective adsorption of SO2 from binary and ternary mixtures of SO2, CO2, and N2, of variable composition, using the functionalized bilayer graphene nanoribbon at 303 K. This study shows that the adsorption and selectivity of SO2 increase after the functionalization of the nanoribbon compared to the hydrogen-terminated nanoribbon. The adsorption capacity and selectivity of the functionalized nanoribbon follow the order COOH > NO2 > NH2 > CH3 > OH > H. The selectivity of SO2 is found to be maximum at a pressure less than 0.2 bar. Furthermore, SO2 selectivity and adsorption capacity decrease as the molar ratio of the SO2/N2 mixture goes from 1:1 to 1:9. In the case of a ternary mixture of SO2, CO2, and N2, having compositions of 0.05, 0.15, and 0.8, the selectivity of SO2 over N2 is higher than that of CO2 over N2. The maximum selectivity of SO2 over CO2 is observed for the COOH-functionalized GNR, followed by NO2 and the other functionalized GNRs.
NASA Astrophysics Data System (ADS)
Sowers, Susanne Lynn
1997-11-01
Microporous sorbents such as carbons, silicas and aluminas are used commercially in a variety of separation, purification and selective reaction applications. A detailed study of the effects of the porous material characteristics on adsorption equilibrium properties such as selectivity and phase equilibria of fluid mixtures can enhance our understanding of adsorption on a molecular level. Such knowledge will improve our utilization of such adsorbents and provide a tool for directing the future tailoring of sorbents for particular separation processes. The effect of pore size, shape and pressure on the selective adsorption of trace pollutants from an inert gas was studied using prototype Lennard-Jones (LJ) mixtures of N2 with CCl4, CF4, and SO2. Both nonlocal density functional theory (DFT) and grand canonical Monte Carlo (GCMC) molecular simulations were used in order to investigate the validity of the theory, which is much quicker and easier to use. Our results indicate that there is an optimal pore size and shape for which the pollutant selectivity is greatly enhanced. In many industrial adsorption processes relative humidity can greatly affect the life of an adsorbent bed, as seen in breakthrough curves. Therefore, the influence of water vapor on the selective adsorption of CCl4 from a mixture of N2/CCl4/H2O in activated carbon was studied using GCMC simulations. The equilibrium adsorption properties are found to be dependent upon both the density of active sites on the pore walls and the relative humidity. Liquid-liquid transitions in porous materials are of interest in connection with oil recovery, lubrication, coating technology and pollution control. The results of a study on the effect of confinement on the liquid-liquid equilibrium of binary LJ mixtures using DFT are compared with those of molecular simulation and experiments. Our findings show that the phase coexistence region for the confined mixture is in general reduced and shifted toward the component that is more attracted to the pore walls. The data obtained from DFT, simulations, and experiment are in qualitative agreement and have aided in the understanding of this phenomenon.
[Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].
Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao
2015-05-01
Gasoline, kerosene, and diesel are processed from crude oil with different distillation ranges. The boiling range of gasoline is 35~205 °C, that of kerosene is 140~250 °C, and that of diesel is 180~370 °C. At the same time, the carbon chain lengths of the different mineral oils differ: gasoline lies within the scope of C7 to C11, kerosene within C12 to C15, and diesel within C15 to C18. The recognition and quantitative measurement of the three kinds of mineral oil is based on the different fluorescence spectra formed by their different carbon number distribution characteristics. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method for determining the component content of a mineral oil mixture with overlapping spectra is proposed, based on calculating the characteristic peak power integration of the three-dimensional fluorescence spectrum using the Quasi-Monte Carlo method, combined with an optimization algorithm that solves for the optimum number of characteristic peaks and the range of the integration region, and solving the resulting nonlinear equations by the BFGS method (a rank-two update method named after the initials of its inventors: Broyden, Fletcher, Goldfarb and Shanno). The peak power accumulated over the points of the selected area is sensitive to small changes of the fluorescence spectral line, so the measurement of small changes of component content is sensitive. At the same time, compared with single point measurement, measurement sensitivity is improved because the influence of random error is reduced by the selection of many points. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and the mixture are measured, taking kerosene, diesel and gasoline as research objects, with each mineral oil regarded as a whole rather than in terms of its individual components. Six characteristic peaks are selected for characteristic peak power integration to determine the component contents of the mineral oil mixture of gasoline, kerosene and diesel by the optimization algorithm. Compared with the single point measurement of the peak method and the mean method, measurement sensitivity is improved about 50 times. The implementation of high precision measurement of the mixture component contents of gasoline, kerosene and diesel provides a practical algorithm for direct determination of the component contents of spectrally overlapping mixtures without chemical separation.
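The quasi-Monte Carlo peak-power integration step might look like the following, with a synthetic Gaussian peak standing in for a measured three-dimensional fluorescence spectrum and a Sobol sequence supplying the low-discrepancy points (the peak location, widths, and integration window are invented):

```python
import numpy as np
from scipy.stats import qmc

# "Peak power" of a fluorescence landscape: integrate a 2D Gaussian peak over
# a rectangular excitation/emission window (synthetic peak, not measured data).
def peak(ex, em):
    return np.exp(-((ex - 310.0) ** 2 / 80.0 + (em - 410.0) ** 2 / 120.0))

lo, hi = np.array([290.0, 380.0]), np.array([330.0, 440.0])   # nm window
area = np.prod(hi - lo)

sobol = qmc.Sobol(d=2, scramble=True, seed=0)
pts = qmc.scale(sobol.random_base2(m=12), lo, hi)   # 2^12 low-discrepancy points
estimate = area * peak(pts[:, 0], pts[:, 1]).mean()
print(f"quasi-Monte Carlo peak power integral: {estimate:.2f}")
```

Repeating this integral over each selected characteristic peak for each pure oil and for the mixture yields the nonlinear equation system that the BFGS step then solves for the component contents.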
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional time dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable as traditionally believed to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. Flame conditions studied have fuel equivalence ratios in the range φ = 0.6 to 0.9 at standard temperature and pressure.
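A stripped-down illustration of the fractional-step Monte Carlo particle idea: an ensemble of notional particles is advanced by alternating a mixing step (the IEM model is used here for brevity and is not necessarily the paper's closure) with a toy one-step reaction source; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, dt, tau_mix = 20000, 1e-4, 5e-4   # particles, time step, mixing time (illustrative)

# Notional particles carrying a reaction progress variable c in [0, 1];
# Gaussian initial scatter mimics turbulent fluctuations around a lean mean state.
c = np.clip(rng.normal(0.05, 0.02, N), 0.0, 1.0)

def reaction_rate(c):                # toy single-step rate, not a methane mechanism
    return 800.0 * c * (1.0 - c)

for _ in range(200):                 # method of fractional steps
    c += -(c - c.mean()) / tau_mix * dt        # step 1: IEM mixing model
    c += reaction_rate(c) * dt                 # step 2: chemical source term
    np.clip(c, 0.0, 1.0, out=c)

print(f"mean progress variable: {c.mean():.3f}, rms fluctuation: {c.std():.3f}")
```

Because the chemical source term acts on each particle individually, it appears in closed form; only the mixing step needs modelling, which is the key attraction of PDF transport methods noted in the abstract.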
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudhyadhom, A; McGuinness, C; Descovich, M
Purpose: To develop a methodology for validation of a Monte-Carlo dose calculation model for robotic small field SRS/SBRT deliveries. Methods: In a robotic treatment planning system, a Monte-Carlo model was iteratively optimized to match with beam data. A two-part analysis was developed to verify this model. 1) The Monte-Carlo model was validated in a simulated water phantom versus a Ray-Tracing calculation on a single beam collimator-by-collimator calculation. 2) The Monte-Carlo model was validated to be accurate in the most challenging situation, lung, by acquiring in-phantom measurements. A plan was created and delivered in a CIRS lung phantom with film insert. Separately, plans were delivered in an in-house created lung phantom with a PinPoint chamber insert within a lung simulating material. For medium to large collimator sizes, a single beam was delivered to the phantom. For small size collimators (10, 12.5, and 15mm), a robotically delivered plan was created to generate a uniform dose field of irradiation over a 2×2 cm² area. Results: Dose differences in simulated water between Ray-Tracing and Monte-Carlo were all within 1% at dmax and deeper. Maximum dose differences occurred prior to dmax but were all within 3%. Film measurements in a lung phantom show high correspondence of over 95% gamma at the 2%/2mm level for Monte-Carlo. Ion chamber measurements for collimator sizes of 12.5mm and above were within 3% of Monte-Carlo calculated values. Uniform irradiation involving the 10mm collimator resulted in a dose difference of ∼8% for both Monte-Carlo and Ray-Tracing indicating that there may be limitations with the dose calculation. Conclusion: We have developed a methodology to validate a Monte-Carlo model by verifying that it matches in water and, separately, that it corresponds well in lung simulating materials. The Monte-Carlo model and algorithm tested may have more limited accuracy for 10mm fields and smaller.
Superfluid drag in the two-component Bose-Hubbard model
NASA Astrophysics Data System (ADS)
Sellin, Karl; Babaev, Egor
2018-03-01
In multicomponent superfluids and superconductors, co- and counterflows of components have, in general, different properties. A. F. Andreev and E. P. Bashkin [Sov. Phys. JETP 42, 164 (1975)] discussed, in the context of He3/He4 superfluid mixtures, that interparticle interactions produce a dissipationless drag. The drag can be understood as a superflow of one component induced by phase gradients of the other component. Importantly, the drag can be both positive (entrainment) and negative (counterflow). The effect is known to have crucial importance for many properties of diverse physical systems ranging from the dynamics of neutron stars and rotational responses of Bose mixtures of ultracold atoms to magnetic responses of multicomponent superconductors. Although substantial literature exists that includes the drag interaction phenomenologically, only a few regimes are covered by quantitative studies of the microscopic origin of the drag and its dependence on microscopic parameters. Here we study the microscopic origin and strength of the drag interaction in a quantum system of two-component bosons on a lattice with short-range interaction. By performing quantum Monte Carlo simulations of a two-component Bose-Hubbard model we obtain dependencies of the drag strength on the boson-boson interactions and properties of the optical lattice. Of particular interest are the strongly correlated regimes where the ratio of coflow and counterflow superfluid stiffnesses can diverge, corresponding to the case of saturated drag.
NASA Astrophysics Data System (ADS)
Edison, John R.; Dasgupta, Tonnishtha; Dijkstra, Marjolein
2016-08-01
We study the phase behaviour of a binary mixture of colloidal hard spheres and freely jointed chains of beads using Monte Carlo simulations. Recently Panagiotopoulos and co-workers predicted [Nat. Commun. 5, 4472 (2014)] that the hexagonal close packed (HCP) structure of hard spheres can be stabilized in such a mixture due to the interplay between polymer and the void structure in the crystal phase. Their predictions were based on estimates of the free-energy penalty for adding a single hard polymer chain in the HCP and the competing face centered cubic (FCC) phase. Here we calculate the phase diagram using free-energy calculations of the full binary mixture and find a broad fluid-solid coexistence region and a metastable gas-liquid coexistence region. For the colloid-monomer size ratio considered in this work, we find that the HCP phase is only stable in a small window at relatively high polymer reservoir packing fractions, where the coexisting HCP phase is nearly close packed. Additionally we investigate the structure and dynamic behaviour of these mixtures.
2013-07-01
Radiation transport was also simulated in the models. Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). An input deck was prepared for MCNP, a general-purpose code designed to simulate neutron and photon transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trinh, Thi-Kim-Hoang; Laboratoire de Science des Procédés et des Matériaux; Passarello, Jean-Philippe, E-mail: Jean-Philippe.Passarello@lspm.cnrs.fr
This work consists of the adaptation of a non-additive hard sphere theory inspired by Malakhov and Volkov [Polym. Sci., Ser. A 49(6), 745–756 (2007)] to a square-well chain. Using the thermodynamic perturbation theory, an additional term is proposed that describes the effect of perturbing the chain of square well spheres by a non-additive parameter. In order to validate this development, NPT Monte Carlo simulations of thermodynamic and structural properties of the non-additive square well for a pure chain and a binary mixture of chains are performed. Good agreement is observed between the compressibility factors originating from the theory and those from molecular simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zillich, Robert E., E-mail: robert.zillich@jku.at
2015-11-15
We construct an accurate imaginary time propagator for path integral Monte Carlo simulations for heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X(⁴He)₂₀ (X = HCCH and LiH), which show a marked improvement over the Trotter approximation, which has a second-order time step bias. We show that the fourth-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a second-order simulation.
NASA Astrophysics Data System (ADS)
Mastrogiuseppe, M.; Hayes, A. G.; Poggiali, V.; Lunine, J. I.; Lorenz, R. D.; Seu, R.; Le Gall, A.; Notarnicola, C.; Mitchell, K. L.; Malaska, M.; Birch, S. P. D.
2018-01-01
Recently, the Cassini RADAR was used to sound hydrocarbon lakes and seas on Saturn's moon Titan. Since the initial discovery of echoes from the seabed of Ligeia Mare, the second largest liquid body on Titan, a dedicated radar processing chain has been developed to retrieve liquid depth and microwave absorptivity information from RADAR altimetry of Titan's lakes and seas. Herein, we apply this processing chain to altimetry data acquired over southern Ontario Lacus during Titan fly-by T49 in December 2008. The new signal processing chain adopts super resolution techniques and dedicated taper functions to reveal the presence of reflection from Ontario's lakebed. Unfortunately, the extracted waveforms from T49 are often distorted due to signal saturation, owing to the extraordinarily strong specular reflections from the smooth lake surface. This distortion is a function of the saturation level and can introduce artifacts, such as signal precursors, which complicate data interpretation. We use a radar altimetry simulator to retrieve information from the saturated bursts and determine the liquid depth and loss tangent of Ontario Lacus. Received waveforms are represented using a two-layer model, where Cassini raw radar data are simulated in order to reproduce the effects of receiver saturation. A Monte Carlo based approach along with a simulated waveform look-up table is used to retrieve parameters that are given as inputs to a parametric model which constrains radio absorption of Ontario Lacus and retrieves information about the dielectric properties of the liquid. We retrieve a maximum depth of 50 m along the radar transect and a best-fit specific attenuation of the liquid equal to 0.2 ± 0.09 dB m⁻¹ that, when converted into loss tangent, gives tan δ = (7 ± 3) × 10⁻⁵. When combined with laboratory measured cryogenic liquid alkane dielectric properties and the variable solubility of nitrogen in ethane-methane mixtures, the best-fit loss tangent is consistent with a ternary mixture of 51% methane, 38% ethane and 11% nitrogen by volume.
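The retrieval described couples a look-up table of simulated waveforms with a Monte Carlo search over liquid depth and loss tangent. The sketch below mimics that structure with an invented toy two-layer waveform model; the waveform shape, the lumped attenuation constant k, the propagation speed v, and the parameter grids are all assumptions, not the Cassini simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2e-6, 400)                    # fast time (s)

def toy_waveform(depth, tan_delta, k=2.0e2, v=2.0e8, w=2e-7):
    """Toy two-layer altimeter echo: a surface return at t = 0 plus a lakebed
    return delayed by the two-way travel time and attenuated through the
    liquid; k lumps the frequency/permittivity factors of the real model."""
    surface = np.exp(-(t / w) ** 2)
    bed = np.exp(-k * tan_delta * depth) * np.exp(-((t - 2 * depth / v) / w) ** 2)
    return surface + bed

observed = toy_waveform(50.0, 7e-5) + 0.01 * rng.normal(size=t.size)

# look-up table over the (depth, loss tangent) grid; best fit by least squares
depths = np.linspace(10.0, 100.0, 46)
tans = np.linspace(1e-5, 2e-4, 40)
resid = np.array([[np.sum((observed - toy_waveform(d, td)) ** 2)
                   for td in tans] for d in depths])
i, j = np.unravel_index(np.argmin(resid), resid.shape)
print(f"best fit: depth = {depths[i]:.0f} m, tan(delta) = {tans[j]:.1e}")
```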
A quasichemical approach for protein-cluster free energies in dilute solution
NASA Astrophysics Data System (ADS)
Young, Teresa M.; Roberts, Christopher J.
2007-10-01
Reversible formation of protein oligomers or small clusters is a key step in processes such as protein polymerization, fibril formation, and protein phase separation from dilute solution. A straightforward, statistical mechanical approach to accurately calculate cluster free energies in solution is presented using a cell-based, quasichemical (QC) approximation for the partition function of proteins in an implicit solvent. The inputs to the model are the protein potential of mean force (PMF) and the corresponding subcell degeneracies up to relatively low particle densities. The approach is tested using simple two- and three-dimensional lattice models in which proteins interact with either isotropic or anisotropic nearest-neighbor attractions. Comparison with direct Monte Carlo simulation shows that cluster probabilities and free energies of oligomer formation (ΔGi⁰) are quantitatively predicted by the QC approach for protein volume fractions of ~10⁻² (weight/volume concentration ~10 g l⁻¹) and below. For small clusters, ΔGi⁰ depends weakly on the strength of short-ranged attractive interactions for most experimentally relevant values of the normalized osmotic second virial coefficient (b2*). For larger clusters (i ≫ 2), there is a small but non-negligible b2* dependence. The results suggest that nonspecific, hydrophobic attractions may not significantly stabilize prenuclei in processes such as non-native aggregation. Biased Monte Carlo methods are shown to accurately provide subcell degeneracies that are intractable to obtain analytically or by direct enumeration, and so offer a means to generalize the approach to mixtures and proteins with more complex PMFs.
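At the level of cluster statistics, the oligomer formation free energies ΔGi⁰ discussed above reduce to a law-of-mass-action expression in the cluster number densities. A hedged sketch with invented counts (densities in lattice-site units, so the standard state is one particle per site):

```python
import numpy as np

kT = 1.0
V = 1.0e6        # number of lattice sites / subcells (assumed)
# synthetic average cluster counts per configuration, standing in for
# direct MC or QC-model output: monomers, dimers, trimers, tetramers
N = np.array([980.0, 45.0, 6.0, 1.2])
rho = N / V      # cluster number densities per site

# law-of-mass-action free energy of forming an i-mer from i monomers:
# dG_i = -kT * ln( rho_i / rho_1**i )
for i, r in enumerate(rho[1:], start=2):
    dG = -kT * np.log(r / rho[0] ** i)
    print(f"i = {i}: dG_i = {dG:.2f} kT")
```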
Drag effects and vortex states in binary superfluids in optical lattices
NASA Astrophysics Data System (ADS)
Meyerovich, Alexander; Kuklov, Anatoly
2005-03-01
Drag effects in two-condensate superfluids (A and B) in optical lattices are explored in the strongly interacting limit. Mutual drag changes the circulation quanta of vortices depending on the component concentration and interaction. This is a lattice analog of ³He-HeII mixtures, in which the drag, proportional to the difference between bare and effective masses of quasiparticles, causes pressure-driven transitions in vortex charges [1]. The vortex binding in the hard-core boson limit relies, in contrast to the soft-core case studied in Monte Carlo simulations [2], on vacancy-assisted tunneling. A model lattice for the study of such effects is introduced. Variational and Monte Carlo calculations for the system, in which the tunneling for component A depends on the concentration of B, show the possibility of formation of a quasi-molecular condensate ABm in addition to the condensates of A and B. A strong drag, leading to composite vortices with multiple quanta, also becomes possible. The work is supported by NSF grants DMR-0077266 and ITR-405460001 and PSC-CUNY-665560035. 1. A. E. Meyerovich, Phys. Rev. A 68, 05162 (2003); Sov. Phys.-JETP 60, 41 (1984) 2. A. Kuklov, N. Prokof'ev, and B. Svistunov, Phys. Rev. Lett. 92, 030403 (2004)
Miksys, N; Xu, C; Beaulieu, L; Thomson, R M
2015-08-07
This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide comparable mitigation of artifacts compared with the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue-type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than those from prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies for various permanent implant brachytherapy treatments.
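Two of the image-based MAR techniques above have natural array-level analogues. The sketch below is a hedged illustration, not the authors' implementation: the CT volume, threshold, and fill values are invented, and scipy's generic 3D median filter stands in for the paper's tuned version.

```python
import numpy as np
from scipy.ndimage import median_filter

def simple_threshold_replacement(ct, bright=3000.0, fill=42.0):
    """STR-style correction: voxels above a CT-number threshold (bright
    seed artifacts) are replaced with a soft-tissue value; low-CT-number
    streaks are left untouched, matching the limitation noted above."""
    out = ct.copy()
    out[out > bright] = fill
    return out

rng = np.random.default_rng(4)
ct = rng.normal(42.0, 20.0, size=(32, 64, 64))   # toy post-implant volume (HU)
ct[16, 30:34, 30:34] = 8000.0                    # a bright "seed" artifact

str_img = simple_threshold_replacement(ct)
med_img = median_filter(ct, size=3)              # 3D median-filter MAR
print(str_img.max(), med_img.max())              # bright spot removed by both
```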
Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
Cruz-Marcelo, Alejandro; Ensor, Katherine B; Rosner, Gary L
2011-06-01
The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material.
Martin-Calvo, Ana; Van der Perre, Stijn; Claessens, Benjamin; Calero, Sofia; Denayer, Joeri F M
2018-04-18
The vapor phase adsorption of butanol from ABE fermentation at the head space of the fermenter is an interesting route for the efficient recovery of biobutanol. The presence of gases such as carbon dioxide that are produced during the fermentation process causes a stripping of valuable compounds from the aqueous into the vapor phase. This work studies the effect of the presence of carbon dioxide on the adsorption of butanol at a molecular level. With this aim in mind, Monte Carlo simulations were employed to study the adsorption of mixtures containing carbon dioxide, butanol and ethanol. Molecular models for butanol and ethanol that reproduce experimental properties of the molecules, such as polarity, vapor-liquid coexistence or liquid density, have been developed. Pure component isotherms and heats of adsorption have been computed and compared to experimental data to check the accuracy of the interaction parameters. Adsorption of butanol/ethanol mixtures has been studied in the absence and presence of CO2 on two representative materials, a pure silica LTA zeolite and the hydrophobic metal-organic framework ZIF-8. To get a better understanding of the molecular mechanism that governs the adsorption of the targeted mixture in the selected materials, the distribution of the molecules inside the structures was analyzed. The combination of these features provides a deeper understanding of the process and helps to identify the role of carbon dioxide in the butanol purification process.
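A hedged sketch of the grand canonical Monte Carlo move structure underlying such adsorption studies, with the standard insertion/deletion acceptance rules (thermal wavelength set to one). The guest energy here is drawn from a stand-in distribution instead of a real framework force field, which sacrifices strict detailed balance for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, mu, V = 1.0, -3.0, 1000.0      # reduced units (assumed)
N = 0                                # current number of adsorbed molecules

def guest_energy():
    """Stand-in for the adsorbate-framework interaction energy."""
    return rng.normal(-1.0, 0.5)

for _ in range(200_000):
    if rng.random() < 0.5:           # insertion attempt
        u = guest_energy()
        if rng.random() < min(1.0, V / (N + 1) * np.exp(beta * (mu - u))):
            N += 1
    elif N > 0:                      # deletion attempt
        u = guest_energy()           # energy of the removed molecule (toy)
        if rng.random() < min(1.0, N / V * np.exp(-beta * (mu - u))):
            N -= 1
print("final loading:", N)
```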
Thermodynamics of hydrogen-helium mixtures at high pressure and finite temperature
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1972-01-01
A technique is reviewed for calculating thermodynamic quantities for mixtures of light elements at high pressure, in the metallic state. Ensemble averages are calculated with Monte Carlo techniques and periodic boundary conditions. Interparticle potentials are assumed to be coulombic, screened by the electrons in dielectric function theory. This method is quantitatively accurate for alloys at pressures above about 10 Mbar. An alloy of equal parts hydrogen and helium by mass appears to remain liquid and mixed for temperatures above about 3000 K, at pressures of about 15 Mbar. The additive volume law is satisfied to within about 10%, but the Gruneisen equation of state gives poor results. A calculation at 1300 K shows evidence of a hydrogen-helium phase separation.
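A minimal sketch of the simulation elements named in this abstract, Metropolis Monte Carlo ensemble averaging with periodic boundary conditions and a screened pair potential, under invented reduced units; the real calculations used dielectric-function screening rather than this simple exponential (Yukawa-type) form.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, beta, kappa = 64, 10.0, 1.0, 1.2      # particles, box length, 1/kT, screening
pos = rng.uniform(0.0, L, (N, 3))

def energy_of(i, p):
    """Energy of particle i at position p with all others, using a screened
    Coulomb pair potential and the minimum-image convention for PBC."""
    d = pos - p
    d -= L * np.round(d / L)                # periodic boundary conditions
    r = np.sqrt((d ** 2).sum(axis=1))
    r[i] = np.inf                           # exclude self-interaction
    return np.sum(np.exp(-kappa * r) / r)

for step in range(20_000):                  # Metropolis sampling
    i = rng.integers(N)
    trial = (pos[i] + rng.normal(0.0, 0.2, 3)) % L
    dE = energy_of(i, trial) - energy_of(i, pos[i])
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        pos[i] = trial
```

Ensemble averages of any observable would be accumulated over the accepted configurations after equilibration.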
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iyetomi, H.; Ogata, S.; Ichimaru, S.
1989-07-01
Equations of state for dense carbon-oxygen (C-O) binary-ionic mixtures (BIMs) appropriate to the interiors of white dwarfs are investigated through Monte Carlo simulations, by solution of relevant integral equations, and by variational calculations in the density-functional formalism. It is thereby shown that the internal energies of the C-O BIM solids and fluids both obey precisely the linear mixing formulas. We then present an accurate calculation of the phase diagram associated with freezing transitions in such BIM materials, resulting in a novel prediction of an azeotropic diagram. Discontinuities of the mass density across the azeotropic phase boundaries are evaluated numerically for application to a study of white-dwarf evolution.
Sarno, Antonio; Mettivier, Giovanni; Tucciariello, Raffaele M; Bliznakova, Kristina; Boone, John M; Sechopoulos, Ioannis; Di Lillo, Francesca; Russo, Paolo
2018-06-07
In cone-beam computed tomography dedicated to the breast (BCT), the mean glandular dose (MGD) is the dose metric of reference, evaluated from the measured air kerma by means of normalized glandular dose coefficients (DgNCT). This work aimed at computing, for a simple breast model, a set of DgNCT values for monoenergetic and polyenergetic X-ray beams, and at validating the results against those for patient-specific digital phantoms from BCT scans. We developed a Monte Carlo code for calculation of monoenergetic DgNCT coefficients (energy range 4.25-82.25 keV). The pendant breast was modelled as a cylinder of a homogeneous mixture of adipose and glandular tissue with glandular fractions by mass of 0.1%, 14.3%, 25%, 50% or 100%, enveloped by a 1.45 mm-thick skin layer. The breast diameter ranged between 8 cm and 18 cm. Then, polyenergetic DgNCT coefficients were analytically derived for 49-kVp W-anode spectra (half value layer 1.25-1.50 mm Al), as in a commercial BCT scanner. We compared the homogeneous models to 20 digital phantoms produced from classified 3D breast images. Polyenergetic DgNCT values were 13% lower than the most recently published data. The comparison against patient-specific breast phantoms showed that the homogeneous cylindrical model leads to a DgNCT percentage difference between -15% and +27%, with an average overestimation of 8%. A dataset of monoenergetic and polyenergetic DgNCT coefficients for BCT was provided. Patient-specific breast models showed a different volume distribution of glandular dose and determined a DgNCT 8% lower, on average, than the homogeneous breast model.
Allahham, Ayman; Stewart, Peter J; Das, Shyamal C
2013-11-30
Influence of ternary, poorly water-soluble components on the agglomerate strength of cohesive indomethacin mixtures during dissolution was studied to explore the relationship between agglomerate strength and extent of de-agglomeration and dissolution of indomethacin (Ind). Dissolution profiles of Ind from 20% Ind-lactose binary mixtures, and ternary mixtures containing additional dibasic calcium phosphate (1% or 10%; DCP), calcium sulphate (10%) and talc (10%) were determined. Agglomerate strength distributions were estimated by Monte Carlo simulation of particle size, work of cohesion and packing fraction distributions. The agglomerate strength of Ind decreased from 1.19 MPa for the binary Ind mixture to 0.84 MPa for 1DCP:20Ind mixture and to 0.42 MPa for 1DCP:2Ind mixture. Both extent of de-agglomeration, demonstrated by the concentration of the dispersed indomethacin distribution, and extent of dispersion, demonstrated by the particle size of the dispersed indomethacin, were in descending order of 1DCP:2Ind>1DCP:20Ind>binary Ind. The addition of calcium sulphate dihydrate and talc also reduced the agglomerate strength and improved de-agglomeration and dispersion of indomethacin. While not definitively causal, the improved de-agglomeration and dispersion of a poorly water soluble drug by poorly water soluble components was related to the agglomerate strength of the cohesive matrix during dissolution.
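A hedged sketch of the Monte Carlo estimation of an agglomerate strength distribution: particle size, work of cohesion, and packing fraction are sampled from assumed distributions and propagated through a Kendall-type strength expression. The distributions, constants, and the particular strength formula are illustrative choices, not the paper's fitted inputs.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
# assumed input distributions standing in for the measured ones
d   = rng.lognormal(np.log(5e-6), 0.4, n)           # particle size (m)
Wc  = rng.lognormal(np.log(0.05), 0.3, n)           # work of cohesion (J/m2)
phi = np.clip(rng.normal(0.5, 0.05, n), 0.3, 0.7)   # packing fraction

# Kendall-type tensile strength ~ 15.6 * phi^4 * Wc / d (illustrative form)
sigma_T = 15.6 * phi ** 4 * Wc / d
print(f"median agglomerate strength: {np.median(sigma_T) / 1e3:.1f} kPa")
```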
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
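The noise model named above mixes signal-dependent Poisson shot noise with additive Gaussian read noise; the sketch below applies such a mixture to a stand-in image. The peak photon count and read-noise level are assumed values, not the paper's simulation settings.

```python
import numpy as np

def add_poisson_gaussian(img, peak_photons=1000.0, read_noise=2.0, rng=None):
    """Apply a Poisson-Gaussian mixture noise model: photon shot noise
    (signal dependent) plus zero-mean Gaussian read noise, both expressed
    in normalized image units."""
    rng = rng or np.random.default_rng()
    photons = rng.poisson(img * peak_photons)            # shot noise
    return photons / peak_photons + rng.normal(
        0.0, read_noise / peak_photons, img.shape)       # read noise

clean = np.random.default_rng(6).random((128, 128))      # stand-in frame in [0, 1]
noisy = add_poisson_gaussian(clean)
```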
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, by using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing from scratch complex mathematical models of the physical processes involved in drug release. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may allow avoiding time-consuming, trial-and-error-based regression procedures. Three bullet points highlight the customization of the procedure. •An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It encodes the model of the physical process in a simple but accurate way in the formula of the Monte Carlo Micro Step (MCS) time interval.•Given the experimentally observed curve of drug release, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the mode of MCS best fitting the observed data, and thus the observed release kinetics.•The software implementing the method is written in R, the free language most used in the bioinformatics community.
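A hedged sketch of the kind of Monte Carlo release heuristic described (the paper's software is in R; Python is used here): drug molecules perform unbiased jumps inside a 1D matrix, one Monte Carlo Micro Step at a time, and the cumulative escape fraction traces the release curve. The matrix width, molecule count, and purely diffusive jump rule are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_mol, width, steps = 5000, 50, 20_000
x = rng.integers(0, width, n_mol)            # molecule positions in the matrix
alive = np.ones(n_mol, dtype=bool)           # True while still inside
released = np.zeros(steps)

for t in range(steps):                       # each iteration is one MCS
    x[alive] += rng.choice([-1, 1], alive.sum())
    escaped = alive & ((x < 0) | (x >= width))
    alive &= ~escaped
    released[t] = 1.0 - alive.mean()         # cumulative fraction released

print(f"fraction released: {released[-1]:.2f}")
```

The resulting curve covers the full release profile, not just the early portion that power-law and exponential fits reproduce well.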
Ice Cloud Backscatter Study and Comparison with CALIPSO and MODIS Satellite Data
NASA Technical Reports Server (NTRS)
Ding, Jiachen; Yang, Ping; Holz, Robert E.; Platnick, Steven; Meyer, Kerry G.; Vaughan, Mark A.; Hu, Yongxiang; King, Michael D.
2016-01-01
An invariant imbedding T-matrix (II-TM) method is used to calculate the single-scattering properties of 8-column aggregate ice crystals. The II-TM based backscatter values are compared with those calculated by the improved geometric-optics method (IGOM) to refine the backscattering properties of the ice cloud radiative model used in the MODIS Collection 6 cloud optical property product. The integrated attenuated backscatter-to-cloud optical depth (IAB-ICOD) relation is derived from simulations using a CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite) lidar simulator based on a Monte Carlo radiative transfer model. By comparing the simulation results and co-located CALIPSO and MODIS (Moderate Resolution Imaging Spectroradiometer) observations, the non-uniform zonal distribution of ice clouds over ocean is characterized in terms of a mixture of smooth and rough ice particles. The percentage of the smooth particles is approximately 6 percent and 9 percent for tropical and mid-latitude ice clouds, respectively.
Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such a process as the prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and hidden Markov models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
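The sketch below generates one simple member of the family of non-exchangeable species sampling sequences with latent Beta weights. It is a hedged illustration, not the authors' exact predictive rule: here an independent Beta variable is drawn at every step to set the new-species probability, and existing species are reinforced proportionally to their counts.

```python
import numpy as np

rng = np.random.default_rng(8)
a, b, n = 1.0, 4.0, 500
counts = []                               # counts[k] = size of species k

for i in range(n):
    w = rng.beta(a, b)                    # latent Beta reinforcement at step i
    if not counts or rng.random() < w:    # open a new species
        counts.append(1)
    else:                                 # reinforce an existing species
        p = np.array(counts) / sum(counts)
        counts[rng.choice(len(counts), p=p)] += 1

print(f"{len(counts)} species among {n} observations")
```

Because the Beta weight is redrawn independently at each step, relabeling the observations changes the joint law, which is exactly the departure from exchangeability the abstract emphasizes.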
Karra, Jagadeswara R; Walton, Krista S
2008-08-19
Atomistic grand canonical Monte Carlo simulations were performed in this work to investigate the role of open copper sites of Cu-BTC in affecting the separation of carbon monoxide from binary mixtures containing methane, nitrogen, or hydrogen. Mixtures containing 5%, 50%, or 95% CO were examined. The simulations show that electrostatic interactions between the CO dipole and the partial charges on the metal-organic framework (MOF) atoms dominate the adsorption mechanism. The binary simulations show that Cu-BTC is quite selective for CO over hydrogen and nitrogen for all three mixture compositions at 298 K. The removal of CO from a 5% mixture with methane is slightly enhanced by the electrostatic interactions of CO with the copper sites. However, the pore space of Cu-BTC is large enough to accommodate both molecules at their pure-component loadings, and in general, Cu-BTC exhibits no significant selectivity for CO over methane for the equimolar and 95% mixtures. On the basis of the pure-component and low-concentration behavior of CO, the results indicate that MOFs with open metal sites have the potential for enhancing adsorption separations of molecules of differing polarities, but the pore size relative to the sorbate size will also play a significant role.
Efficient Application of Continuous Fractional Component Monte Carlo in the Reaction Ensemble
2017-01-01
A new formulation of the Reaction Ensemble Monte Carlo technique (RxMC) combined with the Continuous Fractional Component Monte Carlo method is presented. This method is denoted by serial Rx/CFC. The key ingredient is that fractional molecules of either reactants or reaction products are present and that chemical reactions always involve fractional molecules. Serial Rx/CFC has the following advantages compared to other approaches: (1) One directly obtains chemical potentials of all reactants and reaction products. Obtained chemical potentials can be used directly as an independent check to ensure that chemical equilibrium is achieved. (2) Independent biasing is applied to the fractional molecules of reactants and reaction products. Therefore, the efficiency of the algorithm is significantly increased, compared to the other approaches. (3) Changes in the maximum scaling parameter of intermolecular interactions can be chosen differently for reactants and reaction products. (4) The number of fractional molecules is reduced. As a proof of principle, our method is tested for Lennard-Jones systems at various pressures and for various chemical reactions. Excellent agreement was found both for average densities and equilibrium mixture compositions computed using serial Rx/CFC, RxMC/CFCMC previously introduced by Rosch and Maginn (Journal of Chemical Theory and Computation, 2011, 7, 269–279), and the conventional RxMC approach. The serial Rx/CFC approach is also tested for the reaction of ammonia synthesis at various temperatures and pressures. Excellent agreement was found between results obtained from serial Rx/CFC, experimental results from literature, and thermodynamic modeling using the Peng–Robinson equation of state. The efficiency of reaction trial moves is improved by a factor of 2 to 3 (depending on the system) compared to the RxMC/CFCMC formulation by Rosch and Maginn. PMID:28737933
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized, leading to efficiently solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully probabilistic global tomography model of the Earth's crust and mantle, and second, inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10⁴ to 10⁵ individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Apparatus and method for tracking a molecule or particle in three dimensions
Werner, James H [Los Alamos, NM; Goodwin, Peter M [Los Alamos, NM; Lessard, Guillaume [Santa Fe, NM
2009-03-03
An apparatus and method were used to track the movement of fluorescent particles in three dimensions. Control software was used with the apparatus to implement a tracking algorithm for tracking the motion of the individual particles in glycerol/water mixtures. Monte Carlo simulations suggest that the tracking algorithms in combination with the apparatus may be used for tracking the motion of single fluorescent or fluorescently labeled biomolecules in three dimensions.
Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál
2017-07-27
The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal mixing, as both the energy and entropy of mixing turn out to be rather close to the ideal term in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered. Thus, this model results in negative, while the other ones in positive, energy of mixing values in combination with all three water models considered. Experimental data support this latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
Critical point and phase behavior of the pure fluid and a Lennard-Jones mixture
NASA Astrophysics Data System (ADS)
Potoff, Jeffrey J.; Panagiotopoulos, Athanassios Z.
1998-12-01
Monte Carlo simulations in the grand canonical ensemble were used to obtain liquid-vapor coexistence curves and critical points of the pure fluid and a binary mixture of Lennard-Jones particles. Critical parameters were obtained from mixed-field finite-size scaling analysis and subcritical coexistence data from histogram reweighting methods. The critical parameters of the untruncated Lennard-Jones potential were obtained as Tc*=1.3120±0.0007, ρc*=0.316±0.001 and pc*=0.1279±0.0006. Our results for the critical temperature and pressure are not in agreement with the recent study of Caillol [J. Chem. Phys. 109, 4885 (1998)] on a four-dimensional hypersphere. Mixture parameters were ɛ1=2ɛ2 and σ1=σ2, with Lorentz-Berthelot combining rules for the unlike-pair interactions. We determined the critical point at T*=1.0 and pressure-composition diagrams at three temperatures. Our results have much smaller statistical uncertainties relative to comparable Gibbs ensemble simulations.
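A hedged illustration of the histogram reweighting step used for the subcritical coexistence data above: grand canonical samples collected at one (β, μ) state point are reweighted to a nearby one. The synthetic (N, E) samples stand in for simulation output; the weight follows the grand canonical measure exp(βμN − βE).

```python
import numpy as np

rng = np.random.default_rng(9)
# synthetic grand canonical samples collected at (beta0, mu0)
Ns = rng.poisson(300, 200_000).astype(float)     # particle numbers
Es = rng.normal(-2.0 * Ns, 10.0)                 # configurational energies

def reweighted_mean_N(beta0, mu0, beta1, mu1):
    """Reweight samples from (beta0, mu0) to (beta1, mu1):
    log w = -(beta1 - beta0) * E + (beta1*mu1 - beta0*mu0) * N."""
    logw = -(beta1 - beta0) * Es + (beta1 * mu1 - beta0 * mu0) * Ns
    logw -= logw.max()                           # numerical stabilisation
    w = np.exp(logw)
    return np.sum(w * Ns) / np.sum(w)

print(reweighted_mean_N(1.0, -2.0, 1.02, -2.0))
```

In practice the reweighting is reliable only for state points close to the run conditions, which is why coexistence studies combine histograms from several runs.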
Composition of precipitation in remote areas of the world
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galloway, J.N.; Likens, G.E.; Keene, W.C.
1982-10-20
The Global Precipitation Chemistry Project collects precipitation by event to determine composition and processes controlling it in five remote areas. Compositions (excluding seasalt) at St. Georges, Bermuda, were primarily controlled by anthropogenic processes; compositions and acidities at San Carlos, Venezuela, Katherine, Australia, Poker Flat, Alaska, and Amsterdam Island were controlled by unknown mixtures of natural or anthropogenic processes. Precipitation was acidic; average volume-weighted pH values were 4.8 for Bermuda; 5.0, Alaska; 4.9, Amsterdam Island; 4.8, Australia; 4.8, Venezuela. Acidities at Bermuda and Alaska were from long-range transport of sulfate aerosol; at Venezuela, Australia, and Amsterdam Island, from mixtures of weak organic and strong mineral acids, primarily H₂SO₄. Relative proportions of weak to strong acids were largest at Venezuela and lowest at Amsterdam Island. Weak and strong acids were from mixtures of natural and anthropogenic processes. Once contributions from human activities were removed, the lower limit of natural contributions was probably ≥ pH 5.
Monte Carlo Calculations of Polarized Microwave Radiation Emerging from Cloud Structures
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Roberti, Laura
1998-01-01
The last decade has seen tremendous growth in cloud dynamical and microphysical models that are able to simulate storms and storm systems with very high spatial resolution, typically of the order of a few kilometers. The fairly realistic distributions of cloud and hydrometeor properties that these models generate has in turn led to a renewed interest in the three-dimensional microwave radiative transfer modeling needed to understand the effect of cloud and rainfall inhomogeneities upon microwave observations. Monte Carlo methods, and particularly backwards Monte Carlo methods have shown themselves to be very desirable due to the quick convergence of the solutions. Unfortunately, backwards Monte Carlo methods are not well suited to treat polarized radiation. This study reviews the existing Monte Carlo methods and presents a new polarized Monte Carlo radiative transfer code. The code is based on a forward scheme but uses aliasing techniques to keep the computational requirements equivalent to the backwards solution. Radiative transfer computations have been performed using a microphysical-dynamical cloud model and the results are presented together with the algorithm description.
Non-additive simple potentials for pre-programmed self-assembly
NASA Astrophysics Data System (ADS)
Mendoza, Carlos
2015-03-01
A major goal in nanoscience and nanotechnology is the self-assembly of any desired complex structure with a system of particles interacting through simple potentials. To achieve this objective, intense experimental and theoretical efforts are currently concentrated on the development of so-called "patchy" particles. Here we follow a completely different approach and introduce a very accessible model to produce a large variety of pre-programmed two-dimensional (2D) complex structures. Our model consists of a binary mixture of particles that interact through isotropic interactions and that is able to self-assemble into targeted lattices by the appropriate choice of a small number of geometrical parameters and interaction strengths. We study the system using Monte Carlo computer simulations and, despite its simplicity, we are able to self-assemble potentially useful structures such as chains, stripes, Kagomé, twisted Kagomé, honeycomb, square, Archimedean and quasicrystalline tilings. Our model is designed such that it may be implemented using discotic particles or, alternatively, using exclusively spherical particles interacting isotropically. Thus, it represents a promising strategy for bottom-up nano-fabrication. Partial Financial Support: DGAPA IN-110613.
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2017-05-30
Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions.
Model-based Clustering of Categorical Time Series with Multinomial Logit Classification
NASA Astrophysics Data System (ADS)
Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea
2010-09-01
A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
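A hedged sketch of the classification step at the heart of Markov chain clustering: each categorical series is scored under cluster-specific transition matrices and assigned posterior group probabilities. The matrices, weights, and toy sequence are invented; in the full Gibbs sampler these posteriors drive the draws of the latent group indicators, and the multinomial logit extension would replace the fixed weights with covariate-dependent ones.

```python
import numpy as np

def seq_loglik(seq, P):
    """Log-likelihood of a categorical time series under a first-order,
    time-homogeneous Markov chain with transition matrix P."""
    return sum(np.log(P[i, j]) for i, j in zip(seq[:-1], seq[1:]))

# two assumed cluster-specific transition matrices over 3 states
P1 = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
P2 = np.array([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3], [0.3, 0.3, 0.4]])
eta = np.array([0.5, 0.5])                 # prior group weights

seq = [0, 0, 1, 1, 1, 2, 2, 0, 0, 0]
logp = np.log(eta) + np.array([seq_loglik(seq, P) for P in (P1, P2)])
post = np.exp(logp - logp.max())
post /= post.sum()
print("posterior group probabilities:", post)
```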
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
Directional Dark Matter Detector Prototype (Time Projection Chamber)
NASA Astrophysics Data System (ADS)
Oliver-Mallory, Kelsey; Garcia-Sciveres, Maurice; Kadyk, John; Lopex-Thibodeaux, Mayra
2013-04-01
The time projection chamber is a mature technology that has emerged as a promising candidate for the directional detection of the WIMP particle. In order to utilize this technology in WIMP detection, the operational parameters must be chosen in the non-ideal regime. A prototype WIMP detector with a 10 cm field cage, double GEM amplification, and ATLAS FEI3 pixel chip readout was constructed for the purpose of investigating effects of varying gas pressure in different gas mixtures. The rms radii of ionization clusters of photoelectrons caused by X-rays from a Fe-55 source were measured for several gas pressures between 760 torr and 99 torr in Ar(70)/CO2(30), CF4, He(80)/Isobutane(20), and He(80)/CF4(20) mixtures. Average radii were determined from distributions of the data for each gas mixture and pressure, and revealed a negative correlation between pressure and radius in Ar(70)/CO2(30) and He(80)/Isobutane(20) mixtures. Investigation of the pressure-radius measurements is in progress using distributions of photoelectron and Auger electron practical ranges (Univ. of Pisa) and diffusion, using the Garfield Monte Carlo program.
Phase transitions in four-dimensional binary hard hypersphere mixtures
NASA Astrophysics Data System (ADS)
Bishop, Marvin; Whitlock, Paula A.
2013-02-01
Previous Monte Carlo investigations of binary hard hyperspheres in four-dimensional mixtures are extended to higher densities where the systems may solidify. The ratios of the diameters of the hyperspheres examined were 0.4, 0.5, and 0.6. Only the 0.4 system shows a clear two phase, solid-liquid transition and the larger component solidifies into a D4 crystal state. Its pair correlation function agrees with that of a one component fluid at an appropriately scaled density. The 0.5 systems exhibit states that are a mix of D4 and A4 regions. The 0.6 systems behave similarly to a jammed state rather than solidifying into a crystal. No demixing into two distinct fluid phases was observed for any of the simulations.
NASA Astrophysics Data System (ADS)
Drake, Jeremy J.; Ercolano, Barbara
2008-08-01
Monte Carlo calculations of the O Kα line fluoresced by coronal X-rays and emitted just above the temperature minimum region of the solar atmosphere have been employed to investigate the use of this feature as an abundance diagnostic. While they are quite weak, we estimate line equivalent widths in the range 0.02-0.2 Å, depending on the X-ray plasma temperature. The line remains essentially uncontaminated by blends for coronal temperatures T ≤ 3 × 10⁶ K and should be quite observable, with a flux ≳2 photons s⁻¹ arcmin⁻². Model calculations for solar chemical mixtures with an O abundance adjusted up and down by a factor of 2 indicate 35%-60% changes in O Kα line equivalent width, providing a potentially useful O abundance diagnostic. Sensitivity of equivalent width to differences between recently recommended chemical compositions with "high" and "low" complements of the CNO trio important for interpreting helioseismological observations is less acute, amounting to 20%-26% at coronal temperatures T ≤ 2 × 10⁶ K. While still feasible for discriminating between these two mixtures, uncertainties in measured line equivalent widths and in the models used for interpretation would need to be significantly less than 20%. Provided a sensitive X-ray spectrometer with resolving power ≥1000 and suitably well-behaved instrumental profile can be built, X-ray fluorescence presents a viable means for resolving the solar "oxygen crisis."
Strategy for good dispersion of well-defined tetrapods in semiconducting polymer matrices.
Lim, Jaehoon; Borg, Lisa zur; Dolezel, Stefan; Schmid, Friederike; Char, Kookheon; Zentel, Rudolf
2014-10-01
Morphology and dispersion control in inorganic/organic hybrid systems is studied for systems consisting of monodisperse CdSe tetrapods (TPs) with grafted semiconducting block copolymers in excess polymers of the same type. Tetrapod arm length and the amount of polymer loading are varied in order to find the ideal morphology for hybrid solar cells. Additionally, polymers without anchor groups are mixed with the TPs to study the effect of such anchor groups on the hybrid morphology. A numerical model is developed, and Monte Carlo simulations are performed to study the basis of compatibility or dispersibility of TPs in polymer matrices. The simulations show that bare TPs tend to form clusters in the matrix of excess polymers. The clustering is significantly reduced after grafting polymer chains to the TPs, which is confirmed experimentally. Transmission electron microscopy reveals that the block copolymer-TP mixtures ("hybrids") show much better film qualities and TP distributions within the films when compared with the homopolymer-TP mixtures ("blends"), which exhibit massive aggregations and cracks in the films. This grafting-to approach for the modification of TPs significantly improves the dispersion of the TPs in matrices of "excess" polymers up to an arm length of 100 nm.
NASA Astrophysics Data System (ADS)
Naine, Tarun Bharath; Gundawar, Manoj Kumar
2017-09-01
We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution, and because the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the solution for the thermal wave propagation rate in a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulations. However, such simulations, which are based on Monte-Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained by a discrete probability between two limits show excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. This approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations, as the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.
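A hedged numerical companion to the approach described: gaps between neighboring cells are drawn from a gamma distribution with a given shaping parameter, an assumed ignition-delay law converts each gap into a hop time, and the mean front rate follows as total distance over total time. The delay law and its constants are invented stand-ins for the conduction physics.

```python
import numpy as np

rng = np.random.default_rng(10)
shape, mean_gap = 2.0, 1.0                 # gamma shaping parameter (varied in the paper)
gaps = rng.gamma(shape, mean_gap / shape, size=100_000)

# assumed ignition-delay law: delay grows exponentially with the gap,
# a stand-in for conduction between neighboring reaction cells
t_ign = 0.1 * np.exp(2.0 * gaps)

rate = gaps.sum() / t_ign.sum()            # mean front propagation rate
print(f"front rate ~ {rate:.3f}")
```

Because the delay grows exponentially with the gap, the rare large gaps in the tail of the gamma distribution dominate the total time, which is why the shaping parameter controls the burning limit.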
Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D
2010-12-01
To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform were confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having appearances of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that can arise from series computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
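A hedged sketch of the multiple-chains strategy: independent Metropolis chains targeting the same toy posterior run in separate processes, and their post-burn-in draws are pooled. The standard-normal target and chain settings are placeholders for the high-dimensional genomic models discussed.

```python
import numpy as np
from multiprocessing import Pool

def run_chain(seed, n=50_000):
    """One independent random-walk Metropolis chain targeting N(0, 1)."""
    rng = np.random.default_rng(seed)
    x, out = 0.0, np.empty(n)
    for i in range(n):
        prop = x + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < -0.5 * (prop ** 2 - x ** 2):
            x = prop
        out[i] = x
    return out

if __name__ == "__main__":
    with Pool(4) as pool:                       # one process per chain
        chains = pool.map(run_chain, [11, 22, 33, 44])
    draws = np.concatenate([c[10_000:] for c in chains])  # drop burn-in
    print(draws.mean(), draws.std())
```

Running several chains also enables convergence diagnostics that compare within-chain and between-chain variability, a practical benefit beyond the raw speedup.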
Zhang, Xia; Hu, Changqin
2017-09-08
Penicillins are typical of complex ionic samples, which are likely to contain a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development for cloxacillin. The structural, retention and UV characteristics of penicillins and their impurities were summarized and served as useful prior knowledge. Through quality risk assessment and screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and a response surface methodology (RSM) was used to achieve the optimal experimental parameters. A dual gradient elution was performed to change buffer pH, mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to give their probability of meeting the specifications of the critical quality attributes (CQAs). A Plackett-Burman design was performed to test the robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study to use an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples.
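A hedged sketch of the Monte Carlo evaluation of a design space: CPP settings are perturbed within their expected variation, propagated through fitted response-surface models, and the fraction of outcomes meeting the CQA specifications estimates the probability of success. The response models, coefficients, and specification limits here are invented placeholders, not the fitted cloxacillin models.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20_000
# CPPs perturbed around a candidate working point (assumed variation)
ph   = rng.normal(5.0, 0.05, n)            # buffer pH
frac = rng.normal(0.30, 0.01, n)           # organic-modifier fraction (MV)
temp = rng.normal(30.0, 0.5, n)            # column temperature (PV)

# assumed response-surface models for two CQAs, with residual error
resolution = (2.0 + 4.0 * (frac - 0.30) - 0.8 * (ph - 5.0)
              + 0.05 * (temp - 30.0) + rng.normal(0.0, 0.05, n))
tailing = 1.1 - 0.5 * (frac - 0.30) + rng.normal(0.0, 0.02, n)

ok = (resolution >= 1.5) & (tailing <= 1.5)
print(f"P(meeting CQA specifications) = {ok.mean():.3f}")
```

Repeating this calculation over a grid of working points maps out the region where the probability exceeds a chosen threshold, which is the design space reported.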
Vectorized Monte Carlo methods for reactor lattice analysis
NASA Technical Reports Server (NTRS)
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Analysis of Naval Ammunition Stock Positioning
2015-12-01
A Monte Carlo simulation model was developed to simulate the expected cost and delivery performance of consolidating ammunition stockpiles and positioning them at coastal Navy facilities, with site-to-site location probabilities determined by the simulation. Keywords: supply chain management, Monte Carlo simulation, risk, delivery performance, stock positioning.
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Lam, Adam; Brantley, Patrick; Malvagi, Fausto; Palmer, Todd; Zoia, Andrea
2018-01-01
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore-Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
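As a concrete illustration of the CLS idea, the following Python sketch estimates transmission through a one-dimensional (rod) binary Markov mixture by sampling the material chords on the fly from exponential distributions. It assumes purely absorbing materials, and the mean chord lengths, cross sections, and slab width are arbitrary illustrative values rather than the benchmark specifications.

    # On-the-fly Chord Length Sampling in a 1-D rod: interface positions are
    # generated during the walk from the Markov mean chord lengths.
    import numpy as np

    rng = np.random.default_rng(0)
    LAM = (1.0, 0.5)     # mean chord length of material 0 / 1 (illustrative)
    SIG = (0.2, 2.0)     # total cross section of material 0 / 1 (pure absorber)
    L = 5.0              # slab thickness

    def transmitted(n_hist=100_000):
        hits = 0
        p0 = LAM[0] / (LAM[0] + LAM[1])            # stationary volume fraction
        for _ in range(n_hist):
            x = 0.0
            mat = 0 if rng.uniform() < p0 else 1   # sample entry material
            chord = rng.exponential(LAM[mat])      # distance to next interface
            while True:
                flight = rng.exponential(1.0 / SIG[mat])  # distance to collision
                if flight < chord:                 # absorbed before interface
                    break
                x += chord
                if x >= L:                         # escaped: transmitted
                    hits += 1
                    break
                mat = 1 - mat                      # cross interface, switch material
                chord = rng.exponential(LAM[mat])
        return hits / n_hist

    print(f"CLS transmission estimate: {transmitted():.4f}")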
Computational study of ibuprofen removal from water by adsorption in realistic activated carbons.
Bahamon, Daniel; Carro, Leticia; Guri, Sonia; Vega, Lourdes F
2017-07-15
Molecular simulations using the Grand Canonical Monte Carlo (GCMC) method have been performed in order to obtain physical insights into how the interaction between ibuprofen (IBP) and activated carbons (ACs) in aqueous mixtures affects IBP removal from water by ACs. A nanoporous carbon model based on units of polyaromatic molecules with different numbers of rings, defects and polar-oxygenated sites is described. Individual effects of factors such as porous features and chemical heterogeneities in the adsorbents are investigated and quantified. Results are in good agreement with experimental adsorption data, highlighting the ability of GCMC simulation to describe the macroscopic adsorption performance in drug removal applications, while also providing additional insights into the IBP/water adsorption mechanism. The simulation results allow identification of the optimal type of activated carbon material for separating this pollutant in water treatment.
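For readers unfamiliar with GCMC, the method's defining moves are trial insertions and deletions of adsorbate molecules accepted with Metropolis-style probabilities. The sketch below gives the standard acceptance rules in generic form, with the de Broglie volume absorbed into the chemical potential; the symbols are generic and are not the force-field parameters of this study.

    # Standard GCMC insertion/deletion acceptance probabilities. V is the box
    # volume, beta = 1/(kT), mu the (configurational) chemical potential with
    # the de Broglie factor absorbed, and dU the potential-energy change of
    # the trial move (U_new - U_old).
    import numpy as np

    def acc_insert(N, V, beta, mu, dU):
        # trial insertion of one molecule: N -> N + 1
        return min(1.0, V / (N + 1) * np.exp(beta * (mu - dU)))

    def acc_delete(N, V, beta, mu, dU):
        # trial deletion of one molecule: N -> N - 1
        return min(1.0, N / V * np.exp(-beta * (mu + dU)))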
Bayesian Peak Picking for NMR Spectra
Cheng, Yichen; Gao, Xin; Liang, Faming
2013-01-01
Protein structure determination is a very important topic in structural genomics, helping us to understand a variety of biological functions such as protein-protein interactions, protein-DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method. PMID:24184964
NASA Astrophysics Data System (ADS)
Grosenick, Dirk; Cantow, Kathleen; Arakelyan, Karen; Wabnitz, Heidrun; Flemming, Bert; Skalweit, Angela; Ladwig, Mechthild; Macdonald, Rainer; Niendorf, Thoralf; Seeliger, Erdmann
2015-07-01
We have developed a hybrid approach to investigate the dynamics of perfusion and oxygenation in the kidney of rats under pathophysiologically relevant conditions. Our approach combines near-infrared spectroscopy to quantify hemoglobin concentration and oxygen saturation in the renal cortex, and an invasive probe method for measuring total renal blood flow by an ultrasonic probe, perfusion by laser-Doppler fluxmetry, and tissue oxygen tension via fluorescence quenching. Hemoglobin concentration and oxygen saturation were determined from experimental data by a Monte Carlo model. The hybrid approach was applied to investigate and compare temporal changes during several types of interventions such as arterial and venous occlusions, as well as hyperoxia, hypoxia and hypercapnia induced by different mixtures of the inspired gas. The approach was also applied to study the effects of the x-ray contrast medium iodixanol on the kidney.
Wulff, Jorg; Keil, Boris; Auvanis, Diyala; Heverhagen, Johannes T; Klose, Klaus Jochen; Zink, Klemens
2008-01-01
The present study investigates eye lens shielding of different compositions for use in computed tomography examinations. Measurements with thermoluminescent dosimeters and a simple cylindrical water-filled phantom were performed, as well as Monte Carlo simulations with an equivalent geometry. Besides conventional shielding made of bismuth-coated latex, a new shielding with a mixture of metallic components was analyzed. This new material leads to an increased dose reduction compared to the bismuth shielding. Measured and Monte Carlo simulated dose reductions are in good agreement and amount to 34% for the bismuth shielding and 46% for the new material. For the simulations the EGSnrc code system was used, and a new application, CTDOSPP, was developed to simulate the computed tomography examination. The investigations show that a satisfactory agreement between simulation and measurement with the chosen geometries of this study could only be achieved when transport of secondary electrons was accounted for in the simulation. The amount of radiation scattered by the protector through fluorescent photons was analyzed and is larger for the new material due to the smaller atomic number of its metallic components.
Monte Carlo modelling the dosimetric effects of electrode material on diamond detectors.
Baluti, Florentina; Deloar, Hossain M; Lansley, Stuart P; Meyer, Juergen
2015-03-01
Diamond detectors for radiation dosimetry were modelled using the EGSnrc Monte Carlo code to investigate the influence of electrode material and detector orientation on the absorbed dose. The small dimensions of the electrode/diamond/electrode detector structure required very thin voxels and the use of non-standard DOSXYZnrc Monte Carlo model parameters. The interface phenomena were investigated by simulating a 6 MV beam and detectors with different electrode materials, namely Al, Ag, Cu and Au, with thicknesses of 0.1 µm for the electrodes and 0.1 mm for the diamond, in both perpendicular and parallel detector orientations with regard to the incident beam. The smallest perturbations were observed for the parallel detector orientation and Al electrodes (Z = 13). In summary, the EGSnrc Monte Carlo code is well suited for modelling small detector geometries. The Monte Carlo model developed is a useful tool to investigate the dosimetric effects caused by different electrode materials. To minimise perturbations caused by the detector electrodes, it is recommended that the electrodes be made from a low-atomic-number material and placed parallel to the beam direction.
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.
Ovanesyan, Zaven; Fenley, Marcia O.; Guerrero-García, Guillermo Iván; Olvera de la Cruz, Mónica
2014-01-01
The ionic atmosphere around a nucleic acid regulates its stability in aqueous salt solutions. One major source of complexity in biological activities involving nucleic acids arises from the strong influence of the surrounding ions and water molecules on their structural and thermodynamic properties. Here, we implement a classical density functional theory for cylindrical polyelectrolytes embedded in aqueous electrolytes containing explicit (neutral hard sphere) water molecules at experimental solvent concentrations. Our approach allows us to include ion correlations as well as solvent and ion excluded volume effects for studying the structural and thermodynamic properties of highly charged cylindrical polyelectrolytes. Several models of size and charge asymmetric mixtures of aqueous electrolytes at physiological concentrations are studied. Our results are in good agreement with Monte Carlo simulations. Our numerical calculations display significant differences in the ion density profiles for the different aqueous electrolyte models studied. However, similar results regarding the excess number of ions adsorbed to the B-DNA molecule are predicted by our theoretical approach for different aqueous electrolyte models. These findings suggest that ion counting experimental data should not be used alone to validate the performance of aqueous DNA-electrolyte models. PMID:25494770
SABRINA - An interactive geometry modeler for MCNP (Monte Carlo Neutron Photon)
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.; Murphy, J.
SABRINA is an interactive three-dimensional geometry modeler developed to produce complicated models for the Los Alamos Monte Carlo Neutron Photon program MCNP. SABRINA produces line drawings and color-shaded drawings for a wide variety of interactive graphics terminals. It is used as a geometry preprocessor in model development and as a Monte Carlo particle-track postprocessor in the visualization of complicated particle transport problems. SABRINA is written in Fortran 77 and is based on the Los Alamos Common Graphics System, CGS. 5 refs., 2 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
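A hedged numerical sketch of the excluded-volume (here, excluded-area) approach for a 2D binary disk system is given below. It assumes the near-invariant critical total excluded area for overlapping disks (about 4.5, a literature value) carries over to the mixture through a composition-averaged excluded area; the radii and number fractions are arbitrary examples, not values from the article.

    # Excluded-area estimate of the percolation threshold for a 2-D binary
    # disk mixture, assuming n_c * <A_ex> ~ 4.5 as for monodisperse disks.
    import numpy as np

    B_C = 4.5                    # ~critical total excluded area per disk (2-D)
    r = np.array([1.0, 0.5])     # disk radii of the two species (illustrative)
    f = np.array([0.3, 0.7])     # number fractions (sum to 1)

    # composition-averaged excluded area <A_ex> = sum_ij f_i f_j pi (r_i + r_j)^2
    a_ex = sum(f[i] * f[j] * np.pi * (r[i] + r[j]) ** 2
               for i in range(2) for j in range(2))
    n_c = B_C / a_ex                                       # critical number density
    phi_c = 1.0 - np.exp(-n_c * np.pi * (f * r**2).sum())  # covered-area fraction
    print(f"estimated critical density {n_c:.3f}, area fraction {phi_c:.3f}")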
Monte Carlo simulations of kagome lattices with magnetic dipolar interactions
NASA Astrophysics Data System (ADS)
Plumer, Martin; Holden, Mark; Way, Andrew; Saika-Voivod, Ivan; Southern, Byron
Monte Carlo simulations of classical spins on the two-dimensional kagome lattice with only dipolar interactions are presented. In addition to revealing the sixfold-degenerate ground state, the nature of the finite-temperature phase transition to long-range magnetic order is discussed. Low-temperature states consisting of mixtures of degenerate ground-state configurations separated by domain walls can be explained as a result of competing exchange-like and shape-anisotropy-like terms in the dipolar coupling. Fluctuations between pairs of degenerate spin configurations are found to persist well into the ordered state as the temperature is lowered until locking in to a low-energy state. Results suggest that the system undergoes a continuous phase transition at T ≈ 0.43 in agreement with previous MC simulations but the nature of the ordering process differs. Preliminary results which extend this analysis to the 3D fcc ABC-stacked kagome systems will be presented.
NASA Technical Reports Server (NTRS)
Liechty, Derek S.; Burt, Jonathan M.
2016-01-01
There are many flow fields that span a wide range of length scales, where regions of both rarefied and continuum flow exist and neither direct simulation Monte Carlo (DSMC) nor computational fluid dynamics (CFD) provides the appropriate solution everywhere. Recently, a new viscous collision limited (VCL) DSMC technique was proposed to incorporate effects of physical diffusion into collision limiter calculations to make the low Knudsen number regime, normally limited to CFD, more tractable for an all-particle technique. This original work had been derived for a single species gas. The current work extends the VCL-DSMC technique to gases with multiple species. Similar derivations were performed to equate numerical and physical transport coefficients. However, a more rigorous treatment of determining the mixture viscosity is applied. In the original work, consideration was given to internal energy non-equilibrium, and this is also extended in the current work to chemical non-equilibrium.
Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model
NASA Astrophysics Data System (ADS)
Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.
2018-04-01
While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
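The Wolff cluster-flipping update at the heart of such simulations is compact enough to sketch. The Python version below grows a single cluster with bond probability 1 - exp(-2K) and flips it; the lattice size and coupling are illustrative, and none of the paper's precision machinery (53-bit random number streams, histogram reweighting, quadruple-precision analysis) is attempted here.

    # Minimal Wolff single-cluster update for the simple cubic Ising model.
    import numpy as np
    from collections import deque

    L, K = 16, 0.2216546             # lattice size, coupling near K_c
    rng = np.random.default_rng(1)
    spins = rng.choice([-1, 1], size=(L, L, L))
    p_add = 1.0 - np.exp(-2.0 * K)   # Wolff bond-activation probability

    def wolff_step():
        seed = tuple(rng.integers(0, L, size=3))
        s0 = spins[seed]
        cluster, queue = {seed}, deque([seed])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                nb = ((x+dx) % L, (y+dy) % L, (z+dz) % L)
                if nb not in cluster and spins[nb] == s0 and rng.uniform() < p_add:
                    cluster.add(nb)
                    queue.append(nb)
        for site in cluster:         # flip the whole cluster at once
            spins[site] *= -1
        return len(cluster)

    for sweep in range(100):
        wolff_step()
    print("magnetization per spin:", spins.mean())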
A modified Monte Carlo model for the ionospheric heating rates
NASA Technical Reports Server (NTRS)
Mayr, H. G.; Fontheim, E. G.; Robertson, S. C.
1972-01-01
A Monte Carlo method is adopted as a basis for the derivation of the photoelectron heat input into the ionospheric plasma. This approach is modified in an attempt to minimize the computation time. The heat input distributions are computed for arbitrarily small source elements that are spaced at distances corresponding to the photoelectron dissipation range. By means of a nonlinear interpolation procedure, their individual heating rate distributions are utilized to produce synthetic ones that fill the gaps between the Monte Carlo generated distributions. By varying these gaps and the corresponding number of Monte Carlo runs, the accuracy of the results is tested to verify the validity of this procedure. It is concluded that this model can reduce the computation time by more than a factor of three, thus improving the feasibility of including Monte Carlo calculations in self-consistent ionosphere models.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can shift from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
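One plausible way to realize such a launch scheme (a sketch under assumptions, not necessarily the authors' implementation) is to sample each photon's start point from the Gaussian profile at the lens plane and aim it at a point drawn from the diffraction-limited waist in the focal plane, so the bundle reproduces a Gaussian-optics envelope. All beam parameters below are illustrative.

    # Launching Monte Carlo photons so a focused bundle follows a
    # Gaussian-optics envelope with a diffraction-limited waist.
    import numpy as np

    rng = np.random.default_rng(2)
    wavelength = 0.8e-6       # 800 nm, an NIR wavelength (illustrative)
    w_lens = 1.0e-3           # 1/e^2 beam radius at the lens plane (m)
    f = 10.0e-3               # focal distance (m)
    w0 = wavelength * f / (np.pi * w_lens)   # diffraction-limited waist radius

    def launch(n):
        # Gaussian intensity exp(-2 r^2 / w^2) means each Cartesian
        # component is normal with standard deviation w / 2.
        xy0 = rng.normal(scale=w_lens / 2, size=(n, 2))  # lens-plane starts
        xyf = rng.normal(scale=w0 / 2, size=(n, 2))      # focal-plane targets
        d = np.column_stack([xyf - xy0, np.full(n, f)])
        d /= np.linalg.norm(d, axis=1, keepdims=True)    # unit directions
        return xy0, d

    pos, dirs = launch(5)
    print(pos, dirs, sep="\n")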
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
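The flavor of the comparison can be conveyed with a toy example: estimating a smooth expectation with plain Monte Carlo versus scrambled Sobol' quasi-Monte Carlo points mapped through the normal inverse CDF. The integrand below is illustrative and unrelated to the groundwater model; it assumes SciPy's scipy.stats.qmc module is available.

    # Plain Monte Carlo vs quasi-Monte Carlo (scrambled Sobol') for a
    # smooth one-dimensional expectation with a known exact value.
    import numpy as np
    from scipy.stats import norm, qmc

    f = lambda z: np.exp(0.5 * z)          # E[f(Z)] = exp(1/8) for Z ~ N(0,1)
    truth = np.exp(0.125)
    n = 2**12                              # power of two suits Sobol' points

    rng = np.random.default_rng(0)
    mc_est = f(rng.standard_normal(n)).mean()

    sobol = qmc.Sobol(d=1, scramble=True, seed=0)
    z = norm.ppf(sobol.random(n).ravel())  # map low-discrepancy points to N(0,1)
    qmc_est = f(z).mean()

    print(f"MC error  {abs(mc_est - truth):.2e}")
    print(f"QMC error {abs(qmc_est - truth):.2e}")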
An analytical model for enantioseparation process in capillary electrophoresis
NASA Astrophysics Data System (ADS)
Ranzuglia, G. A.; Manzi, S. J.; Gomez, M. R.; Belardinelli, R. E.; Pereyra, V. D.
2017-12-01
An analytical model to explain the mobilities of an enantiomer binary mixture in capillary electrophoresis (CE) experiments is proposed. The model consists of a set of kinetic equations describing the evolution of the populations of molecules involved in the enantioseparation process. These equations take into account the asymmetric driven migration of the enantiomer molecules, the chiral selector, and the temporary diastereomeric complexes, which are the products of the reversible reaction between the enantiomers and the chiral selector. The solution of these equations gives the spatial and temporal distribution of each species in the capillary, reproducing a typical electropherogram signal. The mobility, μ, of each species is obtained from the position of the maximum (main peak) of its respective distribution. Thereby, the apparent electrophoretic mobility difference, Δμ, as a function of the chiral selector concentration, [ C ] , can be measured. The behaviour of Δμ versus [ C ] is compared with the phenomenological model introduced by Wren and Rowe in J. Chromatography 1992, 603, 235. To test the analytical model, a capillary electrophoresis experiment for the enantiomeric separation of the (±)-chlorpheniramine β-cyclodextrin (β-CD) system is used. These data, as well as others obtained from the literature, are in close agreement with those obtained by the model. All these results are also corroborated by kinetic Monte Carlo simulation.
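The phenomenological Wren and Rowe expression that the kinetic model is compared against can be stated compactly: for free and complexed mobilities μf and μc and binding constants K1 and K2 of the two enantiomers, the apparent mobility difference at selector concentration c is Δμ(c) = c (K1 - K2)(μc - μf) / ((1 + K1 c)(1 + K2 c)), which passes through a maximum at an intermediate concentration. The parameter values in the sketch below are illustrative only.

    # Wren-Rowe apparent mobility difference as a function of chiral-selector
    # concentration; all parameter values are illustrative placeholders.
    import numpy as np

    def delta_mu(c, mu_f=20.0, mu_c=5.0, K1=120.0, K2=100.0):
        return c * (K1 - K2) * (mu_c - mu_f) / ((1 + K1 * c) * (1 + K2 * c))

    c = np.linspace(0.0, 0.05, 6)   # selector concentration (arbitrary units)
    print(delta_mu(c))              # magnitude peaks near c = 1/sqrt(K1*K2)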
NASA Astrophysics Data System (ADS)
Cordeiro, João M. M.; Soper, Alan K.
2013-01-01
The solvation of N-methylformamide (NMF) by dimethylsulfoxide (DMSO) in a 20% NMF/DMSO liquid mixture is investigated using a combination of neutron diffraction augmented with isotopic substitution and Monte Carlo simulations. The aim is to investigate the solute-solvent interactions and the structure of the solution. The results point to the formation of a hydrogen bond (H-bond) between the H bonded to the N of the amine group of NMF and the O of DMSO that is particularly strong when compared with other H-bonded liquids. Moreover, a second cooperative H-bond is identified with the S atom of DMSO. As a consequence of these H-bonds, molecules of NMF and DMSO are rather rigidly connected, establishing very stable dimers in the mixture and very well organized first and second solvation shells.
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-06-01
The simultaneous use of the Reaction Ensemble Monte Carlo (ReMC) method and the Adaptive Erpenbeck EOS (AE-EOS) method allows us to calculate directly the thermodynamic and chemical equilibrium of a mixture on the Hugoniot curve. The ReMC method allows the system to reach the chemical equilibrium of the detonation products, and the AE-EOS method constrains the system to satisfy the Hugoniot relation. Once the Crussard curve of the detonation products has been established, CJ state properties may be calculated. An additional NPT simulation is performed at CJ conditions in order to compute derivative thermodynamic quantities like Cp, Cv, the Grüneisen gamma, the sound velocity, and the compressibility factor. Several explosives have been studied, including PETN, nitromethane, tetranitromethane, and hexanitroethane. In these first simulations, solid carbon, where present, is treated using an EOS.
Performance of the ATLAS Transition Radiation Tracker in Run 1 of the LHC: tracker properties
Aaboud, M.; Aad, G.; Abbott, B.; ...
2017-05-03
The tracking performance parameters of the ATLAS Transition Radiation Tracker (TRT) as part of the ATLAS inner detector are described in this paper for different data-taking conditions in proton-proton, proton-lead and lead-lead collisions at the Large Hadron Collider (LHC). The performance is studied using data collected during the first period of LHC operation (Run 1) and is compared with Monte Carlo simulations. The performance of the TRT, operating with two different gas mixtures (xenon-based and argon-based), and its dependence on the TRT occupancy is presented. Furthermore, these studies show that the tracking performance of the TRT is similar for the two gas mixtures and that a significant contribution to the particle momentum resolution is made by the TRT up to high particle densities.
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media
NASA Astrophysics Data System (ADS)
Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique
2017-08-01
NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed-up of up to two orders of magnitude.
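To make concrete what such a kernel has to evaluate, the sketch below computes an isotropic structure factor with the Debye formula, S(q) = 1 + (2/N) * sum over pairs of sin(q r_ij)/(q r_ij), in plain NumPy. The random coordinates and q-grid are placeholders; NRMC itself performs this kind of evaluation for confined geometries on the GPU.

    # Debye-formula structure factor for an isotropic particle configuration,
    # the pairwise evaluation that dominates (N-)Reverse Monte Carlo cost.
    import numpy as np

    rng = np.random.default_rng(3)
    pos = rng.uniform(0.0, 10.0, size=(200, 3))   # toy particle coordinates
    q = np.linspace(0.5, 10.0, 100)               # wavenumber grid

    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    rij = d[np.triu_indices_from(d, k=1)]         # unique pair distances

    qr = np.outer(q, rij)
    s_q = 1.0 + (2.0 / len(pos)) * (np.sin(qr) / qr).sum(axis=1)
    print(s_q[:5])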
Population annealing simulations of a binary hard-sphere mixture
NASA Astrophysics Data System (ADS)
Callaham, Jared; Machta, Jonathan
2017-06-01
Population annealing is a sequential Monte Carlo scheme well suited to simulating equilibrium states of systems with rough free energy landscapes. Here we use population annealing to study a binary mixture of hard spheres. Population annealing is a parallel version of simulated annealing with an extra resampling step that ensures that a population of replicas of the system represents the equilibrium ensemble at every packing fraction in an annealing schedule. The algorithm and its equilibration properties are described, and results are presented for a glass-forming fluid composed of a 50/50 mixture of hard spheres with diameter ratio of 1.4:1. For this system, we obtain precise results for the equation of state in the glassy regime up to packing fractions φ ≈ 0.60 and study deviations from the Boublik-Mansoori-Carnahan-Starling-Leland equation of state. For higher packing fractions, the algorithm falls out of equilibrium and a free volume fit predicts jamming at packing fraction φ ≈ 0.667. We conclude that population annealing is an effective tool for studying equilibrium glassy fluids and the jamming transition.
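The defining resampling step of population annealing is easy to sketch in generic form: when the control parameter is stepped, each replica is copied a Poisson-distributed number of times proportional to its reweighting factor, keeping the population near a target size. The Boltzmann-style reweighting below is written for a temperature step; for hard spheres the weight instead reflects survival under compression, so treat this purely as an illustration.

    # Population annealing resampling for a temperature step beta -> beta_new.
    import numpy as np

    rng = np.random.default_rng(4)

    def resample(replicas, energies, beta, beta_new, target_size):
        # unnormalized reweighting factors exp(-(beta_new - beta) * E),
        # shifted by the minimum energy for numerical stability
        w = np.exp(-(beta_new - beta) * (energies - energies.min()))
        mean_copies = target_size * w / w.sum()
        counts = rng.poisson(mean_copies)     # number of copies per replica
        return [replicas[i] for i in range(len(replicas))
                for _ in range(counts[i])]

    replicas = list(range(1000))              # stand-in configurations
    energies = rng.normal(size=1000)          # illustrative energies
    new_pop = resample(replicas, energies, beta=1.0, beta_new=1.1,
                       target_size=1000)
    print(len(new_pop))                       # fluctuates around 1000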
Composition of precipitation in remote areas of the world
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galloway, J.N.; Likens, G.E.; Keene, W.C.
1982-10-20
The Global Precipitation Chemistry Project collects precipitation by event to determine composition and processes controlling it in five remote areas. Compositions (excluding sea-salt) at St. Georges, Bermuda, were primarily controlled by anthropogenic processes; composition and acidities at San Carlos, Venezuela, Katherine, Australia, Poker Flat, Alaska, and Amsterdam Island were controlled by unknown mixtures of natural or anthropogenic processes. Precipitation was acidic; average volume-weighted pH values were 4.8 for Bermuda; 5.0, Alaska; 4.9, Amsterdam Island; 4.8, Australia; 4.8, Venezuela. Acidities at Bermuda and Alaska were from long-range transport of sulfate aerosol; at Venezuela, Australia, and Amsterdam Island, from mixtures of weak organic and strong mineral acids, primarily H₂SO₄. Relative proportions of weak to strong acids were largest at Venezuela and lowest at Amsterdam Island. Weak and strong acids were from mixtures of natural and anthropogenic processes. Once contributions from human activities were removed, the lower limit of natural contributions was probably greater than or equal to pH 5.
NASA Astrophysics Data System (ADS)
Prabhu Verleker, Akshay; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M.
2015-03-01
The purpose of this study is to develop an alternative empirical approach to estimate near-infrared (NIR) photon propagation and quantify optically induced drug release in brain metastases, without relying on computationally expensive Monte Carlo techniques (the gold standard). Targeted drug delivery with optically induced drug release is a noninvasive means to treat cancers and metastases. This study is part of a larger project to treat brain metastases by delivering lapatinib-drug-nanocomplexes and activating NIR-induced drug release. The empirical model was developed using a weighted approach to estimate photon scattering in tissues and calibrated using a GPU-based 3D Monte Carlo. The empirical model was developed and tested against Monte Carlo in optical brain phantoms for pencil beams (width 1 mm) and broad beams (width 10 mm). The empirical algorithm was tested against the Monte Carlo for different albedos, along with the diffusion equation, in simulated brain phantoms resembling white matter (μs' = 8.25 mm⁻¹, μa = 0.005 mm⁻¹) and gray matter (μs' = 2.45 mm⁻¹, μa = 0.035 mm⁻¹) at a wavelength of 800 nm. The goodness of fit between the two models was determined using the coefficient of determination (R-squared analysis). Preliminary results show the empirical algorithm matches Monte Carlo simulated fluence over a wide range of albedo (0.7 to 0.99), while the diffusion equation fails for lower albedo. The photon fluence generated by the empirical code matched the Monte Carlo in homogeneous phantoms (R² = 0.99). While the GPU-based Monte Carlo achieved a 300X acceleration compared to earlier CPU-based models, the empirical code is 700X faster than the Monte Carlo for a typical super-Gaussian laser beam.
Thrane, Jan-Erik; Kyle, Marcia; Striebel, Maren; Haande, Sigrid; Grung, Merete; Rohrlack, Thomas; Andersen, Tom
2015-01-01
The Gauss-peak spectra (GPS) method represents individual pigment spectra as weighted sums of Gaussian functions, and uses these to model absorbance spectra of phytoplankton pigment mixtures. Here we present several improvements to this type of methodology, including adaptation to plate reader technology and efficient model fitting with open source software. We use a one-step modeling of both pigment absorption and background attenuation with non-negative least squares, following a one-time instrument-specific calibration. The fitted background is shown to be higher than a solvent blank, with features reflecting contributions from both scatter and non-pigment absorption. We assessed pigment aliasing due to absorption spectra similarity by Monte Carlo simulation, and used this information to select a robust set of identifiable pigments that are also expected to be common in natural samples. To test the method's performance, we analyzed absorbance spectra of pigment extracts from sediment cores, 75 natural lake samples, and four phytoplankton cultures, and compared the estimated pigment concentrations with concentrations obtained using high performance liquid chromatography (HPLC). The deviance between observed and fitted spectra was generally very low, indicating that measured spectra could successfully be reconstructed as weighted sums of pigment and background components. Concentrations of total chlorophylls and total carotenoids could be accurately estimated for both sediment and lake samples, but individual pigment concentrations (especially carotenoids) proved difficult to resolve due to similarity between their absorbance spectra. In general, our modified GPS method is a fast, inexpensive, and high-throughput alternative for screening pigment composition in samples of phytoplankton material. PMID:26359659
NASA Astrophysics Data System (ADS)
Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.
2018-05-01
Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed, and the uncertainties were quantified in the tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer (MBL) stations: Mace Head, Ireland, Trinidad Head, California and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. Although the observed seasonal cycles were found to fall within the confidence limits of the ACCMIP members, this was because the model seasonal cycles spanned extremely wide ranges and there was no single ACCMIP member that performed best for each station. Further work is required to examine the parameterisation of convective mixing in the models to see if this erodes the isolation of the marine boundary layer from the free troposphere and thus hides the models' real ability to reproduce ozone seasonal cycles over marine stations.
Wang, Wenjuan; Peng, Xuan; Cao, Dapeng
2011-06-01
Adsorption of H(2)S and SO(2) pure gases and their selective capture from the H(2)S-CH(4), H(2)S-CO(2), SO(2)-N(2), and SO(2)-CO(2) binary mixtures by single-walled carbon nanotubes (SWNTs) are investigated using the grand canonical Monte Carlo (GCMC) method. It is found that the (20,20) SWNT with larger diameter shows the larger capacity for the H(2)S and SO(2) pure gases at T = 303 K, with uptakes reaching 16.31 and 16.03 mmol/g, respectively. However, the (6,6) SWNT with small diameter exhibits the largest selectivity for binary mixtures containing trace sulfur gases at T = 303 K and P = 100 kPa. By investigating the effect of pore size on the separation of gas mixtures, we found that the optimal pore size is 0.81 nm for separation of the H(2)S-CH(4), H(2)S-CO(2), and SO(2)-N(2) binary mixtures, while it is 1.09 nm for the SO(2)-CO(2) mixture. The effects of concentration and temperature on the selectivity for the sulfide are also studied at the optimal pore size. It is found that the concentration (ppm) of the sulfur components has little effect on the selectivity of SWNTs for these binary mixtures. However, the selectivity decreases markedly with increasing temperature. To improve the adsorption capacities, we further modify the surface of the SWNTs with functional groups. The selectivities for the H(2)S-CO(2) and SO(2)-CO(2) mixtures are essentially uninfluenced by the site density, while increasing the site density can roughly double the selectivity for the H(2)S-CH(4) mixture. It is expected that this work could provide useful information for sulfur gas capture.
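The selectivity reported in such adsorption studies is conventionally defined from adsorbed-phase (x) and bulk-phase (y) mole fractions, S(1/2) = (x1/x2)/(y1/y2); a small helper with illustrative numbers:

    # Conventional adsorption selectivity from adsorbed- and bulk-phase
    # mole fractions; the numbers below are illustrative only.
    def selectivity(x1, x2, y1, y2):
        """S(1/2) = (x1/x2) / (y1/y2)."""
        return (x1 / x2) / (y1 / y2)

    # e.g. trace H2S in CH4: bulk 0.1% vs 99.9%, adsorbed 5% vs 95%
    print(selectivity(0.05, 0.95, 0.001, 0.999))  # ~52.6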
Martín-Calvo, Ana; García-Pérez, Elena; García-Sánchez, Almudena; Bueno-Pérez, Rocío; Hamad, Said; Calero, Sofia
2011-06-21
We have used interatomic potential-based simulations to study the removal of carbon tetrachloride from air at 298 K, using the Cu-BTC metal organic framework. We have developed new sets of Lennard-Jones parameters that accurately describe the vapour-liquid equilibrium curves of carbon tetrachloride and the main components of air (oxygen, nitrogen, and argon). Using these parameters we performed Monte Carlo simulations for the following systems: (a) single component adsorption of carbon tetrachloride, oxygen, nitrogen, and argon molecules, (b) binary Ar/CCl(4), O(2)/CCl(4), and N(2)/CCl(4) mixtures with bulk gas compositions 99 : 1 and 99.9 : 0.1, (c) ternary O(2)/N(2)/Ar mixtures with both equimolar and 21 : 78 : 1 bulk gas compositions, (d) a quaternary mixture formed by 0.1% of CCl(4) pollutant, 20.979% O(2), 77.922% N(2), and 0.999% Ar, and (e) five-component mixtures corresponding to 0.1% of CCl(4) pollutant in air with relative humidity ranging from 0 to 100%. The carbon tetrachloride adsorption selectivity and the self-diffusivity and preferential siting of the different molecules in the structure are studied for all the systems.
NASA Astrophysics Data System (ADS)
Bourasseau, Emeric; Dubois, Vincent; Desbiens, Nicolas; Maillet, Jean-Bernard
2007-08-01
In this work, we used simultaneously the reaction ensemble Monte Carlo (ReMC) method and the adaptive Erpenbeck equation of state (AE-EOS) method to directly calculate the thermodynamic and chemical equilibria of mixtures of detonation products on the Hugoniot curve. The ReMC method [W. R. Smith and B. Triska, J. Chem. Phys. 100, 3019 (1994)] allows us to reach the chemical equilibrium of a reacting mixture, and the AE-EOS method [J. J. Erpenbeck, Phys. Rev. A 46, 6406 (1992)] constrains the system to satisfy the Hugoniot relation. Once the Hugoniot curve of the detonation product mixture is established, the Chapman-Jouguet (CJ) state of the explosive can be determined. An NPT simulation at PCJ and TCJ is then performed in order to calculate direct thermodynamic properties and the following derivative properties of the system using a fluctuation method: calorific capacities, sound velocity, and Grüneisen coefficient. As the chemical composition fluctuates, and the number of particles is not necessarily constant in this ensemble, a fluctuation formula has been developed to take into account the fluctuations of mole number and composition. This type of calculation has been applied to several common energetic materials: nitromethane, tetranitromethane, hexanitroethane, PETN, and RDX.
Shear viscosity for a heated granular binary mixture at low density.
Montanero, José María; Garzó, Vicente
2003-02-01
The shear viscosity for a heated granular binary mixture of smooth hard spheres at low density is analyzed. The mixture is heated by the action of an external driving force (Gaussian thermostat) that exactly compensates for the cooling effects associated with the dissipation in collisions. The study is based on the Boltzmann kinetic theory, which is solved by using two complementary approaches. First, a normal solution of the Boltzmann equation is obtained via the Chapman-Enskog method up to first order in the spatial gradients. The mass, heat, and momentum fluxes are determined and the corresponding transport coefficients identified. As in the free cooling case [V. Garzó and J. W. Dufty, Phys. Fluids 14, 1476 (2002)], practical evaluation requires a Sonine polynomial approximation, which is illustrated here mainly for the shear viscosity. Second, to check the accuracy of the Chapman-Enskog results, the Boltzmann equation is numerically solved by means of the direct simulation Monte Carlo method. The simulation is performed for a system under uniform shear flow, using the Gaussian thermostat to control inelastic cooling. The comparison shows excellent agreement between theory and simulation over a wide range of values of the restitution coefficients and of the parameters of the mixture (masses, concentrations, and sizes).
Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo
NASA Astrophysics Data System (ADS)
Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik
2018-05-01
Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
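A compact, self-contained sketch of particle Metropolis-Hastings for a toy scalar linear-Gaussian state-space model is given below: a bootstrap particle filter supplies the (unbiased) likelihood estimate that drives a random-walk Metropolis chain over one parameter. All settings (particle count, proposal scale, flat prior) are illustrative and far simpler than what the tutorial's modeling language automates.

    # Particle Metropolis-Hastings for x_t = a*x_{t-1} + v_t, y_t = x_t + e_t
    # with standard-normal noises; the single unknown parameter is a.
    import numpy as np

    rng = np.random.default_rng(5)

    # simulate observations from a "true" model with a = 0.8
    T, a_true = 100, 0.8
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a_true * x[t-1] + rng.normal()
    y = x + rng.normal(size=T)

    def log_lik(a, n_part=200):
        """Bootstrap particle filter estimate of log p(y | a)."""
        parts = rng.normal(size=n_part)
        ll = 0.0
        for t in range(T):
            parts = a * parts + rng.normal(size=n_part)       # propagate
            logw = -0.5 * (y[t] - parts) ** 2                 # N(0,1) obs noise
            w = np.exp(logw - logw.max())
            ll += logw.max() + np.log(w.mean())               # log-sum-exp mean
            parts = parts[rng.choice(n_part, n_part, p=w / w.sum())]  # resample
        return ll

    a, ll = 0.5, log_lik(0.5)
    chain = []
    for it in range(500):                         # particle MH iterations
        a_prop = a + 0.05 * rng.normal()
        ll_prop = log_lik(a_prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # flat prior assumed
            a, ll = a_prop, ll_prop
        chain.append(a)
    print("posterior mean of a:", np.mean(chain[250:]))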
Monte Carlo Simulation of Microscopic Stock Market Models
NASA Astrophysics Data System (ADS)
Stauffer, Dietrich
Computer simulations with random numbers, that is, Monte Carlo methods, have been widely applied in recent years to model the fluctuations of stock market or currency exchange rates. Here we concentrate on the percolation model of Cont and Bouchaud, to simulate, not to predict, the market behavior.
Monte Carlo model of light transport in multi-layered tubular organs
NASA Astrophysics Data System (ADS)
Zhang, Yunyao; Zhu, Jingping; Zhang, Ning
2017-02-01
We present a Monte Carlo static light migration model (Endo-MCML) to simulate endoscopic optical spectroscopy for tubular organs such as the esophagus and colon. The model employs a multi-layered hollow cylinder that both emits and receives light at the inner boundary, matching the conditions of endoscopy. Inhomogeneous spheres can be added to the tissue layers to model cancer or other abnormal changes. The 3D light distribution and exit angle are recorded as results. The accuracy of the model has been verified against the multi-layered Monte Carlo (MCML) method and NIRFAST. This model can be used for the forward modeling of light transport during endoscopic diffuse optical spectroscopy, light scattering spectroscopy, reflectance spectroscopy and other static optical detection or imaging technologies.
Local order and crystallization of dense polydisperse hard spheres
NASA Astrophysics Data System (ADS)
Coslovich, Daniele; Ozawa, Misaki; Berthier, Ludovic
2018-04-01
Computer simulations give valuable insight into the microscopic behavior of supercooled liquids and glasses, but their typical time scales are orders of magnitude shorter than the experimentally relevant ones. We recently closed this gap for a class of models of size-polydisperse fluids, which we successfully equilibrate beyond laboratory time scales by means of the swap Monte Carlo algorithm. In this contribution, we study the interplay between compositional and geometric local order in a model of polydisperse hard spheres equilibrated with this algorithm. Local compositional order has a weak state dependence, while local geometric order associated with icosahedral arrangements grows more markedly, but only at very high density. We quantify the correlation lengths and the degree of sphericity associated with icosahedral structures and compare these results to those for the Wahnström Lennard-Jones mixture. Finally, we analyze the structure of very dense samples that partially crystallized following a pattern incompatible with conventional fractionation scenarios. The crystal structure has the symmetry of aluminum diboride and involves a subset of small and large particles with size ratio approximately equal to 0.5.
Poisson mixture model for measurements using counting.
Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz
2010-03-01
Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
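The gross-count alarm point idea can also be checked numerically without the analytical expression: simulate background-only gross counts as Poisson with a lognormally distributed mean and read off the empirical quantile for a desired false-positive rate. The lognormal parameters and the 5% rate below are illustrative.

    # Monte Carlo check of a gross-count alarm point for Poisson counts
    # whose mean is itself lognormally distributed (overdispersion).
    import numpy as np

    rng = np.random.default_rng(6)
    n = 200_000
    gm, gsd = 10.0, 1.8              # geometric mean / geometric sd of bkg mean
    bkg_mean = rng.lognormal(np.log(gm), np.log(gsd), size=n)
    gross = rng.poisson(bkg_mean)    # background-only gross counts

    alarm = int(np.quantile(gross, 0.95))   # ~5% false-positive decision level
    print("alarm point:", alarm,
          "achieved false-positive rate:", (gross > alarm).mean())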
CEC-normalized clay-water sorption isotherm
NASA Astrophysics Data System (ADS)
Woodruff, W. F.; Revil, A.
2011-11-01
A normalized clay-water isotherm model based on BET theory and describing the sorption and desorption of bound water in clays, sand-clay mixtures, and shales is presented. Clay-water sorption isotherms (sorption and desorption) of clayey materials are normalized by their cation exchange capacity (CEC), accounting for a correction factor that depends on the type of counterion sorbed on the mineral surface in the so-called Stern layer. With such normalizations, all the data collapse onto two master curves, one for sorption and one for desorption, independent of clay mineralogy, crystallographic considerations, and bound cation type, thereby neglecting the true heterogeneity of water sorption/desorption in smectite. The two master curves show the general hysteretic behavior of the capillary pressure curve at low relative humidity (below 70%). The model is validated against several data sets obtained from the literature comprising a broad range of clay types and clay mineralogies. The CEC values, derived by inverting the sorption/desorption curves using a Markov chain Monte Carlo approach, are consistent with the CEC associated with the clay mineralogy.
First scattered-light image of the debris disk around HD 131835 with the Gemini Planet Imager
Hung, Li-Wei; Duchêne, Gaspard; Arriaga, Pauline; ...
2015-12-09
Here, we present the first scattered-light image of the debris disk around HD 131835 in the H band using the Gemini Planet Imager. HD 131835 is a ~15 Myr old A2IV star at a distance of ~120 pc in the Sco-Cen OB association. We detect the disk only in polarized light and place an upper limit on the peak total intensity. No point sources resembling exoplanets were identified. Compared to its mid-infrared thermal emission, in scattered light the disk shows similar orientation but different morphology. The scattered-light disk extends from ~75 to ~210 AU in the disk plane with roughly flat surface density. Our Monte Carlo radiative transfer model can describe the observations with a model disk composed of a mixture of silicates and amorphous carbon. In addition to the obvious brightness asymmetry due to stronger forward scattering, we discover a weak brightness asymmetry along the major axis, with the northeast side being 1.3 times brighter than the southwest side at a 3σ level.
FIRST SCATTERED-LIGHT IMAGE OF THE DEBRIS DISK AROUND HD 131835 WITH THE GEMINI PLANET IMAGER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hung, Li-Wei; Arriaga, Pauline; Fitzgerald, Michael P.
2015-12-10
We present the first scattered-light image of the debris disk around HD 131835 in the H band using the Gemini Planet Imager. HD 131835 is a ∼15 Myr old A2IV star at a distance of ∼120 pc in the Sco-Cen OB association. We detect the disk only in polarized light and place an upper limit on the peak total intensity. No point sources resembling exoplanets were identified. Compared to its mid-infrared thermal emission, in scattered light the disk shows similar orientation but different morphology. The scattered-light disk extends from ∼75 to ∼210 AU in the disk plane with roughly flat surface density. Our Monte Carlo radiative transfer model can describe the observations with a model disk composed of a mixture of silicates and amorphous carbon. In addition to the obvious brightness asymmetry due to stronger forward scattering, we discover a weak brightness asymmetry along the major axis, with the northeast side being 1.3 times brighter than the southwest side at a 3σ level.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Degroh, Kim K.; Sechkar, Edward A.
1992-01-01
Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) will assist in understanding the mechanisms involved and will lead to improved reliability in predicting in-space durability of materials based on ground laboratory testing. A computational simulation of atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen and results of both ground laboratory and LDEF data, a predictive Monte Carlo model was developed which simulates the oxidation processes that occur on polymers with applied protective coatings that have defects. The use of high-fluence, directed-ram LDEF atomic oxygen results has enabled mechanistic implications to be made by adjusting Monte Carlo modeling assumptions to match observed results based on scanning electron microscopy. Modeling assumptions, implications, and predictions are presented, along with a comparison of observed ground laboratory and LDEF results.
Consistent post-reaction vibrational energy redistribution in DSMC simulations using TCE model
NASA Astrophysics Data System (ADS)
Borges Sebastião, Israel; Alexeenko, Alina
2016-10-01
The direct simulation Monte Carlo (DSMC) method has been widely applied to study shockwaves, hypersonic reentry flows, and other nonequilibrium flow phenomena. Although there is currently active research on high-fidelity models based on ab initio data, the total collision energy (TCE) and Larsen-Borgnakke (LB) models remain the most often used chemistry and relaxation models in DSMC simulations, respectively. The conventional implementation of the discrete LB model, however, may not satisfy detailed balance when recombination and exchange reactions play an important role in the flow energy balance. This issue can become even more critical in reacting mixtures involving polyatomic molecules, such as in combustion. In this work, this important shortcoming is addressed and an empirical approach to consistently specify the post-reaction vibrational states close to thermochemical equilibrium conditions is proposed within the TCE framework. Following Bird's quantum-kinetic (QK) methodology for populating post-reaction states, the new TCE-based approach involves two main steps. The state-specific TCE reaction probabilities for a forward reaction are first pre-computed from equilibrium 0-D simulations. These probabilities are then employed to populate the post-reaction vibrational states of the corresponding reverse reaction. The new approach is illustrated by application to exchange and recombination reactions relevant to H2-O2 combustion processes.
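A rough sketch of the two-step idea described above, with a toy probability table standing in for the pre-computed state-specific TCE reaction probabilities; the level structure, Boltzmann populations, and functional forms are illustrative assumptions only.

```python
# Illustrative two-step TCE-based post-reaction state sampling (toy model).
import numpy as np

rng = np.random.default_rng(0)
theta_v, T = 2256.0, 3000.0            # vibrational temperature [K], gas temp [K]

levels = np.arange(20)
boltzmann = np.exp(-levels * theta_v / T)
boltzmann /= boltzmann.sum()

# Step 1: "pre-computed" state-specific forward-reaction probabilities P(v),
# a stand-in for tallies from an equilibrium 0-D DSMC simulation.
p_forward = np.exp(-0.3 * levels) * (1 + 0.1 * levels)

# Step 2: populate post-reaction vibrational states of the reverse reaction
# in proportion to the equilibrium flux P(v) * n_eq(v), mimicking detailed
# balance at thermochemical equilibrium.
weights = p_forward * boltzmann
weights /= weights.sum()
post_v = rng.choice(levels, size=10_000, p=weights)
print("mean post-reaction vibrational level:", post_v.mean())
```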
Tracking the visual focus of attention for a varying number of wandering people.
Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel
2008-07-01
We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state-space we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.
Mirrored continuum and molecular scale simulations of the ignition of gamma phase RDX
NASA Astrophysics Data System (ADS)
Stewart, D. Scott; Chaudhuri, Santanu; Joshi, Kaushik; Lee, Kiabek
2015-06-01
We consider the ignition of a high-pressure gamma phase of an explosive crystal of RDX, which forms during overdriven shock initiation. Molecular dynamics (MD), with first-principles-based or reactive-force-field-based molecular potentials, provides a description of the chemistry as an extremely complex reaction network. The results of the molecular simulation are analyzed by sorting molecular product fragments into high and low molecular-weight groups, to represent identifiable components that can be interpreted by a continuum model. A continuum model based on a Gibbs formulation, which has a single temperature and stress state for the mixture, is used to represent the same RDX material and its chemistry. Each component in the continuum model has a corresponding Gibbs continuum potential, which is in turn inferred from MD-informed equation of state libraries such as CHEETAH, or directly simulated by Monte Carlo MD simulations. Information about transport, kinetic rates, and diffusion is derived from the MD simulation, and the growth of a reactive hot spot in the RDX is studied with both simulations, each mirroring the other's results to provide an essential continuum/atomistic link. Supported by N000014-12-1-0555, subaward-36561937 (ONR).
Modeling Optical Properties of Mineral Aerosol Particles by Using Nonsymmetric Hexahedra
NASA Technical Reports Server (NTRS)
Bi, Lei; Yang, Ping; Kattawar, George W.; Kahn, Ralph
2010-01-01
We explore the use of nonsymmetric geometries to simulate the single-scattering properties of airborne dust particles with complicated morphologies. Specifically, the shapes of irregular dust particles are assumed to be nonsymmetric hexahedra defined by using the Monte Carlo method. A combination of the discrete dipole approximation method and an improved geometric optics method is employed to compute the single-scattering properties of dust particles for size parameters ranging from 0.5 to 3000. The primary optical effect of eliminating the geometric symmetry of regular hexahedra is to smooth the scattering features in the phase function and to decrease the backscatter. The optical properties of the nonsymmetric hexahedra are used to mimic the laboratory measurements. It is demonstrated that a relatively close agreement can be achieved by using only one shape of nonsymmetric hexahedra. The agreement between the theoretical results and their measurement counterparts can be further improved by using a mixture of nonsymmetric hexahedra. It is also shown that the hexahedron model is much more appropriate than the "equivalent sphere" model for simulating the optical properties of dust particles, particularly in the case of the elements of the phase matrix that are associated with the polarization state of scattered light.
Duggan, Dennis M
2004-12-01
Improved cross-sections in a new version of the Monte Carlo N-Particle (MCNP) code may eliminate discrepancies between radial dose functions (as defined by American Association of Physicists in Medicine Task Group 43) derived from Monte Carlo simulations of low-energy photon-emitting brachytherapy sources and those from measurements on the same sources with thermoluminescent dosimeters. This is demonstrated for two 125I brachytherapy seed models, the Implant Sciences Model ISC3500 (I-Plant) and the Amersham Health Model 6711, by simulating their radial dose functions with two versions of MCNP, 4c2 and 5.
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body-geometry or surface-geometry models and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alcaraz, Olga; Trullàs, Joaquim, E-mail: quim.trullas@upc.edu; Tahara, Shuta
2016-09-07
The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å−1 related to intermediate-range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate-range copper-copper correlations caused by the polarized chlorides.
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search: The Monte Carlo sampling process in SMCMDP can take a long time to
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of numerical simulation of autofluorescence of the aorta by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.
Poisson Mixture Regression Models for Heart Disease Prediction.
Mufudza, Chipo; Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using Poisson mixture regression models.
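As a minimal illustration of the mixture idea (counts only, without the regression covariates or the concomitant-variable structure used in the paper), a two-component Poisson mixture can be fit by EM:

```python
# Minimal EM sketch for a two-component Poisson mixture on toy counts.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
y = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])  # toy counts

w, lam = np.array([0.5, 0.5]), np.array([1.0, 10.0])  # initial guesses
for _ in range(200):
    # E-step: posterior responsibility of each component for each count
    r = w * poisson.pmf(y[:, None], lam)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and component rates
    w = r.mean(axis=0)
    lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
print("weights:", w.round(3), "rates:", lam.round(3))
```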
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary generalized linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using Poisson mixture regression models. PMID:27999611
Synthetic NPA diagnostic for energetic particles in JET plasmas
NASA Astrophysics Data System (ADS)
Varje, J.; Sirén, P.; Weisen, H.; Kurki-Suonio, T.; Äkäslompolo, S.; contributors, JET
2017-11-01
Neutral particle analysis (NPA) is one of the few methods for diagnosing fast ions inside a plasma by measuring neutral atom fluxes emitted due to charge exchange reactions. The JET tokamak features an NPA diagnostic which measures neutral atom fluxes and energy spectra simultaneously for hydrogen, deuterium and tritium species. A synthetic NPA diagnostic has been developed and used to interpret these measurements to diagnose energetic particles in JET plasmas with neutral beam injection (NBI) heating. The synthetic NPA diagnostic performs a Monte Carlo calculation of the neutral atom fluxes in a realistic geometry. The 4D fast ion distributions, representing NBI ions, were simulated using the Monte Carlo orbit-following code ASCOT. Neutral atom density profiles were calculated using the FRANTIC neutral code in the JINTRAC modelling suite. Additionally, for rapid analysis, a scan of neutral profiles was precalculated with FRANTIC for a range of typical plasma parameters. These were taken from the JETPEAK database, which includes a comprehensive set of data from the flat-top phases of nearly all discharges in recent JET campaigns. The synthetic diagnostic was applied to various JET plasmas in the recent hydrogen campaign where different hydrogen/deuterium mixtures and NBI configurations were used. The simulated neutral fluxes from the fast ion distributions were found to agree with the measured fluxes, reproducing the slowing-down profiles for different beam isotopes and energies and quantitatively estimating the fraction of hydrogen and deuterium fast ions.
Exploring cluster Monte Carlo updates with Boltzmann machines
NASA Astrophysics Data System (ADS)
Wang, Lei
2017-11-01
Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of Boltzmann machines can even give rise to different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...
Monte Carlo simulation models of breeding-population advancement.
J.N. King; G.R. Johnson
1993-01-01
Five generations of population improvement were modeled using Monte Carlo simulations. The model was designed to address questions that are important to the development of an advanced generation breeding population. Specifically we addressed the effects on both gain and effective population size of different mating schemes when creating a recombinant population for...
Geant4 hadronic physics for space radiation environment.
Ivantchenko, Anton V; Ivanchenko, Vladimir N; Molina, Jose-Manuel Quesada; Incerti, Sebastien L
2012-01-01
The aim of this work was to test and develop Geant4 (Geometry And Tracking version 4) Monte Carlo hadronic models with a focus on applications in a space radiation environment. The Monte Carlo simulations have been performed using the Geant4 toolkit. The Binary cascade (BIC), its extension for incident light ions (BIC-ion), and the Bertini cascade (BERT) were used as the main Monte Carlo generators. For comparison purposes, some other models were tested too. The hadronic testing suite has been used as a primary tool for model development and validation against experimental data. The Geant4 pre-compound (PRECO) and de-excitation (DEE) models were revised and improved. Proton, neutron, pion, and ion nuclear interactions were simulated with the recent version Geant4 9.4 and were compared with experimental data from thin- and thick-target experiments. The Geant4 toolkit offers a large set of models allowing effective simulation of interactions of particles with matter. We have tested different Monte Carlo generators with our hadronic testing suite, and accordingly we can propose an optimal configuration of Geant4 models for the simulation of the space radiation environment.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
Nagai, Takashi; De Schamphelaere, Karel A C
2016-11-01
The authors investigated the effect of binary mixtures of zinc (Zn), copper (Cu), cadmium (Cd), and nickel (Ni) on the growth of a freshwater diatom, Navicula pelliculosa. A 7 × 7 full factorial experimental design (49 combinations in total) was used to test each binary metal mixture. A 3-day fluorescence microplate toxicity assay was used to test each combination. Mixture effects were predicted by concentration addition and independent action models based on a single-metal concentration-response relationship between the relative growth rate and the calculated free metal ion activity. Although the concentration addition model predicted the observed mixture toxicity significantly better than the independent action model for the Zn-Cu mixture, the independent action model predicted the observed mixture toxicity significantly better than the concentration addition model for the Cd-Zn, Cd-Ni, and Cd-Cu mixtures. For the Zn-Ni and Cu-Ni mixtures, it was unclear which of the 2 models was better. Statistical analysis concerning antagonistic/synergistic interactions showed that the concentration addition model is generally conservative (with the Zn-Ni mixture being the sole exception), indicating that the concentration addition model would be useful as a method for a conservative first-tier screening-level risk analysis of metal mixtures. Environ Toxicol Chem 2016;35:2765-2773. © 2016 SETAC.
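The CA and IA predictions themselves are straightforward to compute once single-metal concentration-response curves are in hand. A sketch, assuming log-logistic curves with invented parameters:

```python
# Concentration-addition (CA) vs independent-action (IA) predictions for a
# binary mixture; EC50s, slopes, and concentrations are illustrative only.
import numpy as np
from scipy.optimize import brentq

def effect(c, ec50, slope):
    """Fraction of growth-rate inhibition for a single metal."""
    return 1.0 / (1.0 + (ec50 / np.maximum(c, 1e-30)) ** slope)

def inverse(e, ec50, slope):
    """Concentration producing effect e (inverse of the curve above)."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / slope)

ec50 = {"Zn": 1.0, "Cu": 0.1}     # free-ion activities, arbitrary units
slope = {"Zn": 2.0, "Cu": 1.5}
c = {"Zn": 0.6, "Cu": 0.05}       # tested mixture

# IA: multiply the probabilities of "no effect" across metals
e_ia = 1.0 - np.prod([1.0 - effect(c[m], ec50[m], slope[m]) for m in c])

# CA: the effect level e at which the toxic units sum to one
tu = lambda e: sum(c[m] / inverse(e, ec50[m], slope[m]) for m in c) - 1.0
e_ca = brentq(tu, 1e-9, 1.0 - 1e-9)
print(f"IA predicted inhibition: {e_ia:.2f}, CA: {e_ca:.2f}")
```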
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
The GlueX central drift chamber: Design and performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Haarlem, Y; Barbosa, F; Dey, B
2010-10-01
Tests and studies concerning the design and performance of the GlueX Central Drift Chamber (CDC) are presented. A full-scale prototype was built to test and steer the mechanical and electronic design. Small scale prototypes were constructed to test for sagging and to do timing and resolution studies of the detector. These studies were used to choose the gas mixture and to program a Monte Carlo simulation that can predict the detector response in an external magnetic field. Particle identification and charge division possibilities were also investigated.
Mixture Rasch Models with Joint Maximum Likelihood Estimation
ERIC Educational Resources Information Center
Willse, John T.
2011-01-01
This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procassini, R.J.
1997-12-31
The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture - modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite highlighting potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach was so far not developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate improvements of peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
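A toy sketch of the partition-then-aggregate idea on a synthetic spectrum; the valley-based splitting rule and the per-fragment component count are illustrative stand-ins for the paper's algorithm:

```python
# Split a 1-D spectrum at low-intensity valleys, fit a small Gaussian
# mixture per fragment, then pool the component parameters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
mz = np.linspace(1000, 1100, 2000)
signal = (np.exp(-0.5*((mz-1020)/0.8)**2) + 0.6*np.exp(-0.5*((mz-1022)/0.9)**2)
          + 0.8*np.exp(-0.5*((mz-1070)/1.1)**2)) + rng.normal(0, 0.01, mz.size)

# Partition wherever intensity stays below a threshold (the "quiet" gaps)
quiet = signal < 0.05
edges = np.flatnonzero(np.diff(quiet.astype(int)))    # fragment boundaries

components = []
for lo, hi in zip(edges[::2], edges[1::2]):
    frag_mz, frag_i = mz[lo:hi], np.clip(signal[lo:hi], 0, None)
    # Treat intensities as sample weights by resampling m/z positions
    sample = rng.choice(frag_mz, size=4000, p=frag_i/frag_i.sum())
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sample[:, None])
    components += list(zip(gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel())))

print("pooled peaks (mean, sigma):",
      [(round(m, 2), round(s, 2)) for m, s in components])
```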
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique allows us to estimate the melting curve position and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10-4.
Pedersen, Jan Skov; Oliveira, Cristiano L.P.; Hübschmann, Henriette Baun; Arleth, Lise; Manniche, Søren; Kirkby, Nicolai; Nielsen, Hanne Mørck
2012-01-01
Immune stimulating complex (ISCOM) particles consisting of a mixture of Quil-A, cholesterol, and phospholipids were structurally characterized by small-angle x-ray scattering (SAXS). The ISCOM particles are perforated vesicles of very well-defined structures. We developed and implemented a novel (to our knowledge) modeling method based on Monte Carlo simulation integrations to describe the SAXS data. This approach is similar to the traditional modeling of SAXS data, in which a structure is assumed, the scattering intensity is calculated, and structural parameters are optimized by weighted least-squares methods when the model scattering intensity is fitted to the experimental data. SAXS data from plain ISCOM matrix particles in aqueous suspension, as well as those from complete ISCOMs (i.e., with an antigen (tetanus toxoid) incorporated) can be modeled as a polydisperse distribution of perforated bilayer vesicles with icosahedral, football, or tennis ball structures. The dominating structure is the tennis ball structure, with an outer diameter of 40 nm and with 20 holes 5–6 nm in diameter. The lipid bilayer membrane is 4.6 nm thick, with a low-electron-density, 2.0-nm-thick hydrocarbon core. Surprisingly, in the ISCOMs, the tetanus toxoid is located just below the membrane inside the particles. PMID:22677391
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Chao-Jun; Li, Xin-Zhou, E-mail: fengcj@shnu.edu.cn, E-mail: kychz@shnu.edu.cn
To probe the late evolution history of the universe, we adopt two kinds of optimal basis systems. One of them is constructed by performing principal component analysis, and the other is built by taking the multidimensional scaling approach. Cosmological observables such as the luminosity distance can be decomposed into these basis systems. These basis systems are optimized for different kinds of cosmological models that are based on different physical assumptions, even for a mixture model of them. Therefore, the so-called feature space that is projected from the basis systems is cosmological model independent, and it provides a parameterization for studying and reconstructing the Hubble expansion rate from the supernova luminosity distance and even gamma-ray burst (GRB) data with self-calibration. The circular problem when using GRBs as cosmological candles is naturally eliminated in this procedure. By using the Levenberg–Marquardt technique and the Markov Chain Monte Carlo method, we perform an observational constraint on this kind of parameterization. The data we used include the “joint light-curve analysis” data set that consists of 740 Type Ia supernovae and 109 long GRBs with the well-known Amati relation.
Argonne Bubble Experiment Thermal Model Development II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buechler, Cynthia Eileen
2016-07-01
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels: 6, 12, and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.
Modeling of synchrotron-based laboratory simulations of Titan's ionospheric photochemistry
NASA Astrophysics Data System (ADS)
Carrasco, Nathalie; Peng, Zhe; Pernot, Pascal
2014-11-01
The APSIS reactor has been designed to simulate in the laboratory, with VUV synchrotron irradiation, the photochemistry occurring in planetary upper atmospheres. An N2-CH4 Titan-like gas mixture has been studied, whose photochemistry in Titan's ionospheric irradiation conditions leads to a coupled chemical network involving both radicals and ions. In the present work, an ion-neutral coupled model is developed to interpret the experimental data, taking into account the uncertainties on the kinetic parameters by Monte Carlo sampling. The model predicts species concentrations in agreement with mass spectrometry measurements of the methane consumption and product block intensities. Ion chemistry, and in particular dissociative recombination, is found to be very important through sensitivity analysis. The model is also applied to complementary environmental conditions, corresponding to Titan's ionospheric average conditions and to another existing synchrotron setup. An innovative study of the correlations between species concentrations identifies two main competitive families, leading respectively to saturated and unsaturated species. We find that the unsaturated growth family, driven by C2H2, is dominant in Titan's upper atmosphere, as observed by the Cassini INMS. But the saturated species are substantially more intense in the measurements of the two synchrotron experimental setups, and likely originate from catalysis by the metallic walls of the reactors.
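The Monte Carlo treatment of kinetic-parameter uncertainty can be illustrated with a toy two-reaction stand-in for the full radical-ion network (the rates and the factor-2 lognormal spread are invented):

```python
# Monte Carlo propagation of rate-constant uncertainty through a toy
# CH4-consumption scheme: CH4 -> intermediate -> loss.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(5)

def rhs(t, y, k1, k2):
    ch4, prod = y
    return [-k1 * ch4, k1 * ch4 - k2 * prod]

samples = []
for _ in range(500):
    # Lognormal sampling around nominal rates with a factor-2 uncertainty
    k1 = 1e-3 * np.exp(rng.normal(0, np.log(2)))
    k2 = 5e-4 * np.exp(rng.normal(0, np.log(2)))
    sol = solve_ivp(rhs, (0, 3600), [1.0, 0.0], args=(k1, k2), rtol=1e-8)
    samples.append(sol.y[0, -1])                # CH4 left after one hour

print("CH4 consumption: %.1f%% +/- %.1f%%" %
      (100 * (1 - np.mean(samples)), 100 * np.std(samples)))
```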
Cuetos, Alejandro; Patti, Alessandro
2015-08-01
We propose a simple but powerful theoretical framework to quantitatively compare Brownian dynamics (BD) and dynamic Monte Carlo (DMC) simulations of multicomponent colloidal suspensions. By extending our previous study focusing on monodisperse systems of rodlike colloids, here we generalize the formalism described there to multicomponent colloidal mixtures and validate it by investigating the dynamics in isotropic and liquid crystalline phases containing spherical and rodlike particles. In order to investigate the dynamics of multicomponent colloidal systems by DMC simulations, it is key to determine the elementary time step of each species and establish a unique timescale. This is crucial to consistently study the dynamics of colloidal particles with different geometry. By analyzing the mean-square displacement, the orientation autocorrelation functions, and the self part of the van Hove correlation functions, we show that DMC simulation is a very convenient and reliable technique to describe the stochastic dynamics of any multicomponent colloidal system. Our theoretical formalism can be easily extended to any colloidal system containing size and/or shape polydisperse particles.
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...
2017-06-01
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of the percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. The result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
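For comparison, the brute-force alternative that such analytical expressions aim to replace looks roughly like this spanning-cluster Monte Carlo check for a 2D binary disk mixture (box size, radii, and counts are illustrative):

```python
# Monte Carlo spanning check for a 2-D binary-disk system via union-find.
import numpy as np

def percolates(xy, r, L):
    """True if the union of disks connects the left and right box edges."""
    n = len(xy)
    parent = list(range(n + 2))               # two extra nodes: left, right walls
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)
    for i in range(n):
        if xy[i, 0] - r[i] < 0: union(i, n)        # touches left wall
        if xy[i, 0] + r[i] > L: union(i, n + 1)    # touches right wall
        for j in range(i + 1, n):
            if np.hypot(*(xy[i] - xy[j])) < r[i] + r[j]:
                union(i, j)                        # overlapping disks connect
    return find(n) == find(n + 1)

rng = np.random.default_rng(2)
L, r_small, r_big, trials = 20.0, 0.5, 1.0, 20
for n_disks in (100, 200, 300, 400):
    hits = 0
    for _ in range(trials):
        xy = rng.uniform(0, L, size=(n_disks, 2))
        r = rng.choice([r_small, r_big], size=n_disks)  # 50:50 binary mixture
        hits += percolates(xy, r, L)
    print(n_disks, "disks -> spanning probability", hits / trials)
```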
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molpeceres, Germán; Ortigoso, Juan; Escribano, Rafael
2016-07-10
We present a spectroscopic study of methane–ethane ice mixtures. We have grown CH4:C2H6 mixtures with ratios 3:1, 1:1, and 1:3 at 18 and 30 K, plus pure methane and ethane ices, and have studied them in the near-infrared (NIR) and mid-infrared (MIR) ranges. We have determined densities of all species mentioned above. For amorphous ethane grown at 18 and 30 K we have obtained densities of 0.41 and 0.54 g cm−3, respectively, lower than a previous measurement of the density of the crystalline species, 0.719 g cm−3. As far as we know, this is the first determination of the density of amorphous ethane ice. We have measured band shifts of the main NIR methane and ethane features in the mixtures with respect to the corresponding values in the pure ices. We have estimated band strengths of these bands in the NIR and MIR ranges. In general, intensity decay in methane modes was detected in the mixtures, whereas for ethane no clear tendency was observed. Optical constants of the mixtures at 30 and 18 K have also been evaluated. These values can be used to trace the presence of these species on the surface of trans-Neptunian objects. Furthermore, we have carried out a theoretical calculation of these ice mixtures. Simulation cells for the amorphous solids have been constructed using a Metropolis Monte Carlo procedure. Relaxation of the cells and prediction of infrared spectra have been carried out at the density functional theory level.
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Rodrigues, Josemar; Cancho, Vicente G; de Castro, Mário; Balakrishnan, N
2012-12-01
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis--latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.
Chonggang Xu; Hong S. He; Yuanman Hu; Yu Chang; Xiuzhen Li; Rencang Bu
2005-01-01
Geostatistical stochastic simulation is commonly combined with the Monte Carlo method to quantify the uncertainty in spatial model simulations. However, due to the relatively long running time of spatially explicit forest models as a result of their complexity, it is often infeasible to generate hundreds or thousands of Monte Carlo simulations. Thus, it is of great...
Groundwars Version 5.0. User’s Guide
1992-08-01
model, Monte Carlo, land duel, heterogeneous forces, TANKWARS, target acquisition, combat survivability... land duel between two heterogeneous forces. The model simulates individual weapon systems and employs Monte Carlo probability theory as its primary... is a weapon systems effectiveness model which provides the results of a land duel between two forces. The model simulates individual weapon systems
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models, or the use of external information via informative priors or penalized likelihoods may help. © 2017 by the Ecological Society of America.
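A minimal sketch of fitting the simplest such model, a Poisson binomial N-mixture without covariates, by maximum likelihood on simulated counts:

```python
# Maximum-likelihood fit of a Poisson binomial N-mixture model on
# simulated repeated counts (no covariates; parameters are illustrative).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(11)
lam_true, p_true, n_sites, n_visits = 4.0, 0.4, 200, 3
N = rng.poisson(lam_true, n_sites)                          # latent abundances
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))   # repeated counts

def nll(theta, n_max=60):
    lam, p = np.exp(theta[0]), 1/(1 + np.exp(-theta[1]))    # unconstrained params
    Ns = np.arange(n_max + 1)
    prior = poisson.pmf(Ns, lam)                            # P(N)
    # P(y_i | N) per site: product of binomials over repeat visits
    like = np.prod(binom.pmf(y[:, None, :], Ns[None, :, None], p), axis=2)
    return -np.sum(np.log(like @ prior + 1e-300))

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1/(1 + np.exp(-fit.x[1]))
print(f"lambda_hat = {lam_hat:.2f}, p_hat = {p_hat:.2f}")
```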
A white-box model of S-shaped and double S-shaped single-species population growth
Kalmykov, Lev V.
2015-01-01
Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst, and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata, we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positionings of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals under two types of boundary conditions and two types of fecundity. Besides that, we have compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insights into the nature of any complex system. PMID:26038717
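A toy version of such a logical, individual-based cellular automaton (lattice size, neighbourhood, and regeneration rule are illustrative choices, not the authors' exact model) reproduces the S-shaped growth:

```python
# Individual-based cellular automaton for vegetative propagation on a
# 2-D lattice; growth saturates in an S-shaped curve at habitat capacity.
import numpy as np

rng = np.random.default_rng(4)
L, steps, regen = 50, 200, 3               # lattice size, time steps, regeneration time
age = -np.ones((L, L), dtype=int)          # -1 = empty cell, >=0 = time since settling
age[L // 2, L // 2] = regen                # one founding individual in the centre

history = []
for t in range(steps):
    occupied = np.argwhere(age >= regen)   # only "recovered" individuals propagate
    for i, j in occupied:
        di, dj = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
        ni, nj = (i + di) % L, (j + dj) % L    # periodic boundary conditions
        if age[ni, nj] < 0:
            age[ni, nj] = 0                # vegetative offspring starts regenerating
    age[age >= 0] += 1
    history.append((age >= 0).sum())

# `history` traces the S-shaped saturation toward the habitat capacity L*L
print("population at t=0,50,100,199:", [history[t] for t in (0, 50, 100, 199)])
```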
Modulated phases in a three-dimensional Maier-Saupe model with competing interactions
NASA Astrophysics Data System (ADS)
Bienzobaz, P. F.; Xu, Na; Sandvik, Anders W.
2017-07-01
This work is dedicated to the study of the discrete version of the Maier-Saupe model in the presence of competing interactions. The competition between interactions favoring different orientational ordering produces a rich phase diagram including modulated phases. Using a mean-field approach and Monte Carlo simulations, we show that the proposed model exhibits isotropic and nematic phases and also a series of modulated phases that meet at a multicritical point, a Lifshitz point. Though the Monte Carlo and mean-field phase diagrams show some quantitative disagreements, the Monte Carlo simulations corroborate the general behavior found within the mean-field approximation.
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the gravitational wave source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
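A toy Monte Carlo galactic realization in this spirit, sampling binary parameters from invented distributions and using the standard circular-binary strain formula (not the authors' actual population model):

```python
# Rapid Monte Carlo realization of a toy double-white-dwarf population.
import numpy as np

G, c, pc = 6.674e-11, 2.998e8, 3.086e16   # SI units; pc in metres
rng = np.random.default_rng(8)
n = 100_000

m1 = rng.uniform(0.3, 1.0, n) * 1.989e30           # component masses [kg]
m2 = rng.uniform(0.3, 1.0, n) * 1.989e30
f_gw = 2.0 / 10 ** rng.uniform(2.5, 5.5, n)        # GW frequency = 2/P_orb [Hz]
d = rng.exponential(3_000, n) * pc + 100 * pc      # crude Galactic distances [m]

# Chirp mass and circular-binary strain amplitude h = 4 (G Mc)^{5/3} (pi f)^{2/3} / (c^4 d)
m_chirp = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
h = 4 * (G * m_chirp) ** (5/3) * (np.pi * f_gw) ** (2/3) / (c**4 * d)

print("loudest simulated binary: h = %.2e at f = %.2e Hz" %
      (h.max(), f_gw[np.argmax(h)]))
```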
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
Depletion forces drive polymer-like self-assembly in vibrofluidized granular materials†
Nossal, Ralph
2011-01-01
Ranging from nano- to granular-scales, control of particle assembly can be achieved by limiting the available free space, for example by increasing the concentration of particles (“crowding”) or through their restriction to 2D environments. It is unclear, however, if self-assembly principles governing thermally-equilibrated molecules can also apply to mechanically-excited macroscopic particles in non-equilibrium steady-state. Here we show that low densities of vibrofluidized steel rods, when crowded by high densities of spheres and confined to quasi-2D planes, can self-assemble into linear polymer-like structures. Our 2D Monte Carlo simulations show similar finite sized aggregates in thermally equilibrated binary mixtures. Using theory and simulations, we demonstrate how depletion interactions create oriented “binding” forces between rigid rods to form these “living polymers.” Unlike rod-sphere mixtures in 3D that can demonstrate well-defined equilibrium phases, our mixtures confined to 2D lack these transitions because lower dimensionality favors the formation of linear aggregates, thus suppressing a true phase transition. The qualitative and quantitative agreement between equilibrium and granular patterning for these mixtures suggests that entropy maximization is the determining driving force for bundling. Furthermore, this study uncovers a previously unknown patterning behavior at both the granular and nanoscales, and may provide insights into the role of crowding at interfaces in molecular assembly. PMID:22039392
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a computationally complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
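A schematic of the surrogate idea, with a cheap analytic function standing in for the expensive forward solver and the surrogate's residual spread folded into the likelihood (all functions and parameters here are invented for illustration):

```python
# Neural-network surrogate for an "expensive" forward model, with the
# surrogate's modeling error added to the data variance in Metropolis sampling.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
expensive_forward = lambda m: np.sin(3 * m) + 0.5 * m**2   # stand-in physics

# Train the surrogate on prior samples of the model parameter
m_train = rng.uniform(-2, 2, (2000, 1))
net = MLPRegressor((64, 64), max_iter=3000, random_state=0)
net.fit(m_train, expensive_forward(m_train).ravel())

# Quantify the modeling error on held-out samples (assumed Gaussian)
m_test = rng.uniform(-2, 2, (500, 1))
sigma_net = np.std(net.predict(m_test) - expensive_forward(m_test).ravel())

# Metropolis sampling with total variance = data noise + surrogate error
d_obs, sigma_obs = expensive_forward(np.array([[0.7]]))[0, 0], 0.05
var_tot = sigma_obs**2 + sigma_net**2
log_post = lambda m: -0.5 * (net.predict([[m]])[0] - d_obs) ** 2 / var_tot
m_cur, chain = 0.0, []
for _ in range(5000):
    m_prop = m_cur + 0.2 * rng.normal()
    if np.log(rng.uniform()) < log_post(m_prop) - log_post(m_cur):
        m_cur = m_prop
    chain.append(m_cur)
# Note: the posterior may be multimodal if several m values fit d_obs
print("posterior mean %.3f +/- %.3f" % (np.mean(chain[1000:]), np.std(chain[1000:])))
```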
Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas
2012-04-01
Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
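The stored-path-length trick can be sketched as follows: the geometry/scattering part of the Monte Carlo is represented here by invented per-layer path-length samples, after which any chromophore composition is applied by Beer-Lambert reweighting in post-processing:

```python
# "White Monte Carlo" style post-processing: simulate path lengths once,
# then reweight detected photons for any absorption coefficients.
import numpy as np

rng = np.random.default_rng(6)
n_photons, n_layers = 50_000, 3

# Stand-in for presimulated data: detected-photon path lengths [mm] per layer
path = rng.gamma(shape=2.0, scale=[0.1, 0.5, 1.5], size=(n_photons, n_layers))

def reflectance(mu_a):
    """Detected intensity for layer absorption coefficients mu_a [1/mm]."""
    return np.mean(np.exp(-path @ np.asarray(mu_a)))

# The same stored paths give the spectrum for any chromophore composition:
print(reflectance([0.01, 0.10, 0.02]))   # e.g. low epidermal absorption
print(reflectance([0.30, 0.10, 0.02]))   # e.g. more melanin in layer 1
```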
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
Bayesian peak picking for NMR spectra.
Cheng, Yichen; Gao, Xin; Liang, Faming
2014-02-01
Protein structure determination is a very important topic in structural genomics, which helps people to understand a variety of biological functions such as protein-protein interactions, protein-DNA interactions, and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method. Copyright © 2013. Production and hosting by Elsevier Ltd.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which the spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed, using mathematics borrowed from genetic algorithm work as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from the analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
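A sketch of the score-recovery pipeline with synthetic loadings; the filter construction below is a crude stand-in for the genetic-algorithm optimization described in the paper:

```python
# Compressive detection with filters built from Walsh functions
# (rows of a Hadamard matrix); loadings and scores are synthetic.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(12)
n_chan = 64
x = np.linspace(0, 1, n_chan)
loadings = np.stack([np.exp(-0.5 * ((x - c) / 0.12) ** 2)
                     for c in (0.35, 0.45, 0.55, 0.65)])   # overlapped analytes

H = hadamard(n_chan)                       # Walsh-Hadamard basis, entries +/-1
# Filter per analyte: fourfold combination of the Walsh functions that
# correlate best with its loading (a crude stand-in for the GA optimisation)
filters = np.zeros((4, n_chan))
for k, L in enumerate(loadings):
    best = np.argsort(-np.abs(H @ L))[:4]
    filters[k] = sum(np.sign(H[b] @ L) * H[b] for b in best)

true_scores = np.array([1.0, 0.5, 2.0, 0.8])
spectrum = true_scores @ loadings + rng.normal(0, 0.01, n_chan)

# Compressive detection: only 4 filtered readings are ever recorded,
# then scores are recovered through the known filter-loading matrix
readings = filters @ spectrum
scores = np.linalg.solve(filters @ loadings.T, readings)
print("recovered scores:", scores.round(2))
```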
NASA Astrophysics Data System (ADS)
Lee, G. K. H.; Wood, K.; Dobbs-Dixon, I.; Rice, A.; Helling, Ch.
2017-05-01
Context. As the 3D spatial properties of exoplanet atmospheres are observed in increasing detail by current and new generations of telescopes, modelling the 3D scattering effects of cloud-forming atmospheres with inhomogeneous opacity structures becomes increasingly important for interpreting observational data. Aims: We model the scattering and emission properties of a simulated cloud-forming, inhomogeneous-opacity, hot Jupiter atmosphere of HD 189733b. We compare our results to available Hubble Space Telescope (HST) and Spitzer data and quantify the effects of 3D multiple scattering on observable properties of the atmosphere. We discuss potential observational properties of HD 189733b for the upcoming Transiting Exoplanet Survey Satellite (TESS) and CHaracterising ExOPlanet Satellite (CHEOPS) missions. Methods: We developed a Monte Carlo radiative transfer code and applied it to post-process output of our 3D radiative-hydrodynamic, cloud formation simulation of HD 189733b. We employed three variance reduction techniques, i.e., next event estimation, survival biasing, and composite emission biasing, to improve the signal to noise of the output. For cloud particle scattering events, we constructed a log-normal area distribution from the 3D cloud formation radiative-hydrodynamic results, which is stochastically sampled in order to model the Rayleigh and Mie scattering behaviour of a mixture of grain sizes. Results: Stellar photon packets incident on the eastern dayside hemisphere show predominantly Rayleigh, single-scattering behaviour, while multiple scattering occurs on the western hemisphere. Combined scattered and thermal emitted light predictions are consistent with published HST and Spitzer secondary transit observations. Our model predictions are also consistent with geometric albedo constraints from optical wavelength ground-based polarimetry and HST B band measurements. We predict an apparent geometric albedo for HD 189733b of 0.205 and 0.229 in the TESS and CHEOPS photometric bands, respectively. Conclusions: Modelling the 3D geometric scattering effects of clouds on observables of exoplanet atmospheres provides an important contribution to the attempt to determine the cloud properties of these objects. Comparisons between TESS and CHEOPS photometry may provide qualitative information on the cloud properties of nearby hot Jupiter exoplanets.
Validation of the Monte Carlo simulator GATE for indium-111 imaging.
Assié, K; Gardin, I; Véra, P; Buvat, I
2005-07-07
Monte Carlo simulations are useful for optimizing and assessing single photon emission computed tomography (SPECT) protocols, especially when aiming at measuring quantitative parameters from SPECT images. Before Monte Carlo simulated data can be trusted, the simulation model must be validated. The purpose of this work was to validate the use of GATE, a new Monte Carlo simulation platform based on GEANT4, for modelling indium-111 SPECT data, the quantification of which is of foremost importance for dosimetric studies. To that end, acquisitions of (111)In line sources in air and in water and of a cylindrical phantom were performed, together with the corresponding simulations. The simulation model included Monte Carlo modelling of the camera collimator and of a back-compartment accounting for photomultiplier tubes and associated electronics. Energy spectra, spatial resolution, sensitivity values, images and count profiles obtained for experimental and simulated data were compared. An excellent agreement was found between experimental and simulated energy spectra. For source-to-collimator distances varying from 0 to 20 cm, simulated and experimental spatial resolution differed by less than 2% in air, while the simulated sensitivity values were within 4% of the experimental values. The simulation of the cylindrical phantom closely reproduced the experimental data. These results suggest that GATE enables accurate simulation of (111)In SPECT acquisitions.
Gao, Yongfei; Feng, Jianfeng; Kang, Lili; Xu, Xin; Zhu, Lin
2018-01-01
The joint toxicity of chemical mixtures has emerged as a popular topic, particularly the additive and potentially synergistic actions of environmental mixtures. We investigated the 24-h toxicity of Cu-Zn, Cu-Cd, and Cu-Pb and the 96-h toxicity of Cd-Pb binary mixtures on the survival of zebrafish larvae. Joint toxicity was predicted and compared using the concentration addition (CA) and independent action (IA) models, which make different assumptions about the mode of toxic action in toxicodynamic processes, through single and binary metal mixture tests. Results showed that the CA and IA models presented varying predictive abilities for different metal combinations. For the Cu-Cd and Cd-Pb mixtures, the CA model simulated the observed survival rates better than the IA model. By contrast, the IA model simulated the observed survival rates better than the CA model for the Cu-Zn and Cu-Pb mixtures. These findings revealed that the toxic action mode may depend on the combinations and concentrations of tested metal mixtures. Statistical analysis of the antagonistic or synergistic interactions indicated that synergistic interactions were observed for the Cu-Cd and Cu-Pb mixtures, non-interactions were observed for the Cd-Pb mixtures, and slight antagonistic interactions were observed for the Cu-Zn mixtures. These results illustrated that the CA and IA models are consistent in specifying the interaction patterns of binary metal mixtures. Copyright © 2017 Elsevier B.V. All rights reserved.
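For concreteness, a minimal sketch of the two predictions under assumed Hill concentration-response curves; the EC50s, slopes, and mixture concentrations below are invented, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical Hill concentration-response curves for two metals
# (fraction of larvae affected as a function of concentration).
def hill(c, ec50, slope):
    return c**slope / (c**slope + ec50**slope)

params = {"Cu": (50.0, 2.0), "Cd": (200.0, 1.5)}   # EC50 (ug/L), Hill slope

def effect_IA(conc):
    # Independent action: survival probabilities combine multiplicatively.
    surv = 1.0
    for m, c in conc.items():
        surv *= 1.0 - hill(c, *params[m])
    return 1.0 - surv

def effect_CA(conc):
    # Concentration addition: the mixture acts like a dilution of a single
    # chemical; solve sum_i c_i / EC_x,i = 1 for the effect level x.
    def toxic_units(x):
        tu = 0.0
        for m, c in conc.items():
            ec50, slope = params[m]
            ecx = ec50 * (x / (1.0 - x)) ** (1.0 / slope)   # inverse Hill
            tu += c / ecx
        return tu - 1.0
    return brentq(toxic_units, 1e-9, 1 - 1e-9)

mix = {"Cu": 30.0, "Cd": 120.0}
print(f"CA-predicted effect: {effect_CA(mix):.2f}, IA-predicted: {effect_IA(mix):.2f}")
```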
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors
2015-03-26
methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... Contents include: Past Methods of Experimental Evaluation; Modeling Efforts; Other Considerations; Monte Carlo Methods.
A Workstation Farm Optimized for Monte Carlo Shell Model Calculations : Alphleet
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Shimizu, N.; Haruyama, S.; Honma, M.; Mizusaki, T.; Taketani, A.; Utsuno, Y.; Otsuka, T.
We have built a workstation farm named "Alphleet", consisting of 140 COMPAQ Alpha 21264 CPUs, for Monte Carlo Shell Model (MCSM) calculations. It has achieved more than 90% scaling efficiency on 140 CPUs when running the MCSM calculation with PVM, and 61.2 Gflops on the LINPACK benchmark.
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be accounted for by single chemicals. PMID:23990906
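A sketch of the GCA prediction in its common unit-Hill-slope form (following Howard and Webster's formulation), which stays defined for partial agonists that never reach the effect level being predicted; all parameter values are invented.

```python
import numpy as np

# Generalized concentration addition (GCA) for agonists with unit Hill slopes:
# alpha_i is the maximal effect of chemical i and K_i its EC50. Unlike CA,
# the expression stays defined for partial agonists at any effect level.
def gca_effect(conc, alpha, K):
    conc, alpha, K = map(np.asarray, (conc, alpha, K))
    num = np.sum(alpha * conc / K)
    den = 1.0 + np.sum(conc / K)
    return num / den

# Hypothetical five-chemical mixture acting on testosterone synthesis:
alpha = [1.0, 0.6, 0.4, 0.8, 0.3]        # maximal fold-effects (illustrative)
K = [2.0, 5.0, 1.0, 10.0, 0.5]           # EC50s in uM (illustrative)
conc = [1.0, 2.5, 0.5, 5.0, 0.25]        # mixture concentrations in uM

print(f"GCA-predicted mixture effect: {gca_effect(conc, alpha, K):.2f}")
```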
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
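A minimal example of the kind of simulation reviewed here: a blocked-site random walk on a periodic square lattice with quenched point obstacles, tracking the mean-squared displacement (MSD); the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

L, obstacle_frac, n_walkers, n_steps = 256, 0.3, 500, 2000
obstacles = rng.random((L, L)) < obstacle_frac
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

# Start all walkers on free sites; keep unwrapped coordinates for the MSD.
free = np.argwhere(~obstacles)
pos = free[rng.choice(len(free), n_walkers)].astype(np.int64)
start = pos.copy()

msd = np.empty(n_steps)
for t in range(n_steps):
    trial = pos + moves[rng.integers(0, 4, n_walkers)]
    lattice_idx = trial % L                       # periodic boundaries
    blocked = obstacles[lattice_idx[:, 0], lattice_idx[:, 1]]
    pos[~blocked] = trial[~blocked]               # moves into obstacles are rejected
    msd[t] = np.mean(np.sum((pos - start) ** 2, axis=1))

# On an obstacle-free lattice MSD grows linearly in t; obstruction slows it.
print("MSD / t at the end of the run:", msd[-1] / n_steps)
```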
MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Meyer, Arnd
2010-02-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Arnd
2010-02-10
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
Monte Carlo modeling of spatial coherence: free-space diffraction
Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.
2008-01-01
We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions. PMID:18830335
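A sketch of the synthesis step in its simplest fully Gaussian form (the Gaussian copula of the paper generalizes this to arbitrary marginals): complex field realizations drawn through a Cholesky factor reproduce a prescribed Gaussian Schell-model coherence matrix. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 128
x = np.linspace(-1.0, 1.0, n)
sigma_I, sigma_mu = 0.5, 0.1        # intensity width and coherence width

# Target mutual coherence matrix J(x1, x2) of a Gaussian Schell-model source.
A = np.exp(-x**2 / (4.0 * sigma_I**2))
mu = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * sigma_mu**2))
J = np.outer(A, A) * mu

# Draw complex field realizations whose ensemble average reproduces J.
Lc = np.linalg.cholesky(J + 1e-10 * np.eye(n))
def field():
    z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    return Lc @ z

ens = np.array([field() for _ in range(2000)])
J_est = ens.T @ ens.conj() / len(ens)     # estimate of <E(x1) E*(x2)>
print("max |J_est - J|:", np.abs(J_est - J).max())
```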
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g., those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
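A toy sketch of the decorator-based design with hypothetical class and parameter names: the wrapped geometry supplies both the smooth mass and the clump centres, and random positions are drawn componentwise rather than through a generic black-box sampler.

```python
import numpy as np

rng = np.random.default_rng(4)

class Geometry:
    """Base interface: a (non-normalized) density and a random-position sampler."""
    def density(self, xyz):
        raise NotImplementedError
    def random_position(self):
        raise NotImplementedError

class GaussianSphere(Geometry):
    """Analytic toy model with direct sampling."""
    def __init__(self, sigma):
        self.sigma = sigma
    def density(self, xyz):
        return np.exp(-np.sum(xyz**2, axis=-1) / (2.0 * self.sigma**2))
    def random_position(self):
        return rng.normal(0.0, self.sigma, 3)

class ClumpyDecorator(Geometry):
    """Decorator: reroutes a fraction of the mass into small clumps, reusing
    the wrapped geometry for the smooth part and for drawing clump centres."""
    def __init__(self, base, clump_fraction, n_clumps, clump_size):
        self.base, self.f, self.size = base, clump_fraction, clump_size
        self.centres = np.array([base.random_position() for _ in range(n_clumps)])
    def density(self, xyz):
        smooth = (1.0 - self.f) * self.base.density(xyz)
        clumps = sum(np.exp(-np.sum((xyz - c)**2, axis=-1) / (2.0 * self.size**2))
                     for c in self.centres)
        return smooth + self.f * clumps / len(self.centres)
    def random_position(self):
        if rng.random() < self.f:          # pick a clump, then a point inside it
            c = self.centres[rng.integers(len(self.centres))]
            return c + rng.normal(0.0, self.size, 3)
        return self.base.random_position()

# Decorators chain: here one level, but a spiral decorator could wrap this one.
model = ClumpyDecorator(GaussianSphere(1.0), clump_fraction=0.3,
                        n_clumps=50, clump_size=0.05)
positions = np.array([model.random_position() for _ in range(10_000)])
print("rms radius of sampled positions:", np.sqrt((positions**2).sum(1).mean()))
```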
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
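A heavily simplified sketch of the spectrum-derivation step: hypothetical exponential depth-dose kernels stand in for per-energy-bin Monte Carlo PDDs, and a Levenberg-Marquardt fit recovers nonnegative bin weights (squaring the parameters enforces nonnegativity).

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Hypothetical mono-energetic depth-dose kernels D_E(z): exponentials whose
# effective attenuation decreases with energy (stand-ins for computed PDDs).
depths = np.linspace(0.0, 30.0, 60)                  # cm
energies = np.linspace(20.0, 120.0, 11)              # keV bin centres
mu = 0.25 * (energies / 20.0) ** -0.6                # cm^-1, illustrative
kernels = np.exp(-np.outer(mu, depths))              # shape (n_E, n_z)

# Synthetic "measured" PDD built from a known spectrum, plus noise.
w_true = np.exp(-0.5 * ((energies - 70.0) / 20.0) ** 2)
w_true /= w_true.sum()
pdd_meas = w_true @ kernels + rng.normal(0.0, 0.002, depths.size)

# Levenberg-Marquardt fit; squared parameters keep bin weights >= 0.
def residual(p):
    return (p ** 2) @ kernels - pdd_meas

p0 = np.full(energies.size, 1.0 / energies.size) ** 0.5
fit = least_squares(residual, p0, method="lm")
w_fit = fit.x ** 2
w_fit /= w_fit.sum()
print("true weights:     ", w_true.round(3))
print("recovered weights:", w_fit.round(3))
```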
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
NASA Astrophysics Data System (ADS)
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, J; Pelletier, C; Lee, C
Purpose: Organ doses for the Hodgkin’s lymphoma patients treated with cobalt-60 radiation were estimated using an anthropomorphic model and Monte Carlo modeling. Methods: A cobalt-60 treatment unit modeled in the BEAMnrc Monte Carlo code was used to produce phase space data. The Monte Carlo simulation was verified with percent depth dose measurement in water at various field sizes. Radiation transport through the lung blocks was modeled by adjusting the weights of phase space data. We imported a precontoured adult female hybrid model and generated a treatment plan. The adjusted phase space data and the human model were imported to the XVMC Monte Carlo code for dose calculation. The organ mean doses were estimated and dose volume histograms were plotted. Results: The percent depth dose agreement between measurement and calculation in water phantom was within 2% for all field sizes. The mean organ doses of heart, left breast, right breast, and spleen for the selected case were 44.3, 24.1, 14.6 and 3.4 Gy, respectively, with the midline prescription dose of 40.0 Gy. Conclusion: Organ doses were estimated for the patient group whose three-dimensional images are not available. This development may open the door to more accurate dose reconstruction and estimates of uncertainties in secondary cancer risk for Hodgkin’s lymphoma patients. This work was partially supported by the intramural research program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
Maximum likelihood estimation of finite mixture model for economic data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
A finite mixture model is a mixture model with a finite number of components. Such models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians. The main reason is that maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Maximum likelihood estimation is therefore used in the present paper to fit a finite mixture model in order to explore the relationship in nonlinear economic data. Specifically, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
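A compact sketch of maximum likelihood fitting of a two-component normal mixture via the EM algorithm, run on synthetic stand-in data rather than the paper's price series.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in data: two latent classes of "returns".
x = np.concatenate([rng.normal(-0.02, 0.01, 400), rng.normal(0.03, 0.02, 600)])

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# EM iterations for the two-component normal mixture (maximum likelihood).
pi, m, s = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each observation.
    p1 = pi * norm_pdf(x, m[0], s[0])
    p2 = (1.0 - pi) * norm_pdf(x, m[1], s[1])
    r = p1 / (p1 + p2)
    # M-step: responsibility-weighted ML updates of the parameters.
    pi = r.mean()
    m = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    s = np.array([np.sqrt(np.average((x - m[0])**2, weights=r)),
                  np.sqrt(np.average((x - m[1])**2, weights=1 - r))])

print(f"weights: {pi:.2f}/{1-pi:.2f}, means: {m.round(3)}, sds: {s.round(3)}")
```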
Mosaicing of airborne LiDAR bathymetry strips based on Monte Carlo matching
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Su, Dianpeng; Zhang, Kai; Ma, Yue; Wang, Mingwei; Yang, Anxiu
2017-09-01
This study proposes a new methodology for mosaicing airborne light detection and ranging (LiDAR) bathymetry (ALB) data based on Monte Carlo matching. Various errors occur in ALB data due to imperfect system integration and other interference factors. To account for these errors, a Monte Carlo matching algorithm based on a nonlinear least-squares adjustment model is proposed. First, the raw data of strip overlap areas were filtered according to their relative drift of depths. Second, a Monte Carlo model and nonlinear least-squares adjustment model were combined to obtain seven transformation parameters. Then, the multibeam bathymetric data were used to correct the initial strip during strip mosaicing. Finally, to evaluate the proposed method, the experimental results were compared with the results of the Iterative Closest Point (ICP) and three-dimensional Normal Distributions Transform (3D-NDT) algorithms. The results demonstrate that the algorithm proposed in this study is more robust and effective. When the quality of the raw data is poor, the Monte Carlo matching algorithm can still achieve centimeter-level accuracy for overlapping areas, which meets the bathymetric accuracy required by IHO Standards for Hydrographic Surveys Special Publication No. 44.
Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave
NASA Astrophysics Data System (ADS)
Yasuda, Shugo
2017-02-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to calculate the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of bacterium coupled with the macroscopic transports of the chemical cues and bacteria population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, which is defined by the ratio of the mean free path of bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behaviors for small Knudsen numbers is numerically verified.
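A minimal sketch of the run-and-tumble Monte Carlo component, with a fixed attractant gradient standing in for the finite-volume chemical field and invented rate parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1D run-and-tumble walkers in a fixed attractant gradient (a stand-in for
# the chemical field that the full scheme evolves with a finite-volume solver).
n, dt, speed = 2000, 0.01, 1.0
lam0, chi = 1.0, 0.5                      # base tumble rate, chemotactic bias

def grad_attractant(x):
    return 1.0                            # uniform gradient pointing right

x = np.zeros(n)
v = rng.choice([-1.0, 1.0], n)            # run direction of each bacterium

for _ in range(5000):
    x += v * speed * dt
    # Tumbling is suppressed when swimming up the gradient (biased tumbling).
    lam = lam0 * (1.0 - chi * np.sign(v * grad_attractant(x)))
    tumble = rng.random(n) < lam * dt
    v[tumble] = rng.choice([-1.0, 1.0], tumble.sum())

print(f"mean drift position: {x.mean():.2f} (positive = up-gradient)")
```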
Nonideal mixing of phosphatidylserine and phosphatidylcholine in the fluid lamellar phase.
Huang, J; Swanson, J E; Dibble, A R; Hinderliter, A K; Feigenson, G W
1993-01-01
The mixing of phosphatidylserine (PS) and phosphatidylcholine (PC) in fluid bilayer model membranes was studied by measuring binding of aqueous Ca2+ ions. The measured [Ca2+]aq was used to derive the activity coefficient for PS, gamma PS, in the lipid mixture. For (16:0, 18:1) PS in binary mixtures with either (16:0, 18:1)PC, (14:1, 14:1)PC, or (18:1, 18:1)PC, gamma PS > 1; i.e., mixing is nonideal, with PS and PC clustered rather than randomly distributed, despite the electrostatic repulsion between PS headgroups. To understand better this mixing behavior, Monte Carlo simulations of the PS/PC distributions were performed, using Kawasaki relaxation. The excess energy was divided into an electrostatic term Uel and one adjustable term including all other nonideal energy contributions, delta Em. Uel was calculated using a discrete charge theory. Kirkwood's coupling parameter method was used to calculate the excess free energy of mixing, delta GEmix, hence ln gamma PS,calc. The values of ln gamma PS,calc were equalized by adjusting delta Em in order to find the simulated PS/PC distribution that corresponded to the experimental results. We were thus able to compare the smeared charge calculation of [Ca2+]surf with a calculation ("masked evaluation method") that recognized clustering of the negatively charged PS: clustering was found to have a modest effect on [Ca2+]surf, relative to the smeared charge model. Even though both PS and PC tend to cluster, the long-range nature of the electrostatic repulsion reduces the extent of PS clustering at low PS mole fraction compared to PC clustering at an equivalent low PC mole fraction. PMID:8457667
Nonideal mixing of phosphatidylserine and phosphatidylcholine in the fluid lamellar phase.
Huang, J; Swanson, J E; Dibble, A R; Hinderliter, A K; Feigenson, G W
1993-02-01
The mixing of phosphatidylserine (PS) and phosphatidylcholine (PC) in fluid bilayer model membranes was studied by measuring binding of aqueous Ca2+ ions. The measured [Ca2+]aq was used to derive the activity coefficient for PS, gamma PS, in the lipid mixture. For (16:0, 18:1) PS in binary mixtures with either (16:0, 18:1)PC, (14:1, 14:1)PC, or (18:1, 18:1)PC, gamma PS > 1; i.e., mixing is nonideal, with PS and PC clustered rather than randomly distributed, despite the electrostatic repulsion between PS headgroups. To understand better this mixing behavior, Monte Carlo simulations of the PS/PC distributions were performed, using Kawasaki relaxation. The excess energy was divided into an electrostatic term Uel and one adjustable term including all other nonideal energy contributions, delta Em. Uel was calculated using a discrete charge theory. Kirkwood's coupling parameter method was used to calculate the excess free energy of mixing, delta GEmix, hence ln gamma PS,calc. The values of ln gamma PS,calc were equalized by adjusting delta Em in order to find the simulated PS/PC distribution that corresponded to the experimental results. We were thus able to compare the smeared charge calculation of [Ca2+]surf with a calculation ("masked evaluation method") that recognized clustering of the negatively charged PS: clustering was found to have a modest effect on [Ca2+]surf, relative to the smeared charge model. Even though both PS and PC tend to cluster, the long-range nature of the electrostatic repulsion reduces the extent of PS clustering at low PS mole fraction compared to PC clustering at an equivalent low PC mole fraction.
2D lattice model of a lipid bilayer: Microscopic derivation and thermodynamic exploration
NASA Astrophysics Data System (ADS)
Hakobyan, Davit; Heuer, Andreas
2017-02-01
Based on all-atom Molecular Dynamics (MD) simulations of a lipid bilayer we present a systematic mapping on a 2D lattice model. Keeping the lipid type and the chain order parameter as key variables we derive a free energy functional, containing the enthalpic interaction of adjacent lipids as well as the tail entropy. The functional form of both functions is explicitly determined for saturated and polyunsaturated lipids. By studying the lattice model via Monte Carlo simulations it is possible to reproduce the temperature dependence of the distribution of order parameters of the pure lipids, including the prediction of the gel transition. Furthermore, application to a mixture of saturated and polyunsaturated lipids yields the correct phase separation behavior at lower temperatures with a simulation time reduced by approximately 7 orders of magnitude as compared to the corresponding MD simulations. Even the time-dependence of the de-mixing is reproduced on a semi-quantitative level. Due to the generality of the approach we envisage a large number of further applications, ranging from modeling larger sets of lipids, sterols, and solvent proteins to predicting nucleation barriers for the melting of lipids. Particularly, from the properties of the 2D lattice model one can directly read off the enthalpy and entropy change of the 1,2-dipalmitoyl-sn-glycero-3-phosphocholine gel-to-liquid transition in excellent agreement with experimental and MD results.
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows the user to build a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm(3) in order to perform high-accuracy dose estimation, e.g., for calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.
Predicting herbicide mixture effects on multiple algal species using mixture toxicity models.
Nagai, Takashi
2017-10-01
The validity of the application of mixture toxicity models, concentration addition and independent action, to a species sensitivity distribution (SSD) for calculation of a multisubstance potentially affected fraction was examined in laboratory experiments. Toxicity assays of herbicide mixtures using 5 species of periphytic algae were conducted. Two mixture experiments were designed: a mixture of 5 herbicides with similar modes of action and a mixture of 5 herbicides with dissimilar modes of action, corresponding to the assumptions of the concentration addition and independent action models, respectively. Experimentally obtained mixture effects on 5 algal species were converted to the fraction of affected (>50% effect on growth rate) species. The predictive ability of the concentration addition and independent action models with direct application to SSD depended on the mode of action of chemicals. That is, prediction was better for the concentration addition model than the independent action model for the mixture of herbicides with similar modes of action. In contrast, prediction was better for the independent action model than the concentration addition model for the mixture of herbicides with dissimilar modes of action. Thus, the concentration addition and independent action models could be applied to SSD in the same manner as for a single-species effect. The present study to validate the application of the concentration addition and independent action models to SSD supports the usefulness of the multisubstance potentially affected fraction as the index of ecological risk. Environ Toxicol Chem 2017;36:2624-2630. © 2017 SETAC.
Accurate Theoretical Predictions of the Properties of Energetic Materials
2008-09-18
decomposition, Monte Carlo, molecular dynamics, supercritical fluids, solvation and separation, quantum Monte Carlo, potential energy surfaces, RDX, TNAZ... labs, who are contributing to the theoretical efforts, providing data for testing of the models, or aiding in the transition of the methods, models, and results to DoD applications. The major goals of the project are: • Models that describe phase transitions and chemical reactions in...
Prokhorov, Alexander; Prokhorova, Nina I
2012-11-20
We applied the bidirectional reflectance distribution function (BRDF) model consisting of diffuse, quasi-specular, and glossy components to the Monte Carlo modeling of spectral effective emissivities for nonisothermal cavities. A method for extension of a monochromatic three-component (3C) BRDF model to a continuous spectral range is proposed. The initial data for this method are the BRDFs measured in the plane of incidence at a single wavelength and several incidence angles and directional-hemispherical reflectance measured at one incidence angle within a finite spectral range. We proposed the Monte Carlo algorithm for calculation of spectral effective emissivities for nonisothermal cavities whose internal surface is described by the wavelength-dependent 3C BRDF model. The results obtained for a cylindroconical nonisothermal cavity are discussed and compared with results obtained using the conventional specular-diffuse model.
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
NASA Astrophysics Data System (ADS)
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
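A minimal sketch of the Monte Carlo propagation step with invented sub-model summaries: each sub-model contributes a probability distribution for its parameter, and samples are pushed through the combining model to get the output distribution.

```python
import numpy as np

rng = np.random.default_rng(8)
M = 200_000   # number of Monte Carlo trials

# Hypothetical sub-model results, each summarized by its own estimate and
# uncertainty (as would come from separate calibration experiments).
stiffness = rng.normal(1.25e3, 4.0, M)        # N*m/rad
inertia = rng.normal(2.10e-3, 1.5e-5, M)      # kg*m^2

# Combined model output, e.g. the transducer's resonance frequency.
f0 = np.sqrt(stiffness / inertia) / (2.0 * np.pi)

print(f"f0 = {f0.mean():.1f} Hz, standard uncertainty {f0.std(ddof=1):.1f} Hz")
print("95% coverage interval:", np.percentile(f0, [2.5, 97.5]).round(1))
```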
The direct simulation of acoustics on Earth, Mars, and Titan.
Hanford, Amanda D; Long, Lyle N
2009-02-01
With the recent success of the Huygens lander on Titan, a moon of Saturn, there has been renewed interest in further exploring the acoustic environments of the other planets in the solar system. The direct simulation Monte Carlo (DSMC) method is used here for modeling sound propagation in the atmospheres of Earth, Mars, and Titan at a variety of altitudes above the surface. DSMC is a particle method that describes gas dynamics through direct physical modeling of particle motions and collisions. The validity of DSMC for the entire range of Knudsen numbers (Kn), where Kn is defined as the mean free path divided by the wavelength, allows for the exploration of sound propagation in planetary environments for all values of Kn. DSMC results at a variety of altitudes on Earth, Mars, and Titan including the details of nonlinearity, absorption, dispersion, and molecular relaxation in gas mixtures are given for a wide range of Kn showing agreement with various continuum theories at low Kn and deviation from continuum theory at high Kn. Despite large computation time and memory requirements, DSMC is the method best suited to study high altitude effects or where continuum theory is not valid.
Large eddy simulations of a bluff-body stabilized hydrogen-methane jet flame
NASA Astrophysics Data System (ADS)
Drozda, Tomasz; Pope, Stephen
2005-11-01
Large eddy simulation (LES) is conducted of the turbulent bluff-body stabilized hydrogen-methane flame considered in the experiments of the Combustion Research Facility at the Sandia National Laboratories and of the Thermal Research Group at the University of Sydney [1]. Both reacting and non-reacting flows are considered. The subgrid scale (SGS) closure in LES is based on the scalar filtered mass density function (SFMDF) methodology [2]. A flamelet model is used to relate the chemical composition to the mixture fraction. The modeled SFMDF transport equation is solved by a hybrid finite-difference (FD) / Monte Carlo (MC) scheme. The FD component of the hybrid solver is validated by comparisons of the experimentally available flow statistics with those predicted by LES. The results via this method capture important features of the flames as observed experimentally. [1] A. R. Masri, R. W. Dibble, and R. S. Barlow. The structure of turbulent nonpremixed flames revealed by Raman-Rayleigh-LIF measurements. Prog. Energy Combust. Sci., 22:307--362, 1996. [2] F. A. Jaberi, P. J. Colucci, S. James, P. Givi, and S. B. Pope. Filtered mass density function for large eddy simulation of turbulent reacting flows. J. Fluid Mech., 401:85--121, 1999.
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls. We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
NASA Technical Reports Server (NTRS)
Tikidjian, Raffi; Mackey, Ryan
2008-01-01
The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft tracked and in communication demand over up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes), and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
Lateral Organization of Lipids in Multi-component Liposomes
NASA Astrophysics Data System (ADS)
Ramachandran, Sanoop; Laradji, Mohamed; Sunil Kumar, P. B.
2009-04-01
In spite of their fluid nature and low elastic modulus, membranes play a crucial role in maintaining the structural integrity of the cell. Recent experiments have challenged the passive nature of the membrane as proposed by the classical fluid mosaic model. Experiments indicate that biomembranes of eukaryotic cells may be laterally organized into small nanoscopic domains, called rafts, which are rich in sphingomyelin and cholesterol. It is largely believed that this in-plane organization is essential for a variety of physiological functions such as signaling, recruitment of specific proteins and endocytosis. However, elucidation of the fundamental issues, including the mechanisms leading to the formation of lipid rafts, their stability, and their size, remains difficult. This has reiterated the importance of understanding the equilibrium phase behavior and the kinetics of fluid multicomponent lipid membranes before attempts are made to find the effects of more complex mechanisms that may be involved in the formation and stability of lipid rafts. The current increase in interest in domain formation in multicomponent membranes also stems from the experiments demonstrating fluid-fluid coexistence in mixtures of lipids and cholesterol and the success of several computational models in predicting their behavior. Here we review the time-dependent Ginzburg-Landau model, dynamical triangulation Monte Carlo, and dissipative particle dynamics, which are some of the methods commonly employed.
NASA Technical Reports Server (NTRS)
Mikus, T.; Heywood, J. B.; Hicks, R. E.
1978-01-01
A modified Zeldovich kinetic scheme was used to predict nitric oxide formation in the burned gases. Nonuniformities in fuel-air ratio in the primary zone were accounted for by a distribution of fuel-air ratios. This was followed by one or more dilution zones in which a Monte Carlo calculation was employed to follow the mixing and dilution processes. Predictions of NOX emissions were compared with various available experimental data, and satisfactory agreement was achieved. In particular, the model is applied to the NASA swirl-can modular combustor. The operating characteristics of this combustor which can be inferred from the modeling predictions are described. Parametric studies are presented which examine the influence of the modeling parameters on the NOX emission level. A series of flow visualization experiments demonstrates the fuel droplet breakup and turbulent recirculation processes. A tracer experiment quantitatively follows the jets from the swirler as they move downstream and entrain surrounding gases. Techniques were developed for calculating both fuel-air ratio and degree of nonuniformity from measurements of CO2, CO, O2, and hydrocarbons. A burning experiment made use of these techniques to map out the flow field in terms of local equivalence ratio and mixture nonuniformity.
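A sketch of a rate-limited Zeldovich estimate using the textbook rate constant for O + N2 -> NO + N and invented burned-gas conditions; the paper's combustor model is far more detailed.

```python
import numpy as np

# Simplified (rate-limited) Zeldovich NO formation, d[NO]/dt ~= 2 k1 [O][N2],
# with a textbook rate constant for O + N2 -> NO + N. The concentrations in
# mol/cm^3 below are illustrative values for a near-stoichiometric burned gas.
def k1(T):
    return 1.8e14 * np.exp(-38370.0 / T)      # cm^3 mol^-1 s^-1

T = 2400.0          # K, burned-gas temperature (assumed)
O = 1e-10           # mol/cm^3, equilibrium O-atom concentration (assumed)
N2 = 7e-6           # mol/cm^3 (assumed)

dt, t_end = 1e-5, 5e-3
NO = 0.0
for _ in range(int(t_end / dt)):
    NO += 2.0 * k1(T) * O * N2 * dt           # forward Euler integration

print(f"NO after {t_end*1e3:.0f} ms at {T:.0f} K: {NO:.2e} mol/cm^3")
```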
Peterson, S W; Polf, J; Bues, M; Ciangaru, G; Archambault, L; Beddar, S; Smith, A
2009-05-21
The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. Comparison of the 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. Agreement of the full width at half maximum values for the measured and simulated lateral fluence profiles was within 1.3 mm for all energies. The position of measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle has the ability to accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.
92 Years of the Ising Model: A High Resolution Monte Carlo Study
NASA Astrophysics Data System (ADS)
Xu, Jiahao; Ferrenberg, Alan M.; Landau, David P.
2018-04-01
Using extensive Monte Carlo simulations that employ the Wolff cluster flipping and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model with lattice sizes ranging from 16^3 to 1024^3. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, we obtained the critical inverse temperature K_c = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that improves upon previous Monte Carlo estimates.
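A minimal sketch of the Wolff cluster update used in such studies, run on a small 3D lattice at the quoted K_c; the production calculations use far larger lattices, histogram reweighting, and quadruple precision.

```python
import numpy as np

rng = np.random.default_rng(9)

L, K = 8, 0.221654626            # small lattice at the estimated K_c
spins = rng.choice([-1, 1], size=(L, L, L))
p_add = 1.0 - np.exp(-2.0 * K)   # Wolff bond-activation probability

def wolff_step(s):
    seed = tuple(rng.integers(0, L, 3))
    cluster_spin = s[seed]
    stack, in_cluster = [seed], {seed}
    while stack:
        site = stack.pop()
        s[site] = -cluster_spin              # flip sites as they are processed
        x, y, z = site
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nb = ((x+d[0]) % L, (y+d[1]) % L, (z+d[2]) % L)
            if nb not in in_cluster and s[nb] == cluster_spin and rng.random() < p_add:
                in_cluster.add(nb)
                stack.append(nb)
    return len(in_cluster)

for _ in range(1000):
    wolff_step(spins)
print(f"|magnetization| per spin after 1000 cluster updates: {abs(spins.mean()):.3f}")
```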
Monte Carlo Simulation of Nonlinear Radiation Induced Plasmas. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, B. S.
1972-01-01
A Monte Carlo simulation model for radiation induced plasmas with nonlinear properties due to recombination was developed, employing a piecewise linearized predict-correct iterative technique. Several important variance reduction techniques were developed and incorporated into the model, including an antithetic variates technique. This approach is especially efficient for plasma systems with inhomogeneous media, multidimensions, and irregular boundaries. The Monte Carlo code developed has been applied to the determination of the electron energy distribution function and related parameters for a noble gas plasma created by alpha-particle irradiation. The characteristics of the radiation induced plasma involved are given.
VARIAN CLINAC 6 MeV Photon Spectra Unfolding using a Monte Carlo Meshed Model
NASA Astrophysics Data System (ADS)
Morató, S.; Juste, B.; Miró, R.; Verdú, G.
2017-09-01
The energy spectrum is the best descriptive function for determining the photon beam quality of a medical linear accelerator (linac). The use of realistic photon spectra in Monte Carlo simulations is of great importance for obtaining precise dose calculations in Radiotherapy Treatment Planning (RTP). Reconstruction of photon spectra emitted by medical accelerators from measured depth dose distributions in a water cube is an important tool for commissioning a Monte Carlo treatment planning system. The reconstruction problem is an ill-conditioned inverse radiation transport problem whose solution may become unstable under small perturbations in the input data. This paper presents a more stable spectral reconstruction method which can be used to provide an independent confirmation of source models for a given machine without any prior knowledge of the spectral distribution. The Monte Carlo models used in this work are built with unstructured meshes to simulate the linear accelerator head geometry realistically.
NASA Astrophysics Data System (ADS)
Juddoo, Mrinal; Masri, Assaad R.; Pope, Stephen B.
2011-12-01
This paper reports measured stability limits and PDF calculations of piloted, turbulent flames of compressed natural gas (CNG) partially-premixed with either pure oxygen, or with varying levels of O2/N2. Stability limits are presented for flames of CNG fuel premixed with up to 20% oxygen as well as CNG-O2-N2 fuel where the O2 content is varied from 8 to 22% by volume. Calculations are presented for (i) Sydney flame B [Masri et al. 1988] which uses pure CNG as well as flames B15 to B25 where the CNG is partially-premixed with 15-25% oxygen by volume, respectively and (ii) Sandia methane-air (1:3 by volume) flame E [Barlow et al. 2005] as well as new flames E15 and E25 that are partially-premixed with 'reconstituted air' where the O2 content in nitrogen is 15 and 25% by volume, respectively. The calculations solve a transported PDF of composition using a particle-based Monte Carlo method and employ the EMST mixing model as well as detailed chemical kinetics. The addition of oxygen to the fuel increases stability, shortens the flames, broadens the reaction zone, and shifts the stoichiometric mixture fraction towards the inner side of the jet. It is found that for pure CNG flames where the reaction zone is narrow (∼0.1 in mixture fraction space), the PDF calculations fail to reproduce the correct level of local extinction on approach to blow-off. A broadening in the reaction zone up to about 0.25 in mixture fraction space is needed for the PDF/EMST approach to be able to capture these finite-rate chemistry effects. It is also found that for the same level of partial premixing, increasing the O2/N2 ratio increases the maximum levels of CO and NO but shifts the peak to richer mixture fractions. Over the range of oxygenation investigated here, stability limits have shown to improve almost linearly with increasing oxygen levels in the fuel and with increasing the contribution of release rate from the pilot.
Khadilkar, Mihir R; Escobedo, Fernando A
2014-10-17
Sought-after ordered structures of mixtures of hard anisotropic nanoparticles can often be thermodynamically unfavorable due to the components' geometric incompatibility to densely pack into regular lattices. A simple compatibilization rule is identified wherein the particle sizes are chosen such that the order-disorder transition pressures of the pure components match (and the entropies of the ordered phases are similar). Using this rule with representative polyhedra from the truncated-cube family that form pure-component plastic crystals, Monte Carlo simulations show the formation of plastic-solid solutions for all compositions and for a wide range of volume fractions.
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirgayussa, I Gde Eka, E-mail: ekadirgayussa@gmail.com; Yani, Sitti; Haryanto, Freddy, E-mail: freddy@fi.itb.ac.id
2015-09-30
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model for a Varian Clinac iX 6 MV equipped with a Millennium MLC120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo simulation for commissioning this linac head was divided into two stages: designing the head model using BEAMnrc and characterizing it using BEAMDP, and then analyzing the difference between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, a virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1, 6.2, 6.3, 6.4, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The MC calculations using DOSXYZnrc in a water phantom produced percent depth doses (PDDs) and beam profiles at 10 cm depth, which were compared with measurements. This process is considered complete when the dose difference between measured and calculated relative depth-dose data along the central axis and in the dose profile at 10 cm depth is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 leaf and can be used for reliable patient dose calculations. In this commissioning process, the dose-difference criteria for the PDDs and dose profiles were achieved using an incident electron energy of 6.4 MeV.
Expanding metal mixture toxicity models to natural stream and lake invertebrate communities
Balistrieri, Laurie S.; Mebane, Christopher A.; Schmidt, Travis S.; Keller, William (Bill)
2015-01-01
A modeling approach that was used to predict the toxicity of dissolved single and multiple metals to trout is extended to stream benthic macroinvertebrates, freshwater zooplankton, and Daphnia magna. The approach predicts the accumulation of toxicants (H, Al, Cd, Cu, Ni, Pb, and Zn) in organisms using 3 equilibrium accumulation models that define interactions between dissolved cations and biological receptors (biotic ligands). These models differ in the structure of the receptors and include a 2-site biotic ligand model, a bidentate biotic ligand or 2-pKa model, and a humic acid model. The predicted accumulation of toxicants is weighted using toxicant-specific coefficients and incorporated into a toxicity function called Tox, which is then related to observed mortality or invertebrate community richness using a logistic equation. All accumulation models provide reasonable fits to metal concentrations in tissue samples of stream invertebrates. Despite the good fits, distinct differences in the magnitude of toxicant accumulation and biotic ligand speciation exist among the models for a given solution composition. However, predicted biological responses are similar among the models because there are interdependencies among model parameters in the accumulation–Tox models. To illustrate potential applications of the approaches, the 3 accumulation–Tox models for natural stream invertebrates are used in Monte Carlo simulations to predict the probability of adverse impacts in catchments of differing geology in central Colorado (USA); to link geology, water chemistry, and biological response; and to demonstrate how this approach can be used to screen for potential risks associated with resource development.
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred
2015-11-01
We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used, thus this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.
White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E
2018-05-01
Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
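A hedged sketch of this data-generating process, with illustrative (not the paper's) parameter values: a "change" class whose slope steepens after the change-point, a "no-change" class that stays on the pre-change trajectory, and simple monotone dropout.

```python
# Hedged sketch of the two-class broken-stick process described above;
# all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 500
times = np.arange(0.0, 10.0)                 # annual visits
change_point, slope_pre, slope_post = 6.0, -0.1, -1.5
is_change = rng.random(n) < 0.5              # latent "change" class label

y = np.empty((n, times.size))
for i in range(n):
    traj = 25.0 + slope_pre * times          # e.g. an MMSE-like score
    if is_change[i]:                         # slope steepens after change-point
        traj += (slope_post - slope_pre) * np.clip(times - change_point, 0.0, None)
    y[i] = traj + rng.normal(0.0, 0.5, times.size)

# Simple missingness: later visits are progressively more likely to be missed.
observed = rng.random(y.shape) > 0.1 * (times / times.max())
print(y[observed].size, "of", y.size, "observations retained")
```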
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem, there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
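As a toy illustration of the third family of schemes, the following is a bare-bones ABC rejection sampler (a simplified relative of ABC-SMC, not the paper's implementation) for one unknown rate in a noisily observed process; the stochastic model, prior, and tolerance are all assumptions made for this sketch.

```python
# Toy ABC rejection sampler -- a simplified relative of the ABC-SMC scheme
# compared above, not the paper's code.
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n=50):
    """Hypothetical noisy relaxation dynamics with unknown rate theta."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] + 0.1 * theta * (1.0 - x[t - 1]) + rng.normal(0.0, 0.05)
    return x + rng.normal(0.0, 0.1, n)       # additive observation noise

observed = simulate(theta=0.8)
accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.0, 2.0)            # draw from the prior
    if np.mean((simulate(theta) - observed) ** 2) < 0.05:
        accepted.append(theta)               # keep draws that reproduce the data
print(len(accepted), "accepted; posterior mean approx.", np.mean(accepted))
```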
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results, and it also exhibits consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and stock market price for all selected countries.
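The component-selection step can be illustrated with scikit-learn's GaussianMixture, whose bic method implements the Bayesian Information Criterion used above; the two-regime synthetic "returns" below stand in for the actual price data.

```python
# Choosing the number of mixture components by BIC with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.001, 0.01, 800),      # calm regime
                    rng.normal(-0.002, 0.05, 200)]).reshape(-1, 1)

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
       for k in range(1, 6)}
print(bic, "-> chosen k =", min(bic, key=bic.get))
```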
Monte Carlo Simulation Using HyperCard and Lotus 1-2-3.
ERIC Educational Resources Information Center
Oulman, Charles S.; Lee, Motoko Y.
Monte Carlo simulation is a computer modeling procedure for mimicking observations on a random variable. A random number generator is used in generating the outcome for the events that are being modeled. The simulation can be used to obtain results that otherwise require extensive testing or complicated computations. This paper describes how Monte…
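The idea in miniature, with Python standing in for the HyperCard/Lotus 1-2-3 setup described above: a random number generator mimics the modeled events, and the simulated outcomes replace an analytic computation.

```python
# Minimal Monte Carlo simulation: estimate an event probability by mimicking
# observations on a random variable.
import random

random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) > 9)
print("estimated P(two dice sum > 9) =", hits / trials)   # exact: 6/36 = 0.1667
```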
USDA-ARS?s Scientific Manuscript database
A model to simulate radiative transfer (RT) of sun-induced chlorophyll fluorescence (SIF) of three-dimensional (3-D) canopy, FluorWPS, was proposed and evaluated. The inclusion of fluorescence excitation was implemented with the ‘weight reduction’ and ‘photon spread’ concepts based on Monte Carlo ra...
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
NASA Astrophysics Data System (ADS)
Brdar, S.; Seifert, A.
2018-01-01
We present a novel Monte-Carlo ice microphysics model, McSnow, to simulate the evolution of ice particles due to deposition, aggregation, riming, and sedimentation. The model is an application and extension of the super-droplet method of Shima et al. (2009) to the more complex problem of rimed ice particles and aggregates. For each individual super-particle, the ice mass, rime mass, rime volume, and the number of monomers are predicted, establishing a four-dimensional particle-size distribution. The sensitivity of the model to various assumptions is discussed based on box model and one-dimensional simulations. We show that the Monte-Carlo method provides a feasible approach to tackle this high-dimensional problem. The largest uncertainty seems to be related to the treatment of the riming processes. This calls for additional field and laboratory measurements of partially rimed snowflakes.
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, which results in non-independent detections of individuals, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
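A small simulation with made-up parameter values makes the core point visible: correlated detections, generated here as a beta-binomial, keep the same mean count as independent binomial detections but inflate the variance, which a standard binomial mixture model misreads as a larger abundance.

```python
# Illustrative simulation: beta-binomial (correlated) vs. binomial
# (independent) detections with the same mean detection probability.
import numpy as np

rng = np.random.default_rng(4)
N, p, rho, surveys = 60, 0.4, 0.3, 10_000

# Beta with mean p and intra-class correlation rho, drawn once per survey.
a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
counts_bb = rng.binomial(N, rng.beta(a, b, surveys))   # correlated detections
counts_bin = rng.binomial(N, p, surveys)               # independent detections

print("binomial      mean/var:", counts_bin.mean().round(2), counts_bin.var().round(1))
print("beta-binomial mean/var:", counts_bb.mean().round(2), counts_bb.var().round(1))
```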
A competitive binding model predicts the response of mammalian olfactory receptors to mixtures
NASA Astrophysics Data System (ADS)
Singh, Vijay; Murphy, Nicolle; Mainland, Joel; Balasubramanian, Vijay
Most natural odors are complex mixtures of many odorants, but due to the large number of possible mixtures only a small fraction can be studied experimentally. To get a realistic understanding of the olfactory system, we need methods to predict responses to complex mixtures from single-odorant responses. Focusing on mammalian olfactory receptors (ORs in mouse and human), we propose a simple biophysical model for odor-receptor interactions in which only one odor molecule can bind to a receptor at a time. The resulting competition for occupancy of the receptor accounts for the experimentally observed nonlinear mixture responses. We first fit a dose-response relationship to individual odor responses and then use those parameters in a competitive binding model to predict mixture responses. With no additional parameters, the model predicts responses of 15 (of 18 tested) receptors to within 10-30% of the observed values, for mixtures with 2, 3 and 12 odorants chosen from a panel of 30. Extensions of our basic model with odorant interactions lead to additional nonlinearities observed in mixture responses, such as suppression, cooperativity, and overshadowing. Our model provides a systematic framework for characterizing and parameterizing such mixing nonlinearities from mixture response data.
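A minimal sketch of the competitive-binding calculation described above, with made-up binding constants and efficacies: in the paper's workflow each odorant's parameters would come from its own dose-response fit, and competition for the single binding site then fixes the mixture response with no extra parameters.

```python
# Minimal competitive-binding sketch; K and e values are hypothetical.
import numpy as np

def mixture_response(conc, K, e):
    """One receptor's response when all odorants compete for one site."""
    occ = conc / K                          # relative occupancy pressure
    return np.sum(e * occ) / (1.0 + np.sum(occ))

K = np.array([2.0, 50.0, 10.0])             # hypothetical binding constants
e = np.array([1.0, 0.4, 0.7])               # hypothetical maximal responses

singles = [mixture_response(np.eye(3)[i] * 20.0, K, e) for i in range(3)]
mix = mixture_response(np.full(3, 20.0), K, e)
print("single-odorant responses:", np.round(singles, 3))
print("mixture response:", round(mix, 3), "< naive sum:", round(sum(singles), 3))
```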
NASA Astrophysics Data System (ADS)
Orkoulas, Gerassimos; Panagiotopoulos, Athanassios Z.
1994-07-01
In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053, ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of returns for the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using the two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, where we fit our real data. Second, we present the application of the normal mixture distributions model in risk analysis, where we apply it to evaluate the value at risk (VaR) and conditional value at risk (CVaR) with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating value at risk (VaR) and conditional value at risk (CVaR), as it can capture the stylized facts of non-normality and leptokurtosis in the returns distribution.
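A hedged illustration of the procedure, with synthetic returns standing in for the FBMKLCI series: fit a two-component normal mixture, then read VaR and CVaR off draws from the fitted model.

```python
# Fit a two-component normal mixture and estimate VaR/CVaR from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
returns = np.concatenate([rng.normal(0.01, 0.03, 900),     # calm component
                          rng.normal(-0.02, 0.08, 100)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(returns)
sim = gm.sample(200_000)[0].ravel()         # draws from the fitted mixture

alpha = 0.05
var = -np.quantile(sim, alpha)              # VaR(95%), expressed as a loss
cvar = -sim[sim <= -var].mean()             # expected loss beyond the VaR
print(f"VaR(95%) = {var:.4f}, CVaR(95%) = {cvar:.4f}")
```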
Monte Carlo Transport for Electron Thermal Transport
NASA Astrophysics Data System (ADS)
Chenhall, Jeffrey; Cao, Duc; Moses, Gregory
2015-11-01
The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
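The Monte Carlo power recipe in miniature: simulate many datasets under an assumed effect size, run the planned test on each, and report the rejection rate as the power estimate. The effect size and sample sizes below are illustrative.

```python
# Monte Carlo estimate of statistical power for a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def power(n, effect, sims=2000, alpha=0.05):
    rejections = sum(
        stats.ttest_ind(rng.normal(0.0, 1.0, n),
                        rng.normal(effect, 1.0, n)).pvalue < alpha
        for _ in range(sims))
    return rejections / sims

for n in (20, 50, 100):
    print(f"n = {n:3d}: estimated power = {power(n, effect=0.5):.2f}")
```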
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
Three-dimensional Monte Carlo calculation of some nuclear parameters
NASA Astrophysics Data System (ADS)
Günay, Mehtap; Şeker, Gökmen
2017-09-01
In this study, a fusion-fission hybrid reactor system was designed by using 9Cr2WVTa Ferritic steel structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2, as fluids. The fluids were used in the liquid first wall, blanket and shield zones of a fusion-fission hybrid reactor system. Beryllium (Be) zone with the width of 3 cm was used for the neutron multiplication between the liquid first wall and blanket. This study analyzes the nuclear parameters such as tritium breeding ratio (TBR), energy multiplication factor (M), heat deposition rate, fission reaction rate in liquid first wall, blanket and shield zones and investigates effects of reactor grade Pu content in the designed system on these nuclear parameters. Three-dimensional analyses were performed by using the Monte Carlo code MCNPX-2.7.0 and nuclear data library ENDF/B-VII.0.
NASA Astrophysics Data System (ADS)
Günay, M.; Şarer, B.; Kasap, H.
2014-08-01
In the present investigation, a fusion-fission hybrid reactor system was designed by using 9Cr2WVTa ferritic steel structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% SFG-Pu, 99-95% Li20Sn80 + 1-5% SFG-PuF4, and 99-95% Li20Sn80 + 1-5% SFG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of a fusion-fission hybrid reactor system. A beryllium zone with a width of 3 cm was used for neutron multiplication between the liquid first wall and blanket. The contributions of each isotope in the fluids to the nuclear parameters of a fusion-fission hybrid reactor, such as the tritium breeding ratio, energy multiplication factor and heat deposition rate, were computed in the liquid first wall, blanket and shield zones. Three-dimensional analyses were performed by using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.
NASA Astrophysics Data System (ADS)
Marashdeh, Mohammad W.; Al-Hamarneh, Ibrahim F.; Abdel Munem, Eid M.; Tajuddin, A. A.; Ariffin, Alawiah; Al-Omari, Saleh
Rhizophora spp. wood has the potential to serve as a solid water or tissue equivalent phantom for photon and electron beam dosimetry. In this study, the effective atomic number (Zeff) and effective electron density (Neff) of raw wood and binderless Rhizophora spp. particleboards in four different particle sizes were determined in the 10-60 keV energy region. The mass attenuation coefficients used in the calculations were obtained using the Monte Carlo N-Particle (MCNP5) simulation code. The MCNP5 calculations of the attenuation parameters for the Rhizophora spp. samples were plotted graphically against photon energy and discussed in terms of their relative differences compared with those of water and breast tissue. Moreover, the validity of the MCNP5 code was examined by comparing the calculated attenuation parameters with the theoretical values obtained by the XCOM program based on the mixture rule. The results indicated that the same MCNP5 procedure can be followed to determine the attenuation of gamma rays at several photon energies in other materials.
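The mixture rule referred to above states that the mass attenuation coefficient of a compound or mixture is the weight-fraction-weighted sum of its constituents' coefficients; a sketch with placeholder (not XCOM) values:

```python
# XCOM-style mixture rule: (mu/rho)_mix = sum_i w_i * (mu/rho)_i.
import numpy as np

def mu_rho_mixture(weight_fractions, mu_rho_constituents):
    w = np.asarray(weight_fractions)
    assert abs(w.sum() - 1.0) < 1e-9, "weight fractions must sum to 1"
    return float(w @ np.asarray(mu_rho_constituents))

# Hypothetical cellulose-like material (C, H, O) at one photon energy.
w = [0.444, 0.062, 0.494]                 # elemental weight fractions
mu_rho = [0.250, 0.370, 0.280]            # placeholder (mu/rho) values, cm^2/g
print("mixture mu/rho =", mu_rho_mixture(w, mu_rho), "cm^2/g")
```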
NASA Astrophysics Data System (ADS)
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2013-03-01
Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte Carlo-simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage of being adaptable to any geometry, in particular more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performance of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only moderately improved the results over the homogeneous one. On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.
NASA Astrophysics Data System (ADS)
Fomin, P. A.
2018-03-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) one hydrocarbon fuel CnHm (for example, methane, propane, cyclohexane etc.) and (ii) multi-fuel gaseous mixtures (∑aiCniHmi) (for example, a mixture of methane and propane, synthesis gas, benzene and kerosene) are presented for the first time. The models can be used for any stoichiometry, including fuel-rich mixtures, when the reaction products contain molecules of carbon. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle. The constants of the models have a clear physical meaning. The models can also be used to calculate the thermodynamic parameters of a mixture in a state of chemical equilibrium.
NASA Astrophysics Data System (ADS)
Klouch, Nawel; Riane, Houaria; Hamdache, Fatima; Addi, Djamel
2013-05-01
We are interested in modeling the interaction between light and biological tissue using the Monte Carlo method, an approach used to solve modeling problems in different physical domains. Through the Monte Carlo approach, we try to interpret the spectral responses (absorption, reflectance, transmittance) of normal human tissue for its three dominant tints in the visible range (350-700 nm). We then focus on the spectral response of human tissue with varicosities in order to determine the optimal operating conditions of a semiconductor laser for aesthetic purposes.
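A bare-bones version of the kind of photon random walk such tissue-optics Monte Carlo codes build on, reduced to one dimension with placeholder optical coefficients (real codes add anisotropic phase functions and realistic geometry):

```python
# 1-D photon random walk: exponential free paths, absorption by weight
# reduction, direction flips for scattering. Coefficients are placeholders.
import numpy as np

rng = np.random.default_rng(7)
mu_a, mu_s = 0.5, 10.0                     # absorption/scattering (1/mm)
mu_t = mu_a + mu_s

def launch_photon(slab_depth=5.0):
    """Return the weight absorbed before the photon escapes or is cut off."""
    z, direction, weight, absorbed = 0.0, 1.0, 1.0, 0.0
    while weight > 1e-3:
        z += direction * rng.exponential(1.0 / mu_t)   # next interaction site
        if z < 0.0 or z > slab_depth:
            break                                      # escaped the slab
        absorbed += weight * (mu_a / mu_t)             # weight-reduction absorption
        weight *= mu_s / mu_t
        direction = rng.choice([-1.0, 1.0])            # isotropic in 1-D
    return absorbed

mean_absorbed = np.mean([launch_photon() for _ in range(5_000)])
print("fraction of launched energy absorbed ~", round(mean_absorbed, 3))
```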
NASA Astrophysics Data System (ADS)
Sokolovskiy, Vladimir V.; Buchelnikov, Vasiliy D.; Zagrebin, Mikhail A.; Grünebohm, Anna; Entel, Peter
The effect of Co- and Cr-doping on the magnetic and magnetocaloric properties of Ni-Mn-(In, Ga, Sn, and Al) Heusler alloys has been theoretically studied by combining first-principles and Monte Carlo approaches. The magnetic and magnetocaloric properties are obtained as a function of temperature and magnetic field using a mixed type of Potts and Blume-Emery-Griffiths model, where the model parameters are obtained from ab initio calculations. The Monte Carlo calculations allowed us to predict a giant inverse magnetocaloric effect in partly new, hypothetical magnetic Heusler alloys across the martensitic transformation.
Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming
2015-07-01
We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.
Particle tracking acceleration via signed distance fields in direct-accelerated geometry Monte Carlo
Shriwise, Patrick C.; Davis, Andrew; Jacobson, Lucas J.; ...
2017-08-26
Computer-aided design (CAD)-based Monte Carlo radiation transport is of value to the nuclear engineering community for its ability to conduct transport on high-fidelity models of nuclear systems, but it is more computationally expensive than native geometry representations. This work describes the adaptation of a rendering data structure, the signed distance field, as a geometric query tool for accelerating CAD-based transport in the direct-accelerated geometry Monte Carlo toolkit. Demonstrations of its effectiveness are shown for several problems. The beginnings of a predictive model for the data structure's utilization based on various problem parameters are also introduced.
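A sketch of the signed-distance-field idea on an illustrative sphere geometry (not the toolkit's code): precompute the signed distance to the nearest surface on a grid, so a particle at x can advance roughly |sdf(x)| without any surface-crossing test; a production code would subtract the grid spacing to keep the step strictly safe.

```python
# Signed-distance-field query for collision-test-free particle advancement.
import numpy as np

radius = 1.0
grid = np.linspace(-2.0, 2.0, 81)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - radius     # negative inside the sphere

def safe_step(point):
    """Approximate safe travel distance from `point`, via grid lookup."""
    idx = tuple(int(np.clip(np.searchsorted(grid, c), 0, grid.size - 1))
                for c in point)
    return abs(sdf[idx])

print("safe step at the origin :", safe_step((0.0, 0.0, 0.0)))   # ~ radius
print("safe step near the shell:", safe_step((0.95, 0.0, 0.0)))  # small
```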
A New Approach to Modeling Densities and Equilibria of Ice and Gas Hydrate Phases
NASA Astrophysics Data System (ADS)
Zyvoloski, G.; Lucia, A.; Lewis, K. C.
2011-12-01
The Gibbs-Helmholtz Constrained (GHC) equation is a new cubic equation of state that was recently derived by Lucia (2010) and Lucia et al. (2011) by constraining the energy parameter in the Soave form of the Redlich-Kwong equation to satisfy the Gibbs-Helmholtz equation. The key attributes of the GHC equation are: 1) It is a multi-scale equation because it uses the internal energy of departure, UD, as a natural bridge between the molecular and bulk phase length scales. 2) It does not require acentric factors, volume translation, regression of parameters to experimental data, binary (kij) interaction parameters, or other forms of empirical correlations. 3) It is a predictive equation of state because it uses a database of values of UD determined from NTP Monte Carlo simulations. 4) It can readily account for differences in molecular size and shape. 5) It has been successfully applied to non-electrolyte mixtures as well as weak and strong aqueous electrolyte mixtures over wide ranges of temperature, pressure and composition to predict liquid density and phase equilibrium with up to four phases. 6) It has been extensively validated with experimental data. 7) The AAD% error between predicted and experimental liquid density is 1% while the AAD% error in phase equilibrium predictions is 2.5%. 8) It has been used successfully within the subsurface flow simulation program FEHM. In this work we describe recent extensions of the multi-scale predictive GHC equation to modeling the phase densities and equilibrium behavior of hexagonal ice and gas hydrates. In particular, we show that radial distribution functions, which can be determined by NTP Monte Carlo simulations, can be used to establish correct standard state fugacities of Ih ice and gas hydrates. From this, it is straightforward to determine both the phase density of ice or gas hydrates as well as any equilibrium involving ice and/or hydrate phases. A number of numerical results for mixtures of N2, O2, CH4, CO2, water, and NaCl in permafrost conditions are presented to illustrate the predictive capabilities of the multi-scale GHC equation. In particular, we show that the GHC equation correctly predicts 1) The density of Ih ice and methane hydrate to within 1%. 2) The melting curve for hexagonal ice. 3) The hydrate-gas phase co-existence curve. 4) Various phase equilibria involving ice and hydrate phases. We also show that the GHC equation approach can be readily incorporated into subsurface flow simulation programs like FEHM to predict the behavior of permafrost and other reservoirs where ice and/or hydrates are present. Many geometric illustrations are used to elucidate key concepts. References A. Lucia, A Multi-Scale Gibbs Helmholtz Constrained Cubic Equation of State. J. Thermodynamics: Special Issue on Advances in Gas Hydrate Thermodynamics and Transport Properties. Available on-line [doi:10.1155/2010/238365]. A. Lucia, B.M. Bonk, A. Roy and R.R. Waterman, A Multi-Scale Framework for Multi-Phase Equilibrium Flash. Comput. Chem. Engng. In press.
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of a substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through the original measurements), and for a tungsten/Teflon mixture (from literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
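Two of the classical formulas compared above, written out for a host of permittivity eps_h with volume fraction f of spherical inclusions eps_i (the numerical values below are placeholders, not the paper's measurements):

```python
# Classical effective-permittivity mixture formulas.
#   Lichtenecker:    ln eps_eff = (1 - f) ln eps_h + f ln eps_i
#   Maxwell Garnett: eps_eff = eps_h * (eps_i + 2 eps_h + 2 f (eps_i - eps_h))
#                                    / (eps_i + 2 eps_h -   f (eps_i - eps_h))
import numpy as np

def lichtenecker(eps_h, eps_i, f):
    return np.exp((1 - f) * np.log(eps_h) + f * np.log(eps_i))

def maxwell_garnett(eps_h, eps_i, f):
    num = eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)
    den = eps_i + 2 * eps_h - f * (eps_i - eps_h)
    return eps_h * num / den

eps_host, eps_incl, f = 2.6 - 0.02j, 12.0 - 3.0j, 0.15   # placeholder values
print("Lichtenecker   :", lichtenecker(eps_host, eps_incl, f))
print("Maxwell Garnett:", maxwell_garnett(eps_host, eps_incl, f))
```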
Modeling and analysis of personal exposures to VOC mixtures using copulas
Su, Feng-Chiao; Mukherjee, Bhramar; Batterman, Stuart
2014-01-01
Environmental exposures typically involve mixtures of pollutants, which must be understood to evaluate cumulative risks, that is, the likelihood of adverse health effects arising from two or more chemicals. This study uses several powerful techniques to characterize dependency structures of mixture components in personal exposure measurements of volatile organic compounds (VOCs) with aims of advancing the understanding of environmental mixtures, improving the ability to model mixture components in a statistically valid manner, and demonstrating broadly applicable techniques. We first describe characteristics of mixtures and introduce several terms, including the mixture fraction which represents a mixture component's share of the total concentration of the mixture. Next, using VOC exposure data collected in the Relationship of Indoor Outdoor and Personal Air (RIOPA) study, mixtures are identified using positive matrix factorization (PMF) and by toxicological mode of action. Dependency structures of mixture components are examined using mixture fractions and modeled using copulas, which address dependencies of multiple variables across the entire distribution. Five candidate copulas (Gaussian, t, Gumbel, Clayton, and Frank) are evaluated, and the performance of fitted models was evaluated using simulation and mixture fractions. Cumulative cancer risks are calculated for mixtures, and results from copulas and multivariate lognormal models are compared to risks calculated using the observed data. Results obtained using the RIOPA dataset showed four VOC mixtures, representing gasoline vapor, vehicle exhaust, chlorinated solvents and disinfection by-products, and cleaning products and odorants. Often, a single compound dominated the mixture, however, mixture fractions were generally heterogeneous in that the VOC composition of the mixture changed with concentration. Three mixtures were identified by mode of action, representing VOCs associated with hematopoietic, liver and renal tumors. Estimated lifetime cumulative cancer risks exceeded 10−3 for about 10% of RIOPA participants. Factors affecting the likelihood of high concentration mixtures included city, participant ethnicity, and house air exchange rates. The dependency structures of the VOC mixtures fitted Gumbel (two mixtures) and t (four mixtures) copulas, types that emphasize tail dependencies. Significantly, the copulas reproduced both risk predictions and exposure fractions with a high degree of accuracy, and performed better than multivariate lognormal distributions. Copulas may be the method of choice for VOC mixtures, particularly for the highest exposures or extreme events, cases that poorly fit lognormal distributions and that represent the greatest risks. PMID:24333991
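A hedged sketch of the copula workflow on synthetic data: rank-transform each mixture component to normal scores, fit a Gaussian copula (the simplest of the five candidate families above), then simulate new dependent exposures through the empirical quantiles.

```python
# Gaussian-copula fit and simulation on a synthetic two-component "VOC" pair.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
z = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], 1000)
exposures = np.exp(0.8 * z + 0.2)             # synthetic lognormal exposures

u = (stats.rankdata(exposures, axis=0) - 0.5) / len(exposures)
rho = np.corrcoef(stats.norm.ppf(u).T)[0, 1]  # Gaussian-copula correlation
print("fitted copula correlation:", round(rho, 3))

znew = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], 5000)
unew = stats.norm.cdf(znew)
sim = np.column_stack([np.quantile(exposures[:, j], unew[:, j])
                       for j in range(2)])
print("simulated exposure correlation:", round(np.corrcoef(sim.T)[0, 1], 3))
```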
Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M
2004-07-01
Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregularly shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional (time and space) simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations of the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.
Martín-Calvo, Ana; García-Pérez, Elena; Manuel Castillo, Juan; Calero, Sofia
2008-12-21
We use Monte Carlo simulations to study the adsorption and separation of the natural gas components in IRMOF-1 and Cu-BTC metal-organic frameworks. We computed the adsorption isotherms of pure components, binary, and five-component mixtures, analyzing the siting of the molecules in the structure for the different loadings. The bulk compositions studied for the mixtures were 50 : 50 and 90 : 10 for CH4-CO2, 90 : 10 for N2-CO2, and 95 : 2.0 : 1.5 : 1.0 : 0.5 for the CH4-C2H6-N2-CO2-C3H8 mixture. We chose this composition because it is similar to an average sample of natural gas. Our simulations show that CO2 is preferentially adsorbed over propane, ethane, methane and N2 in the complete pressure range under study. Longer alkanes are favored over shorter alkanes and the lowest adsorption corresponds to N2. Though IRMOF-1 has a significantly higher adsorption capacity than Cu-BTC, the adsorption selectivity of CO2 over CH4 and N2 is found to be higher in the latter, proving that the separation efficiency is largely affected by the shape, the atomic composition and the type of linkers of the structure.
Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain
2018-02-01
Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
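A minimal one-dimensional CLS sketch in the spirit described above, with illustrative mean chord lengths and cross sections: the next material interface is sampled on the fly from the exponential chord-length distribution of the Markov mixture, rather than building full realizations of the random medium.

```python
# 1-D Chord Length Sampling for a binary Markov slab (pure absorbers, toy values).
import numpy as np

rng = np.random.default_rng(8)
lam = {0: 1.0, 1: 0.5}       # mean chord lengths of materials 0 and 1 (cm)
sigma_t = {0: 0.2, 1: 2.0}   # total cross sections (1/cm)

def transmission(slab_length=5.0, particles=50_000):
    """Estimate uncollided transmission through the slab via CLS."""
    passed = 0
    for _ in range(particles):
        # Starting material sampled by volume fraction lam_i / (lam_0 + lam_1).
        x, mat = 0.0, int(rng.random() < lam[1] / (lam[0] + lam[1]))
        alive = True
        while alive:
            to_interface = rng.exponential(lam[mat])
            to_collision = rng.exponential(1.0 / sigma_t[mat])
            if x + min(to_interface, to_collision) >= slab_length:
                break                       # escapes through the far face
            x += min(to_interface, to_collision)
            if to_collision < to_interface:
                alive = False               # collided (absorbed in this toy)
            else:
                mat = 1 - mat               # crossed an on-the-fly interface
        if alive:
            passed += 1
    return passed / particles

print("CLS transmission estimate:", transmission())
```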
Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny
2011-01-01
Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.
An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
ERIC Educational Resources Information Center
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Monte Carlo algorithms for Brownian phylogenetic models.
Horvilleur, Benjamin; Lartillot, Nicolas
2014-11-01
Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
Estimation and Model Selection for Finite Mixtures of Latent Interaction Models
ERIC Educational Resources Information Center
Hsu, Jui-Chen
2011-01-01
Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
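A concrete instance of the device: a standard normal is recoverable as X | V ~ Uniform(-sqrt(V), sqrt(V)) with V ~ Gamma(3/2, rate 1/2), which the following sketch verifies by simulation.

```python
# Scale mixture of uniforms recovering N(0, 1), checked by a KS test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
v = rng.gamma(shape=1.5, scale=2.0, size=200_000)   # rate 1/2 <=> scale 2
x = rng.uniform(-np.sqrt(v), np.sqrt(v))

print(stats.kstest(x, "norm"))   # KS statistic should be tiny
```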
NASA Astrophysics Data System (ADS)
Rognlien, Thomas; Rensink, Marvin
2016-10-01
Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require the coupling of the plasma and recycling or injected neutral gas. Various neutral models are used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for device simulations at the high densities and sizes typical of fusion power plants, because the neutral collisional mean-free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules, and that readily incorporates nonlinear neutral-neutral collision effects. In addition to the strong charge-exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.
Ajmani, Subhash; Rogers, Stephen C; Barley, Mark H; Burgess, Andrew N; Livingstone, David J
2010-09-17
In our earlier work, we demonstrated that it is possible to characterize binary mixtures using single-component descriptors by applying various mixing rules. We also showed that these methods were successful in building predictive QSPR models to study various mixture properties of interest. Herein, we develop a QSPR model of an excess thermodynamic property of binary mixtures, i.e., excess molar volume (V^E). In the present study, we use a set of mixture descriptors which we earlier designed to specifically account for intermolecular interactions between the components of a mixture and applied successfully to the prediction of infinite-dilution activity coefficients using neural networks (part 1 of this series). We obtain a significant QSPR model for the prediction of excess molar volume (V^E) using consensus neural networks and five mixture descriptors. We find that hydrogen bond and thermodynamic descriptors are the most important in determining excess molar volume (V^E), which is in line with the theory of intermolecular forces governing excess mixture properties. The results also suggest that the mixture descriptors utilized herein may be sufficient to model a wide variety of properties of binary and possibly even more complex mixtures.
Hybrid Monte Carlo-Diffusion Method For Light Propagation in Tissue With a Low-Scattering Region
NASA Astrophysics Data System (ADS)
Hayashi, Toshiyuki; Kashio, Yoshihiko; Okada, Eiji
2003-06-01
The heterogeneity of the tissues in a head, especially the low-scattering cerebrospinal fluid (CSF) layer surrounding the brain, has previously been shown to strongly affect light propagation in the brain. The radiosity-diffusion method, in which the light propagation in the CSF layer is assumed to obey the radiosity theory, has been employed to predict the light propagation in head models. Although the CSF layer is assumed to be a nonscattering region in the radiosity-diffusion method, fine arachnoid trabeculae cause faint scattering in the CSF layer in real heads. A novel approach, the hybrid Monte Carlo-diffusion method, is proposed to calculate head models that include a low-scattering region in which the light propagation obeys neither the diffusion approximation nor the radiosity theory. The light propagation in the high-scattering region is calculated by means of the diffusion approximation solved by the finite-element method, and that in the low-scattering region is predicted by the Monte Carlo method. The intensity and mean time of flight of the detected light for the head model with a low-scattering CSF layer calculated by the hybrid method agreed well with those given by the Monte Carlo method, whereas the results calculated by means of the diffusion approximation included considerable error caused by the effect of the CSF layer. In the hybrid method, the time-consuming Monte Carlo calculation is employed only for the thin CSF layer, and hence, the computation time of the hybrid method is dramatically shorter than that of the Monte Carlo method.
QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.
Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng
2018-05-01
Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model (coefficient of determination 0.9366, root mean square error 0.1345) predicted the toxicities of the 45 mixtures, which exhibited additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluating Mixture Modeling for Clustering: Recommendations and Cautions
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
This article provides a large-scale investigation into several of the properties of mixture-model clustering techniques (also referred to as latent class cluster analysis, latent profile analysis, model-based clustering, probabilistic clustering, Bayesian classification, unsupervised learning, and finite mixture models; see Vermunt & Magdison,…
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
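As a concrete reference point for the estimators discussed above, the following minimal sketch computes the classical harmonic mean estimate of the marginal likelihood from posterior draws for a toy conjugate model where the exact answer is known in closed form. The model, seed, and sample size are illustrative only, and the paper's partition weighted kernel generalization is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy conjugate model: y ~ N(theta, 1), prior theta ~ N(0, 1), one datum.
# Exact marginal likelihood: N(y; 0, sqrt(2)); posterior: N(y/2, sqrt(1/2)).
y = 1.3
draws = rng.normal(y / 2, np.sqrt(0.5), size=50_000)  # stand-in MCMC sample

# Harmonic mean estimator: 1/m_hat = posterior average of 1/likelihood.
loglik = stats.norm.logpdf(y, loc=draws, scale=1.0)
a = -loglik
log_m_hat = -(a.max() + np.log(np.mean(np.exp(a - a.max()))))  # stable log-sum

print("harmonic mean estimate:", np.exp(log_m_hat))
print("exact marginal likelihood:", stats.norm.pdf(y, 0.0, np.sqrt(2.0)))
```

The notorious instability of this baseline (its variance can be infinite) is what motivates better-behaved single-sample estimators such as the partition weighted kernel approach.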
Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
NASA Astrophysics Data System (ADS)
Baldacchino, Tara; Worden, Keith; Rowson, Jennifer
2017-02-01
A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space, allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge, this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
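The robustness argument rests on tail behavior: the Gaussian log-density falls off quadratically in the residual, while the Student-t falls off only logarithmically, so a gross outlier exerts far less pull on the fit. A minimal sketch (the residuals and degrees of freedom are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

resid = np.array([0.0, 3.0, 20.0])   # typical point, mild outlier, gross outlier
print(stats.norm.logpdf(resid))      # quadratic penalty: ~ -0.9, -5.4, -200.9
print(stats.t.logpdf(resid, df=3))   # much milder penalty: ~ -1.0, -3.8, -10.8
```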
Nys, Charlotte; Janssen, Colin R; De Schamphelaere, Karel A C
2017-01-01
Recently, several bioavailability-based models have been shown to predict acute metal mixture toxicity with reasonable accuracy. However, the application of such models to chronic mixture toxicity is less well established. Therefore, in the present study we developed a chronic metal mixture bioavailability model (MMBM) by combining the existing chronic daphnid bioavailability models for Ni, Zn, and Pb with the independent action (IA) model, assuming strict non-interaction between the metals for binding at the metal-specific biotic ligand sites. To evaluate the predictive capacity of the MMBM, chronic (7 d) reproductive toxicity of Ni-Zn-Pb mixtures to Ceriodaphnia dubia was investigated in four different natural waters (pH range: 7-8; Ca range: 1-2 mM; dissolved organic carbon range: 5-12 mg/L). In each water, mixture toxicity was investigated at equitoxic metal concentration ratios as well as at environmentally realistic metal concentration ratios. Statistical analysis of mixture effects revealed that observed interactive effects depended on the metal concentration ratio investigated when evaluated relative to the concentration addition (CA) model, but not when evaluated relative to the IA model. This indicates that interactive effects observed in an equitoxic experimental design cannot always be simply extrapolated to environmentally realistic exposure situations. Generally, the IA model predicted Ni-Zn-Pb mixture toxicity more accurately than the CA model. Overall, the MMBM predicted Ni-Zn-Pb mixture toxicity (expressed as % reproductive inhibition relative to a control) in 85% of the treatments with less than 20% error. Moreover, the MMBM predicted chronic toxicity of the ternary Ni-Zn-Pb mixture at least as accurately as the toxicity of the individual metal treatments (RMSE Mix = 16; RMSE Zn only = 18; RMSE Ni only = 17; RMSE Pb only = 23). Based on the present study, we believe MMBMs can be a promising tool to account for the effects of water chemistry on metal mixture toxicity during chronic exposure and could be used in metal risk assessment frameworks. Copyright © 2016 Elsevier Ltd. All rights reserved.
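For reference, the two additivity baselines compared above can be written down in a few lines. The sketch below predicts the effect of a hypothetical two-component mixture under independent action (response multiplication) and concentration addition (toxic-unit summation), assuming log-logistic single-substance concentration-response curves; all EC50s, slopes, and exposure concentrations are made-up numbers.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical two-component mixture; EC50s and slopes are invented.
ec50 = np.array([1.0, 4.0])    # mg/L
slope = np.array([2.0, 1.5])
conc = np.array([0.5, 1.0])    # exposure concentrations in the mixture

def effect(c, ec50, b):
    """Log-logistic concentration-response; effect fraction in [0, 1)."""
    return 1.0 / (1.0 + (ec50 / c) ** b)

# Independent action (IA): multiply the single-substance responses.
e_ia = 1.0 - np.prod(1.0 - effect(conc, ec50, slope))

# Concentration addition (CA): solve sum_i c_i / EC_x,i = 1 for effect x.
def ecx(x, ec50, b):
    return ec50 * (x / (1.0 - x)) ** (1.0 / b)

e_ca = brentq(lambda x: np.sum(conc / ecx(x, ec50, slope)) - 1.0, 1e-9, 1 - 1e-9)
print(f"IA predicted effect: {e_ia:.3f}, CA predicted effect: {e_ca:.3f}")
```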
Study of the Transition Flow Regime using Monte Carlo Methods
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling
NASA Astrophysics Data System (ADS)
Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.
2018-02-01
A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
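The core of MLMC is the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)], where coupled fine/coarse paths share the same Brownian increments so that the correction terms have small variance and need only a few samples. Below is a minimal sketch for a toy Ornstein-Uhlenbeck SDE with Euler timestepping; it is not the paper's dispersion model or its symplectic/splitting integrators, and the per-level sample allocation is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
T, h0 = 1.0, 0.5

def euler_pair(level, n_samples):
    """Coupled fine/coarse Euler-Maruyama paths of dX = -X dt + dW, X(0)=1.
    Returns samples of P_0 at level 0, and of P_l - P_(l-1) above, for the
    toy payoff P = X(T)^2."""
    n_fine = int(T / h0) * 2 ** level
    h_fine = T / n_fine
    dW = rng.normal(0.0, np.sqrt(h_fine), size=(n_samples, n_fine))
    xf = np.ones(n_samples)
    for k in range(n_fine):
        xf += -xf * h_fine + dW[:, k]
    if level == 0:
        return xf ** 2
    xc = np.ones(n_samples)
    h_coarse = 2 * h_fine
    for k in range(0, n_fine, 2):
        xc += -xc * h_coarse + dW[:, k] + dW[:, k + 1]  # shared increments
    return xf ** 2 - xc ** 2

# Telescoping sum: cheap coarse levels get many samples, fine levels few.
samples_per_level = [100_000, 20_000, 4_000, 800]
estimate = sum(np.mean(euler_pair(l, n)) for l, n in enumerate(samples_per_level))
print("MLMC estimate of E[X(T)^2]:", estimate)
```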
The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS
NASA Astrophysics Data System (ADS)
Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-07-01
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles by using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, the novel features of TOPAS, such as the time feature or the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, such as the percent depth dose curve, the spread-out Bragg peak (SOBP), and the beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. An exported radiotherapy (RT) plan from the TPS is interpreted by the interface and then translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate non-beam performance, such as the neutron background, of the nozzles. Furthermore, the nozzle model can be used to study mechanical optimization of the nozzle design.
Kilinc, Deniz; Demir, Alper
2017-08-01
The brain is extremely energy efficient and remarkably robust in what it does despite the considerable variability and noise caused by the stochastic mechanisms in neurons and synapses. Computational modeling is a powerful tool that can help us gain insight into this important aspect of brain mechanism. A deep understanding and computational design tools can help develop robust neuromorphic electronic circuits and hybrid neuroelectronic systems. In this paper, we present a general modeling framework for biological neuronal circuits that systematically captures the nonstationary stochastic behavior of ion channels and synaptic processes. In this framework, fine-grained, discrete-state, continuous-time Markov chain models of both ion channels and synaptic processes are treated in a unified manner. Our modeling framework features a mechanism for the automatic generation of the corresponding coarse-grained, continuous-state, continuous-time stochastic differential equation models for neuronal variability and noise. Furthermore, we repurpose non-Monte Carlo noise analysis techniques, which were previously developed for analog electronic circuits, for the stochastic characterization of neuronal circuits in both the time and frequency domains. We verify that the fast non-Monte Carlo analysis methods produce results with the same accuracy as computationally expensive Monte Carlo simulations. We have implemented the proposed techniques in a prototype simulator, where both biological neuronal and analog electronic circuits can be simulated together in a coupled manner.
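To illustrate the fine-grained/coarse-grained pairing described above, the sketch below simulates a hypothetical population of two-state ion channels both as a discrete-state Markov chain and as the corresponding coarse-grained SDE (a chemical Langevin approximation). The rates, channel count, and step sizes are arbitrary, and the paper's automatic model generation and non-Monte Carlo noise analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, N, dt, steps = 5.0, 3.0, 200, 1e-3, 5000

# Fine-grained, discrete-state Markov chain: number of open channels,
# with binomial opening/closing transitions per time step.
n_open = 0
mc_path = np.empty(steps)
for t in range(steps):
    n_open += rng.binomial(N - n_open, alpha * dt) - rng.binomial(n_open, beta * dt)
    mc_path[t] = n_open / N

# Coarse-grained SDE (chemical Langevin approximation) for the open fraction.
x = 0.0
sde_path = np.empty(steps)
for t in range(steps):
    drift = alpha * (1 - x) - beta * x
    diffusion = np.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / N)
    x = min(max(x + drift * dt + diffusion * np.sqrt(dt) * rng.normal(), 0.0), 1.0)
    sde_path[t] = x

print("theory:", alpha / (alpha + beta),
      "MC:", mc_path[2000:].mean(), "SDE:", sde_path[2000:].mean())
```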
Minimal model for the secondary structures and conformational conversions in proteins
NASA Astrophysics Data System (ADS)
Imamura, Hideo
Better understanding of the protein folding process can provide physical insight into the function of proteins and make it possible to benefit from the genetic information accumulated so far. Protein folding normally takes place in less than seconds, but even seconds are beyond the reach of current computational power for simulations at all-atom detail. Hence, to model and explore the protein folding process it is crucial to construct a model that can adequately describe the physical process and mechanism at the relevant time scale. We discuss a reduced off-lattice model that can express α-helix and β-hairpin conformations defined solely by a given sequence, in order to investigate the folding mechanism of conformations such as a β-hairpin and also to investigate conformational conversions in proteins. The first two chapters introduce and review essential concepts in protein folding modelling: physical interactions in proteins and various simple models. They also review computational methods, in particular the Metropolis Monte Carlo method, its dynamic interpretation, and thermodynamic Monte Carlo algorithms. Chapter 3 describes the minimalist model that represents both α-helix and β-sheet conformations using simple potentials; the native conformation can be specified by the sequence without particular conformational biases toward a reference state. In Chapter 4, the model is used to investigate the folding mechanism of β-hairpins exhaustively using dynamic Monte Carlo and a thermodynamic Monte Carlo method, an efficient combination of multicanonical Monte Carlo and the weighted histogram analysis method. We show that the major folding pathways and the folding rate depend on the location of a hydrophobic pair. The conformational conversions between α-helix and β-sheet conformations are examined in Chapters 5 and 6: first the conformational conversion due to mutation in a non-hydrophobic system, and then the conversion due to mutation with a hydrophobic pair at a different position, at various temperatures.
Rasch Mixture Models for DIF Detection
Strobl, Carolin; Zeileis, Achim
2014-01-01
Rasch mixture models can be a useful tool when checking the assumption of measurement invariance for a single Rasch model. They provide advantages compared to manifest differential item functioning (DIF) tests when the DIF groups are only weakly correlated with the manifest covariates available. Unlike in single Rasch models, estimation of Rasch mixture models is sensitive to the specification of the ability distribution even when the conditional maximum likelihood approach is used. It is demonstrated in a simulation study how differences in ability can influence the latent classes of a Rasch mixture model. If the aim is only DIF detection, it is not of interest to uncover such ability differences as one is only interested in a latent group structure regarding the item difficulties. To avoid any confounding effect of ability differences (or impact), a new score distribution for the Rasch mixture model is introduced here. It ensures the estimation of the Rasch mixture model to be independent of the ability distribution and thus restricts the mixture to be sensitive to latent structure in the item difficulties only. Its usefulness is demonstrated in a simulation study, and its application is illustrated in a study of verbal aggression. PMID:29795819
Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models
NASA Astrophysics Data System (ADS)
Mitchell, S. J.; Landau, D. P.
2006-03-01
Using high-resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior. Supported by the NSF. [1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005). [2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).
Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data
ERIC Educational Resources Information Center
Kim, Su-Young; Kim, Jee-Seon
2012-01-01
This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
NASA Astrophysics Data System (ADS)
Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.
2018-02-01
Using an off-lattice kinetic Monte Carlo model, we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate, and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Quantum interference and Monte Carlo simulations of multiparticle production
NASA Astrophysics Data System (ADS)
Bialas, A.; Krzywicki, A.
1995-02-01
We show that the effects of quantum interference can be implemented in Monte Carlo generators by modelling the generalized Wigner functions. A specific prescription for an appropriate modification of the weights of events produced by standard generators is proposed.
Local Solutions in the Estimation of Growth Mixture Models
ERIC Educational Resources Information Center
Hipp, John R.; Bauer, Daniel J.
2006-01-01
Finite mixture models are well known to have poorly behaved likelihood functions featuring singularities and multiple optima. Growth mixture models may suffer from fewer of these problems, potentially benefiting from the structure imposed on the estimated class means and covariances by the specified growth model. As demonstrated here, however,…
Finite-size scaling study of the two-dimensional Blume-Capel model
NASA Astrophysics Data System (ADS)
Beale, Paul D.
1986-02-01
The phase diagram of the two-dimensional Blume-Capel model is investigated by using the technique of phenomenological finite-size scaling. The location of the tricritical point and the values of the critical and tricritical exponents are determined. The location of the tricritical point (Tt=0.610+/-0.005, Dt=1.9655+/-0.0010) is well outside the error bars for the value quoted in previous Monte Carlo simulations but in excellent agreement with more recent Monte Carlo renormalization-group results. The values of the critical and tricritical exponents, with the exception of the leading thermal tricritical exponent, are in excellent agreement with previous calculations, conjectured values, and Monte Carlo renormalization-group studies.
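The Blume-Capel Hamiltonian underlying these calculations is H = -J sum over nearest neighbors of s_i s_j + D sum over sites of s_i^2, with s_i in {-1, 0, +1}. The sketch below samples it with single-site Metropolis updates near the tricritical couplings quoted above; it is a toy illustration of the model itself, not the phenomenological finite-size-scaling calculation of the paper, and the lattice size and sweep count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
L, J = 16, 1.0
D, T = 1.9655, 0.610          # near the tricritical point quoted above
spins = rng.choice([-1, 0, 1], size=(L, L))

def site_energy(s, i, j, val):
    """Energy contribution of placing spin value `val` at site (i, j)."""
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return -J * val * nn + D * val * val

for _ in range(200):                       # Metropolis sweeps
    for i in range(L):
        for j in range(L):
            new = rng.choice([-1, 0, 1])
            dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] = new
print("magnetization per site:", spins.mean())
```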
Gutzwiller Monte Carlo approach for a critical dissipative spin model
NASA Astrophysics Data System (ADS)
Casteels, Wim; Wilson, Ryan M.; Wouters, Michiel
2018-06-01
We use the Gutzwiller Monte Carlo approach to simulate the dissipative XYZ model in the vicinity of a dissipative phase transition. This approach captures classical spatial correlations together with the full on-site quantum behavior while neglecting nonlocal quantum effects. By considering finite two-dimensional lattices of various sizes, we identify a ferromagnetic and two paramagnetic phases, in agreement with earlier studies. The greatly reduced numerical complexity of the Gutzwiller Monte Carlo approach facilitates efficient simulation of relatively large lattice sizes. The inclusion of the spatial correlations makes it possible to capture parts of the phase diagram that are completely missed by the widely applied Gutzwiller decoupling of the density matrix.
Phase diagram of two-dimensional hard rods from fundamental mixed measure density functional theory
NASA Astrophysics Data System (ADS)
Wittmann, René; Sitta, Christoph E.; Smallenburg, Frank; Löwen, Hartmut
2017-10-01
A density functional theory for the bulk phase diagram of two-dimensional orientable hard rods is proposed and tested against Monte Carlo computer simulation data. In detail, an explicit density functional is derived from fundamental mixed measure theory and freely minimized numerically for hard discorectangles. The phase diagram, which involves stable isotropic, nematic, smectic, and crystalline phases, is obtained and shows good agreement with the simulation data. Our functional is valid for a multicomponent mixture of hard particles with arbitrary convex shapes and provides a reliable starting point to explore various inhomogeneous situations of two-dimensional hard rods and their Brownian dynamics.
NASA Astrophysics Data System (ADS)
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
A predictive model of logP for Pt(II) and Pt(IV) complexes, built with the Monte Carlo method using the CORAL software, has been validated with six different splits into training and validation sets. The predictive potential of the models for the six splits was improved using the so-called index of ideality of correlation. The suggested models make it possible to extract the molecular features that cause logP to increase or decrease.
The anesthetic action of some polyhalogenated ethers-Monte Carlo method based QSAR study.
Golubović, Mlađan; Lazarević, Milan; Zlatanović, Dragan; Krtinić, Dane; Stoičkov, Viktor; Mladenović, Bojan; Milić, Dragan J; Sokolović, Dušan; Veselinović, Aleksandar M
2018-04-13
To date, there has been an ongoing debate about the mode of action of general anesthetics, with many biological sites postulated as targets for their action. However, postoperative nausea and vomiting are common problems in whose development inhalational agents may play a role. When a mode of action is unknown, QSAR modelling is essential in drug development. To investigate aspects of their anesthetic action, QSAR models based on the Monte Carlo method were developed for a set of polyhalogenated ethers. Until now, their anesthetic action has not been completely defined, although some hypotheses have been suggested. Therefore, a QSAR model should be developed on molecular fragments that contribute to anesthetic action. QSAR models were built on the basis of optimal molecular descriptors derived from the SMILES notation and local graph invariants, and the Monte Carlo optimization method with three random splits into training and test sets was applied for model development. Different methods, including the novel index of ideality of correlation, were applied to determine the robustness of the model and its predictive potential. The Monte Carlo optimization process proved to be an efficient in silico tool for building a robust model of good statistical quality. Molecular fragments with both positive and negative influences on anesthetic action were determined. The presented study can be useful in the search for novel anesthetics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.
1993-01-01
Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved, and should thus improve the reliability of predicting in-space durability of materials based on ground laboratory testing. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Ellis; Derek Gaston; Benoit Forget
In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that, for a simplified model, the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.
Pestana, Luis Ruiz; Minnetian, Natalie; Lammers, Laura Nielsen; ...
2018-01-02
When driven out of equilibrium, many diverse systems can form complex spatial and dynamical patterns, even in the absence of attractive interactions. Using kinetic Monte Carlo simulations, we investigate the phase behavior of a binary system of particles of dissimilar size confined between semiflexible planar surfaces, in which the nanoconfinement introduces a non-local coupling between particles, which we model as an activation energy barrier to diffusion that decreases with the local fraction of the larger particle. The system autonomously reaches a cyclical non-equilibrium state characterized by the formation and dissolution of metastable micelle-like clusters with the small particles in the core and the large ones in the surrounding corona. The power spectrum of the fluctuations in the aggregation number exhibits 1/f noise reminiscent of self-organized critical systems. Finally, we suggest that the dynamical metastability of the micellar structures arises from an inversion of the energy landscape, in which the relaxation dynamics of one of the species induces a metastable phase for the other species.
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
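Of the three metrics, the Dice similarity coefficient is the simplest to state: twice the overlap divided by the total size of the two regions. A minimal sketch of computing Dice over candidate thresholds of a probabilistic segmentation (the arrays here are random stand-ins, not imaging data):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|) for binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

rng = np.random.default_rng(3)
prob = rng.random((64, 64))                              # stand-in probability map
ref = (prob + 0.2 * rng.normal(size=prob.shape)) > 0.5   # noisy "gold standard"

# Optimal threshold by maximizing Dice over a grid of candidate cutoffs.
best_dice, best_t = max((dice(prob > t, ref), t) for t in np.linspace(0.1, 0.9, 17))
print(f"best Dice {best_dice:.3f} at threshold {best_t:.2f}")
```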
Baj-Rossi, Camilla; De Micheli, Giovanni; Carrara, Sandro
2012-01-01
We report on the electrochemical detection of anti-cancer drugs in human serum with sensitivity values in the range of 8–925 nA/μM. Multi-walled carbon nanotubes were functionalized with three different cytochrome P450 isoforms (CYP1A2, CYP2B6, and CYP3A4). A model used to effectively describe the cytochrome P450 deposition onto carbon nanotubes was confirmed by Monte Carlo simulations. Voltammetric measurements were performed in phosphate buffer saline (PBS) as well as in human serum, giving well-defined current responses upon addition of increasing concentrations of anti-cancer drugs. The results demonstrate the capability to measure drug concentrations in the pharmacological ranges in human serum. Another important result is the possibility of detecting pairs of drugs present in the same sample, which is highly desirable for therapies with a high risk of side effects and for anti-cancer pharmacological treatments based on mixtures of different drugs. Our technology holds potential for inexpensive multi-panel drug monitoring in personalized therapy. PMID:22778656
Fracture Simulation of Highly Crosslinked Polymer Networks: Triglyceride-Based Adhesives
NASA Astrophysics Data System (ADS)
Lorenz, Christian; Stevens, Mark; Wool, Richard
2003-03-01
The ACRES program at the U. of Delaware has shown that triglyceride oils derived from plants are a favorable alternative to traditional adhesives. The triglyceride networks are formed from an initial mixture of styrene monomers, free-radical initiators, and triglycerides. We have performed simulations to study the effect of the composition and physical characteristics of the triglyceride network on its strength. A coarse-grained, bead-spring model of the triglyceride system is used. The average triglyceride consists of 6 beads per chain, the styrenes are represented as single beads, and the initiators are two-bead chains. The polymer network is formed using an off-lattice 3D Monte Carlo simulation, in which the initiators activate the styrene and triglyceride reactive sites and bonds are then randomly formed between the styrene and active triglyceride monomers, producing a highly crosslinked polymer network. Molecular dynamics simulations of the network under tensile and shear strains were performed to determine the strength as a function of the network composition. The relationship between the network structure and its strength is also discussed.
Prokhorov, Alexander
2012-05-01
This paper proposes a three-component bidirectional reflectance distribution function (3C BRDF) model consisting of diffuse, quasi-specular, and glossy components for calculation of effective emissivities of blackbody cavities and then investigates the properties of the new reflection model. The particle swarm optimization method is applied for fitting a 3C BRDF model to measured BRDFs. The model is incorporated into the Monte Carlo ray-tracing algorithm for isothermal cavities. Finally, the paper compares the results obtained using the 3C model and the conventional specular-diffuse model of reflection.
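A three-component reflection model of this general shape can be sketched as a Lambertian diffuse term plus two lobes of different widths about the mirror direction. The decomposition below is illustrative only; the weights, Gaussian lobe shapes, and parameter values are assumptions of this sketch, not Prokhorov's actual 3C BRDF parameterization.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf_3c(wi, wo, kd=0.3, ks=0.4, kg=0.3, sig_s=0.02, sig_g=0.15):
    """Illustrative three-component BRDF: Lambertian diffuse term plus
    quasi-specular (narrow) and glossy (broad) Gaussian lobes centered on
    the mirror direction. Weights kd/ks/kg and widths sig_s/sig_g are
    made-up parameters, not fitted values."""
    n = np.array([0.0, 0.0, 1.0])
    mirror = 2.0 * np.dot(wi, n) * n - wi            # mirror of incident dir
    ang = np.arccos(np.clip(np.dot(normalize(mirror), normalize(wo)), -1.0, 1.0))
    lobe = lambda sig: np.exp(-0.5 * (ang / sig) ** 2) / (2.0 * np.pi * sig ** 2)
    return kd / np.pi + ks * lobe(sig_s) + kg * lobe(sig_g)

wi = normalize(np.array([0.5, 0.0, 0.8]))            # toward the source
wo = normalize(np.array([-0.5, 0.0, 0.8]))           # near the mirror direction
print("BRDF value:", brdf_3c(wi, wo))
```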
Local Hamiltonian Monte Carlo study of the massive schwinger model, the decoupling of heavy flavours
NASA Astrophysics Data System (ADS)
Ranft, J.
1983-12-01
The massive Schwinger model with two flavours is studied using the local Hamiltonian lattice Monte Carlo method. Chiral symmetry breaking is studied using the fermion condensate as the order parameter. For a small ratio of the two fermion masses, degeneracy of the two flavours is found. For a large ratio of the masses, the heavy flavour decouples and the light fermion behaves as in the one-flavour Schwinger model.
Monte Carlo renormalization-group study of the Baxter-Wu model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novotny, M.A.; Landau, D.P.; Swendsen, R.H.
1982-07-01
The effectiveness of a Monte Carlo renormalization-group method is studied by applying it to the Baxter-Wu model (Ising spins on a triangular lattice with three-spin interactions). The calculations yield three relevant eigenvalues in good agreement with exact or conjectured results. We demonstrate that the method is capable of distinguishing between models expected to be in the same universality class when one of them (four-state Potts) exhibits logarithmic corrections to the usual power-law singularities and the other (Baxter-Wu) does not.
NASA Technical Reports Server (NTRS)
Gayda, J.; Srolovitz, D. J.
1989-01-01
This paper presents a specialized microstructural lattice model, MCFET (Monte Carlo finite element technique), which simulates microstructural evolution in materials in which strain energy has an important role in determining morphology. The model is capable of accounting for externally applied stress, surface tension, misfit, elastic inhomogeneity, elastic anisotropy, and arbitrary temperatures. The MCFET analysis was found to compare well with the results of analytical calculations of the equilibrium morphologies of isolated particles in an infinite matrix.
Density matrix Monte Carlo modeling of quantum cascade lasers
NASA Astrophysics Data System (ADS)
Jirauschek, Christian
2017-10-01
By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also a specialty.
Cluster kinetics model for mixtures of glassformers
NASA Astrophysics Data System (ADS)
Brenskelle, Lisa A.; McCoy, Benjamin J.
2007-10-01
For glassformers we propose a binary mixture relation for parameters in a cluster kinetics model previously shown to represent pure compound data for viscosity and dielectric relaxation as functions of either temperature or pressure. The model parameters are based on activation energies and activation volumes for cluster association-dissociation processes. With the mixture parameters, we calculated dielectric relaxation times and compared the results to experimental values for binary mixtures. Mixtures of sorbitol and glycerol (seven compositions), sorbitol and xylitol (three compositions), and polychloroepihydrin and polyvinylmethylether (three compositions) were studied.
A Lattice Kinetic Monte Carlo Solver for First-Principles Microkinetic Trend Studies
Hoffmann, Max J.; Bligaard, Thomas
2018-01-22
Here, mean-field microkinetic models in combination with Brønsted–Evans–Polanyi-like scaling relations have proven highly successful in identifying catalyst materials with good or promising reactivity and selectivity. Analysis of the microkinetic model by means of lattice kinetic Monte Carlo promises a faithful description of a range of atomistic features involving short-range ordering of species in the vicinity of an active site. In this paper, we use the "fruit fly" example reaction of CO oxidation on fcc(111) transition and coinage metals to motivate and develop a lattice kinetic Monte Carlo solver suitable for the numerically challenging case of vastly disparate rate constants. As a result, we show that for the case of infinitely fast diffusion and absence of adsorbate-adsorbate interaction it is, in fact, possible to match the prediction of the mean-field-theory method and the lattice kinetic Monte Carlo method. As a corollary, we conclude that lattice kinetic Monte Carlo simulations of surface chemical reactions are most likely to provide additional insight over mean-field simulations if diffusion limitations or adsorbate–adsorbate interactions have a significant influence on the mixing of the adsorbates.
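The standard answer to vastly disparate rate constants is rejection-free (BKL/Gillespie-style) event selection: draw the next event in proportion to its rate and advance the clock by an exponential waiting time, so fast diffusion events never cause rejected trials. A minimal sketch with a hypothetical rate list:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rejection-free (BKL/Gillespie-style) event selection: with vastly
# disparate rates, events are drawn in proportion to their rate constants
# and time advances by an exponential waiting time, so fast diffusion
# never produces rejected trials. The rate list is hypothetical.
rates = np.array([1e9, 1e9, 5e2, 1.0])   # e.g. fast hops vs slow reactions
t, counts = 0.0, np.zeros(len(rates), dtype=int)
for _ in range(100_000):
    total = rates.sum()
    i = rng.choice(len(rates), p=rates / total)   # pick the next event
    t += rng.exponential(1.0 / total)             # advance the clock
    counts[i] += 1
print("simulated time:", t, "event counts:", counts)
```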
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1-sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2-sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
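The mechanics of such a Monte Carlo uncertainty propagation are simple to sketch: draw each rate constant from a lognormal distribution with its 1-sigma uncertainty factor, run the model, and read off percentiles of the output distribution. The model function and factors below are stand-ins, not the actual stratospheric chemistry:

```python
import numpy as np

rng = np.random.default_rng(0)

def ozone_perturbation(k):
    """Stand-in for the stratospheric model; any smooth function of the
    rate constants serves to demonstrate the propagation machinery."""
    return k[0] * k[1] / k[2]

nominal = np.array([1.0, 2.0, 0.5])        # hypothetical rate constants
sigma_factor = np.array([1.3, 1.5, 2.0])   # 1-sigma multiplicative factors

# Lognormal sampling: median = nominal, multiplicative 1-sigma = factor.
samples = np.array([
    ozone_perturbation(nominal * sigma_factor ** rng.normal(size=3))
    for _ in range(2000)                   # 2000 cases, as in the paper
])
lo, med, hi = np.percentile(samples, [15.9, 50.0, 84.1])
print("1-sigma uncertainty factors: high", hi / med, "low", med / lo)
```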
Su, Peiran; Eri, Qitai; Wang, Qiang
2014-04-10
Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical roughness BRDF model stemmed from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflectance rays. The numerical simulations showed that the optical roughness BRDF model can calculate the wavelength effect on emissivity and simulate the real metal emissivity variance with incidence angles.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
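A one-dimensional caricature of this workflow, with a Chebyshev polynomial standing in for the sparse grid interpolant and a Sobol sequence supplying the quasi-Monte Carlo sample; the forward model, likelihood, and parameter bounds are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

def forward(theta):                        # stand-in "expensive" model
    return np.sin(3.0 * theta) + theta ** 2

# Step 1: cheap polynomial surrogate fitted on a small set of nodes
# (a Chebyshev fit stands in for the sparse grid interpolant).
nodes = np.polynomial.chebyshev.chebpts1(9)
surrogate = np.polynomial.Chebyshev.fit(nodes, forward(nodes), deg=8)

# Step 2: evaluate the surrogate posterior on a quasi-Monte Carlo sample.
y_obs, sigma = 0.8, 0.1
theta = qmc.scale(qmc.Sobol(d=1, seed=0).random(2 ** 12), -1.0, 1.0).ravel()
log_post = -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2   # flat prior
w = np.exp(log_post - log_post.max())                         # posterior weights
print("surrogate-QMC posterior mean:", np.sum(w * theta) / np.sum(w))
```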
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, the similarity measure, and domain adaptation. Two Gaussian mixture model based algorithms are proposed that effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering, with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF suggests domain adaptation, i.e., changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images and is shown to be much better than that of existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.
Monte Carlo calculation of "skyshine" neutron dose from ALS (Advanced Light Source)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moin-Vasiri, M.
1990-06-01
This report discusses the following topics on "skyshine" neutron dose from ALS: Sources of radiation; ALS modeling for skyshine calculations; MORSE Monte-Carlo; Implementation of MORSE; Results of skyshine calculations from storage ring; and Comparison of MORSE shielding calculations.
Nonlinear Structured Growth Mixture Models in Mplus and OpenMx
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam; Estabrook, Ryne
2010-01-01
Growth mixture models (GMMs; B. O. Muthen & Muthen, 2000; B. O. Muthen & Shedden, 1999) are a combination of latent curve models (LCMs) and finite mixture models to examine the existence of latent classes that follow distinct developmental patterns. GMMs are often fit with linear, latent basis, multiphase, or polynomial change models…
The Potential of Growth Mixture Modelling
ERIC Educational Resources Information Center
Muthen, Bengt
2006-01-01
The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction; the quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
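For reference, the TEW estimate used above interpolates the scatter under the photopeak from two narrow flanking windows, trapezoid-style. A minimal sketch (all counts and window widths are illustrative):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window estimate of scatter counts in the photopeak
    window: trapezoidal interpolation between two narrow flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Hypothetical window counts and widths (keV); numbers are illustrative.
peak_counts = 10_000
scatter = tew_scatter(c_lower=1200, c_upper=900,
                      w_lower=6.0, w_upper=6.0, w_peak=20.0)
primary = max(peak_counts - scatter, 0.0)
print("estimated scatter:", scatter, "primary counts:", primary)
```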
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form; this is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach to higher-dimensional complex problems. The HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps; this is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
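For orientation, the sketch below implements plain HMC with leapfrog integration on a toy Gaussian posterior; SHMC modifies this scheme by sampling from a shadow Hamiltonian (better conserved by the leapfrog integrator) and reweighting, which is omitted here. The step size, trajectory length, and target are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(q):
    """Gradient of the negative log-posterior; a unit Gaussian toy target."""
    return q

def hmc_step(q, eps=0.2, n_leap=10):
    p = rng.normal(size=q.shape)
    H0 = 0.5 * p @ p + 0.5 * q @ q
    qn, pn = q.copy(), p.copy()
    pn -= 0.5 * eps * grad_U(qn)          # leapfrog: half momentum step
    for _ in range(n_leap - 1):
        qn += eps * pn
        pn -= eps * grad_U(qn)
    qn += eps * pn
    pn -= 0.5 * eps * grad_U(qn)          # final half momentum step
    H1 = 0.5 * pn @ pn + 0.5 * qn @ qn
    accept = rng.random() < np.exp(H0 - H1)
    return (qn if accept else q), accept

q, n_acc, chain = np.zeros(2), 0, []
for _ in range(5_000):
    q, acc = hmc_step(q)
    n_acc += acc
    chain.append(q.copy())
print("acceptance rate:", n_acc / 5_000, "posterior mean:", np.mean(chain, axis=0))
```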
MCNP (Monte Carlo Neutron Photon) capabilities for nuclear well logging calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forster, R.A.; Little, R.C.; Briesmeister, J.F.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. The general-purpose continuous-energy Monte Carlo code MCNP (Monte Carlo Neutron Photon), part of the LARTCS, provides a computational predictive capability for many applications of interest to the nuclear well logging community. The generalized three-dimensional geometry of MCNP is well suited for borehole-tool models. SABRINA, another component of the LARTCS, is a graphics code that can be used to interactively create a complex MCNP geometry. Users can define many source and tally characteristics with standard MCNP features. The time-dependent capability of the code is essential when modeling pulsed sources. Problems with neutrons, photons, and electrons, as either single or coupled particles, can be calculated with MCNP. The physics of neutron and photon transport and interactions is modeled in detail using the latest available cross-section data. A rich collection of variance reduction features can greatly increase the efficiency of a calculation. MCNP is written in FORTRAN 77 and has been run on a variety of computer systems from scientific workstations to supercomputers. The next production version of MCNP will include features such as continuous-energy electron transport and a multitasking option. Areas of ongoing research of interest to the well logging community include angle biasing, adaptive Monte Carlo, improved discrete ordinates capabilities, and discrete ordinates/Monte Carlo hybrid development. Los Alamos has requested approval by the Department of Energy to create a Radiation Transport Computational Facility under their User Facility Program to increase external interactions with industry, universities, and other government organizations.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
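The remapping between the two model classes described above is simple enough to verify numerically. The sketch below (a two-component Poisson mixture with arbitrary illustrative weights and means) checks that the zero-truncated mixture coincides with the mixture of zero-truncated densities once the mixing weights are rescaled by each component's non-zero probability mass:

    import numpy as np
    from scipy.stats import poisson

    # two-component Poisson mixture: weights w, means lam (illustrative values)
    w = np.array([0.3, 0.7])
    lam = np.array([1.0, 4.0])
    x = np.arange(1, 20)

    # zero-truncated mixture: g(x) = sum_j w_j f_j(x) / (1 - sum_j w_j f_j(0))
    f = poisson.pmf(x[:, None], lam)
    f0 = poisson.pmf(0, lam)
    trunc_mix = (f @ w) / (1.0 - w @ f0)

    # mixture of zero-truncated densities with remapped weights v_j ∝ w_j (1 - f_j(0))
    v = w * (1.0 - f0)
    v /= v.sum()
    mix_trunc = (f / (1.0 - f0)) @ v

    assert np.allclose(trunc_mix, mix_trunc)   # the two models coincide

This is the explicitly given mapping the abstract refers to: work in the theoretically convenient class, then transform the mixing distribution back.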
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept
NASA Technical Reports Server (NTRS)
Thipphavong, David
2010-01-01
Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
NASA Astrophysics Data System (ADS)
Yoshino, Takashi; Laumonier, Mickael; McIsaac, Elizabeth; Katsura, Tomoo
2010-07-01
Electrical impedance measurements were performed on two types of partially molten samples containing basaltic and carbonatitic melts in a Kawai-type multi-anvil apparatus in order to investigate melt fraction-conductivity relationships and melt distribution in partially molten mantle peridotite under high pressure. The silicate samples were composed of San Carlos olivine with various amounts of mid-ocean ridge basalt (MORB), and the carbonate samples were a mixture of San Carlos olivine with various amounts of carbonatite. High-pressure experiments on the silicate and carbonate systems were performed up to 1600 K at 1.5 GPa and up to at least 1650 K at 3 GPa, respectively. The sample conductivity increased with increasing melt fraction. Carbonatite-bearing samples show approximately one order of magnitude higher conductivity than basalt-bearing ones at similar melt fractions. A linear relationship between log conductivity (σ_bulk) and log melt fraction (ϕ) is well expressed by Archie's law (Archie, 1942), σ_bulk/σ_melt = Cϕ^n, with parameters C = 0.68 and 0.97, and n = 0.87 and 1.13, for the silicate and carbonate systems, respectively. Comparison of the electrical conductivity data with theoretical predictions for melt distribution indicates that the most preferable melt geometry is the model in which the grain boundary is completely wetted by melt. The gradual change of conductivity with melt fraction suggests no permeability jump due to melt percolation at a certain melt fraction. The melt fraction of the partially molten region in the upper mantle can be estimated to be 1-3% for basaltic melt and ~0.3% for carbonatite melt.
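Given the fitted Archie's-law parameters quoted above, bulk conductivity can be converted to melt fraction and back in a few lines. A minimal sketch (the melt conductivity passed in is an input assumption, not a result of the study):

    import numpy as np

    # Archie's law fit from the study: sigma_bulk / sigma_melt = C * phi**n
    PARAMS = {"basalt": (0.68, 0.87), "carbonatite": (0.97, 1.13)}

    def bulk_conductivity(phi, sigma_melt, system="basalt"):
        C, n = PARAMS[system]
        return sigma_melt * C * phi**n

    def melt_fraction(sigma_bulk, sigma_melt, system="basalt"):
        # invert Archie's law: phi = (sigma_bulk / (C * sigma_melt))**(1/n)
        C, n = PARAMS[system]
        return (sigma_bulk / (C * sigma_melt)) ** (1.0 / n)

    # hypothetical example: observed bulk conductivity of 0.05 S/m with an
    # assumed melt conductivity of 1.0 S/m
    phi_est = melt_fraction(0.05, 1.0, system="basalt")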
New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications
NASA Astrophysics Data System (ADS)
Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris
2016-05-01
Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such a voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold standard methodology. Given that YVAN is simply a new primitive object, it does not require any modifications of the underlying MC navigation code. The new proposed primitive was assessed through a first simple MCS. Results from the YVAN primitive were compared against an MCS using a pure analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and differences in computational time improvements obtained with YVAN depend on the relative proportion of the analytical and voxelized structures used in the simulation as well as the size and number of triangles used in the description of the analytical object meshes.
Kinetic Monte Carlo simulations of fluorine and vacancies concentration at the CeO2(111) surface
NASA Astrophysics Data System (ADS)
Mattiello, S.; Kolling, S.; Heiliger, C.
2017-09-01
Recently, a new identification of the experimental depressions in scanning tunnelling microscopy images of the CeO2(111) surface as fluorine impurities has been proposed in Kullgren et al (2014 Phys. Rev. Lett. 112 156102). In particular, the high immobility of the depressions seems to be in contradiction with the low diffusion barrier for oxygen vacancies. Consequently, the oxygen vacancy concentration has to vanish. The first aim of this paper is to confirm dynamically this recent interpretation of the experimental findings. For this purpose, we investigate the competition between fluorine and oxygen vacancies using two-dimensional kinetic Monte Carlo (kMC) simulations, compared to an appropriate Langmuir model. We calculate the concentrations of vacancies and of fluorine on the CeO2(111) surface under UHV conditions as a function of the fluorine-oxygen mixture in the gas phase as well as of the binding energies of fluorine and oxygen. We found that at a temperature of T = 573 K, at which the experimental measurements were conducted, vacancies cannot exist. This confirms the possibility of fluorine impurities proposed in Kullgren et al (2014 Phys. Rev. Lett. 112 156102). The second aim of the present paper is to perform a first dynamical estimation of the fluorine binding energy E_Fl that allows one to describe the experimental data in Pieper et al (2012 Phys. Chem. Chem. Phys. 14 15361). Using 2D kMC simulations, we found E_Fl ∈ [-5.53, -5.27] eV, which can be used for comparison to density functional theory calculations in further work.
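The core of any rejection-free kMC scheme of this kind is the event-selection step: choose one of the competing processes with probability proportional to its rate, then advance time by an exponential waiting time. A generic sketch (the event list and rate values below are placeholders, not those of the CeO2 study):

    import numpy as np

    def kmc_step(rates, rng):
        """One rejection-free kinetic Monte Carlo step (BKL/Gillespie).
        rates: 1D array of rates for every possible event on the lattice."""
        total = rates.sum()
        # pick an event with probability proportional to its rate
        event = np.searchsorted(np.cumsum(rates), rng.random() * total)
        # advance the clock by an exponentially distributed waiting time
        dt = -np.log(rng.random()) / total
        return event, dt

    rng = np.random.default_rng(1)
    # hypothetical rates: F adsorption, F desorption, O-vacancy creation, annihilation
    rates = np.array([1e3, 1e-2, 5e2, 8e2])
    event, dt = kmc_step(rates, rng)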
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders.
Viveros-Méndez, P X; Gil-Villegas, Alejandro; Aranda-Espinoza, S
2014-07-28
In this article we present an NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles, and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e^2/Dσ), where m is the mass per particle, e is the electron's charge, and g is the gravitational acceleration. A semi-infinite simulation cell was used with dimensions Lx ≈ Ly and Lz = 5Lx, where Lx, Ly, and Lz are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as a tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
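The competitive-inhibition assumption used to couple the constituent sub-models has a standard Michaelis-Menten form, in which every co-exposed hydrocarbon inflates the apparent Km of the others. A minimal sketch of that rate law (the concentrations and kinetic constants below are hypothetical placeholders, not the calibrated model values):

    import numpy as np

    def competitive_metabolism(C, Vmax, Km):
        """Hepatic metabolism rates for a mixture under competitive inhibition:
        v_i = Vmax_i * C_i / (Km_i * (1 + sum_{j != i} C_j / Km_j) + C_i)."""
        C, Vmax, Km = map(np.asarray, (C, Vmax, Km))
        ratio_sum = (C / Km).sum()
        inhibition = 1.0 + ratio_sum - C / Km     # exclude each compound's own term
        return Vmax * C / (Km * inhibition + C)

    # hypothetical liver-blood concentrations and kinetic constants for three HCs
    v = competitive_metabolism(C=[0.5, 1.2, 0.3],
                               Vmax=[6.0, 4.5, 8.0],
                               Km=[0.4, 1.0, 0.7])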
Continuum percolation of polydisperse rods in quadrupole fields: Theory and simulations.
Finner, Shari P; Kotsev, Mihail I; Miller, Mark A; van der Schoot, Paul
2018-01-21
We investigate percolation in mixtures of nanorods in the presence of external fields that align or disalign the particles with the field axis. Such conditions are found in the formulation and processing of nanocomposites, where the field may be electric, magnetic, or due to elongational flow. Our focus is on the effect of length polydispersity, which-in the absence of a field-is known to produce a percolation threshold that scales with the inverse weight average of the particle length. Using a model of non-interacting spherocylinders in conjunction with connectedness percolation theory, we show that a quadrupolar field always increases the percolation threshold and that the universal scaling with the inverse weight average no longer holds if the field couples to the particle length. Instead, the percolation threshold becomes a function of higher moments of the length distribution, where the order of the relevant moments crucially depends on the strength and type of field applied. The theoretical predictions compare well with the results of our Monte Carlo simulations, which eliminate finite size effects by exploiting the fact that the universal scaling of the wrapping probability function holds even in anisotropic systems. Theory and simulation demonstrate that the percolation threshold of a polydisperse mixture can be lower than that of the individual components, confirming recent work based on a mapping onto a Bethe lattice as well as earlier computer simulations involving dipole fields. Our work shows how the formulation of nanocomposites may be used to compensate for the adverse effects of aligning fields that are inevitable under practical manufacturing conditions.
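The zero-field scaling referenced above is easy to make concrete: the percolation threshold varies inversely with the weight-averaged rod length, so adding a small number fraction of long rods can lower the threshold of a short-rod system. A small sketch under those assumptions (lengths, fractions, and the omitted prefactor are illustrative only):

    import numpy as np

    def weight_average(lengths, number_fractions):
        """Weight-averaged rod length <L>_w = sum(n_i L_i^2) / sum(n_i L_i)."""
        L = np.asarray(lengths, dtype=float)
        n = np.asarray(number_fractions, dtype=float)
        return (n * L**2).sum() / (n * L).sum()

    # bidisperse example: a 10% number fraction of long rods dominates <L>_w
    Lw_mix = weight_average([100.0, 1000.0], [0.9, 0.1])
    Lw_short = weight_average([100.0], [1.0])
    # zero-field threshold scales as phi_c ∝ 1 / <L>_w (prefactor omitted)
    print(1.0 / Lw_mix, 1.0 / Lw_short)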
Lee, Il-Hyung; Saha, Suvrajit; Polley, Anirban; Huang, Hector; Mayor, Satyajit; Rao, Madan; Groves, Jay T
2015-03-26
Lipid/cholesterol mixtures derived from cell membranes as well as their synthetic reconstitutions exhibit well-defined miscibility phase transitions and critical phenomena near physiological temperatures. This suggests that lipid/cholesterol-mediated phase separation plays a role in the organization of live cell membranes. However, macroscopic lipid-phase separation is not generally observed in cell membranes, and the degree to which properties of isolated lipid mixtures are preserved in the cell membrane remain unknown. A fundamental property of phase transitions is that the variation of tagged particle diffusion with temperature exhibits an abrupt change as the system passes through the transition, even when the two phases are distributed in a nanometer-scale emulsion. We support this using a variety of Monte Carlo and atomistic simulations on model lipid membrane systems. However, temperature-dependent fluorescence correlation spectroscopy of labeled lipids and membrane-anchored proteins in live cell membranes shows a consistently smooth increase in the diffusion coefficient as a function of temperature. We find no evidence of a discrete miscibility phase transition throughout a wide range of temperatures: 14-37 °C. This contrasts the behavior of giant plasma membrane vesicles (GPMVs) blebbed from the same cells, which do exhibit phase transitions and macroscopic phase separation. Fluorescence lifetime analysis of a DiI probe in both cases reveals a significant environmental difference between the live cell and the GPMV. Taken together, these data suggest the live cell membrane may avoid the miscibility phase transition inherent to its lipid constituents by actively regulating physical parameters, such as tension, in the membrane.
Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body
NASA Astrophysics Data System (ADS)
Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer
2016-12-01
We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes `frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
Reduced detonation kinetics and detonation structure in one- and multi-fuel gaseous mixtures
NASA Astrophysics Data System (ADS)
Fomin, P. A.; Trotsyuk, A. V.; Vasil'ev, A. A.
2017-10-01
Two-step approximate models of the chemical kinetics of detonation combustion of (i) a one-fuel gaseous mixture (CH4/air) and (ii) multi-fuel gaseous mixtures (CH4/H2/air and CH4/CO/air) are developed; the models for multi-fuel mixtures are proposed here for the first time. Owing to their simplicity and high accuracy, the models can be used in multi-dimensional numerical calculations of detonation waves in the corresponding gaseous mixtures. The models are consistent with the second law of thermodynamics and Le Chatelier's principle, and their constants have a clear physical meaning. Advantages of the kinetic model for detonation combustion of methane have been demonstrated via numerical calculations of the two-dimensional structure of the detonation wave in stoichiometric and fuel-rich methane-air mixtures and a stoichiometric methane-oxygen mixture. The dominant size of the detonation cell, determined in the calculations, is in good agreement with all known experimental data.
Lu, Xiaoqing; Jin, Dongliang; Wei, Shuxian; Zhang, Mingmin; Zhu, Qing; Shi, Xiaofan; Deng, Zhigang; Guo, Wenyue; Shen, Wenzhong
2015-01-21
The effect of edge-functionalization on the competitive adsorption of a binary CO2-CH4 mixture in nanoporous carbons (NPCs) has been investigated for the first time by combining density functional theory (DFT) and grand canonical Monte Carlo (GCMC) simulation. Our results show that edge-functionalization has a more positive effect on the single-component adsorption of CO2 than CH4, therefore significantly enhancing the selectivity of CO2 over CH4, in the order of NH2-NPC > COOH-NPC > OH-NPC > H-NPC > NPC at low pressure. The enhanced adsorption originates essentially from the effects of (1) the conducive environment with a large pore size and an effective accessible surface area, (2) the high electronegativity/electropositivity, (3) the strong adsorption energy, and (4) the large electrostatic contribution, due to the inductive effect/direct interaction of the embedded edge-functionalized groups. The larger difference from these effects results in the higher competitive adsorption advantage of CO2 in the binary CO2-CH4 mixture. Temperature has a negative effect on the gas adsorption, but no obvious influence on the electrostatic contribution on selectivity. With the increase of pressure, the selectivity of CO2 over CH4 first decreases sharply and subsequently flattens out to a constant value. This work highlights the potential of edge-functionalized NPCs in competitive adsorption, capture, and separation for the binary CO2-CH4 mixture, and provides an effective and superior alternative strategy in the design and screening of adsorbent materials for carbon capture and storage.
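For reference, the CO2-over-CH4 selectivity discussed here is conventionally computed from GCMC loadings and bulk gas-phase compositions. A one-line sketch with hypothetical numbers (not values from the study):

    def adsorption_selectivity(q_co2, q_ch4, y_co2, y_ch4):
        """S = (q_CO2 / q_CH4) / (y_CO2 / y_CH4): adsorbed-phase loading ratio
        normalized by the bulk gas-phase mole-fraction ratio."""
        return (q_co2 / q_ch4) * (y_ch4 / y_co2)

    # hypothetical GCMC loadings (mmol/g) for an equimolar bulk mixture
    S = adsorption_selectivity(q_co2=2.4, q_ch4=0.6, y_co2=0.5, y_ch4=0.5)  # S = 4.0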
ERIC Educational Resources Information Center
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk
2008-01-01
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sansourekidou, P; Allen, C
2015-06-15
Purpose: To evaluate the Raystation v4.51 Electron Monte Carlo algorithm for Varian Trilogy, IX and 2100 series linear accelerators and commission it for clinical use. Methods: Seventy-two water and forty air scans were acquired with a water tank in the form of profiles and depth doses, as requested by the vendor. Data were imported into the Rayphysics beam modeling module. The energy spectrum was modeled using seven parameters, contamination photons using five parameters, and the source phase space using six parameters. Calculations were performed in clinical version 4.51, and percent depth dose curves and profiles were extracted to be compared to water tank measurements. Sensitivity tests were performed for all parameters. Grid size and particle histories were evaluated per energy for statistical uncertainty performance. Results: Model accuracy for air profiles is poor in the shoulder and penumbra regions. However, model accuracy for water scans is acceptable: all energies and cones are within 2%/2 mm for 90% of the points evaluated. Source phase space parameters have a cumulative effect. To achieve distributions with a satisfactory smoothness level, a 0.1 cm grid and 3,000,000 particle histories were used for commissioning calculations. Calculation time was approximately 3 hours per energy. Conclusion: Raystation electron Monte Carlo is acceptable for clinical use for the Varian accelerators listed. Results are inferior to Elekta Electron Monte Carlo modeling. Known issues were reported to Raysearch and will be resolved in upcoming releases. Auto-modeling is limited to open cone depth dose curves and needs expansion.
NASA Astrophysics Data System (ADS)
da Cunha, Antonio R.; Duarte, Evandro L.; Lamy, M. Teresa; Coutinho, Kaline
2014-08-01
We combined theoretical and experimental studies to elucidate the important deprotonation process of Emodin in water. We used UV/Visible spectrophotometric titration curves to obtain its pKa values, pKa1 = 8.0 ± 0.1 and pKa2 = 10.9 ± 0.2. Additionally, we obtained the pKa values of Emodin in a water-methanol mixture (1:3 v/v). We give a new interpretation of the experimental data, obtaining apparent pKa1 = 6.2 ± 0.1, pKa2 = 8.3 ± 0.1 and pKa3 > 12.7. Performing quantum mechanics calculations for all possible deprotonation sites and tautomeric isomers of Emodin in vacuum and in water, we identified the sites of the first and second deprotonations. We calculated the standard deprotonation free energy of Emodin in water and the pKa1 using an explicit model of the solvent, with Free Energy Perturbation theory in Monte Carlo simulations, obtaining ΔG_aq = 12.1 ± 1.4 kcal/mol and pKa1 = 8.7 ± 0.9. With the polarizable continuum model for the solvent, we obtained ΔG_aq = 11.6 ± 1.0 kcal/mol and pKa1 = 8.3 ± 0.7. Both solvent models gave theoretical results in very good agreement with the experimental values.
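The link between the computed free energies and the quoted pKa values is the standard relation pKa = ΔG_aq/(RT ln 10). A quick check (assuming T = 298.15 K, which reproduces the reported numbers to within their quoted uncertainties):

    import math

    def pka_from_dg(dg_kcal_per_mol, T=298.15):
        R = 1.987204e-3                  # gas constant, kcal mol^-1 K^-1
        return dg_kcal_per_mol / (R * T * math.log(10))

    print(pka_from_dg(12.1))   # ≈ 8.9, cf. reported pKa1 = 8.7 ± 0.9 (FEP/MC)
    print(pka_from_dg(11.6))   # ≈ 8.5, cf. reported pKa1 = 8.3 ± 0.7 (PCM)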
Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases
NASA Astrophysics Data System (ADS)
Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.
2018-01-01
We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
A Monte Carlo simulation study of associated liquid crystals
NASA Astrophysics Data System (ADS)
Berardi, R.; Fehervari, M.; Zannoni, C.
We have performed a Monte Carlo simulation study of a system of ellipsoidal particles with donor-acceptor sites modelling complementary hydrogen-bonding groups in real molecules. We have considered elongated Gay-Berne particles with terminal interaction sites allowing particles to associate and form dimers. The changes in the phase transitions and in the molecular organization, and the interplay between orientational ordering and dimer formation, are discussed. Particle flip and dimer moves have been used to increase the convergence rate of the Monte Carlo (MC) Markov chain.
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
Data decomposition of Monte Carlo particle transport simulations via tally servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romano, Paul K.; Siegel, Andrew R.; Forget, Benoit
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
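The tracker/server split described above can be sketched in a few lines of MPI code. The following is a toy illustration of the communication pattern only (random numbers stand in for real tally scores; it is not OpenMC's implementation), run with e.g. mpirun -n 4:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    N_SERVERS = 1                      # last rank acts as the tally server
    N_BINS = 64
    TALLY_TAG, DONE_TAG = 1, 2

    if rank < size - N_SERVERS:        # tracking processor: simulate and ship scores
        rng = np.random.default_rng(rank)
        for _ in range(100):           # batches of particle histories
            scores = rng.random(N_BINS)        # stand-in for tracklength tallies
            comm.Send(scores, dest=size - 1, tag=TALLY_TAG)
        comm.Send(np.empty(0), dest=size - 1, tag=DONE_TAG)
    else:                              # tally server: receive and accumulate
        tally = np.zeros(N_BINS)
        buf = np.empty(N_BINS)
        status = MPI.Status()
        done = 0
        while done < size - N_SERVERS:
            comm.Probe(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == DONE_TAG:
                comm.Recv(np.empty(0), source=status.Get_source(), tag=DONE_TAG)
                done += 1
            else:
                comm.Recv(buf, source=status.Get_source(), tag=TALLY_TAG)
                tally += buf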
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
analysis. Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. ... spatial resolution permits different materials present in the area covered by a single pixel. The linear mixture model says that a pixel reflectance ... in r. In the linear mixture model, r is considered as the linear mixture of m1, m2, …, mP as r = Mα + n (1), where n is included to account for
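Equation (1) above is the standard linear mixture model: the pixel reflectance r is a weighted combination of endmember signatures (the columns of M) plus noise n. A synthetic-data sketch with an unconstrained least-squares abundance estimate (the constrained factorizations the report investigates would additionally enforce nonnegativity and sum-to-one on α):

    import numpy as np

    rng = np.random.default_rng(0)
    bands, P = 50, 3                      # spectral bands, endmembers
    M = rng.random((bands, P))            # endmember signature matrix m_1..m_P
    alpha_true = np.array([0.6, 0.3, 0.1])
    r = M @ alpha_true + 0.01 * rng.standard_normal(bands)   # r = M a + n

    # unconstrained least-squares abundance estimate
    alpha_hat, *_ = np.linalg.lstsq(M, r, rcond=None)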
Microstructure and hydrogen bonding in water-acetonitrile mixtures.
Mountain, Raymond D
2010-12-16
The connection of hydrogen bonding between water and acetonitrile in determining the microheterogeneity of the liquid mixture is examined using NPT molecular dynamics simulations. Mixtures for six rigid, three-site models for acetonitrile and one water model (SPC/E) were simulated to determine the amount of water-acetonitrile hydrogen bonding. Only one of the six acetonitrile models (TraPPE-UA) was able to reproduce both the liquid density and the experimental estimates of hydrogen bonding derived from Raman scattering of the CN stretch band or from NMR quadrupole relaxation measurements. A simple modification of the acetonitrile model parameters for the models that provided poor estimates produced hydrogen-bonding results consistent with experiments for two of the models. Of these, only one of the modified models also accurately determined the density of the mixtures. The self-diffusion coefficient of liquid acetonitrile provided a final winnowing of the modified model and the successful, unmodified model. The unmodified model is provisionally recommended for simulations of water-acetonitrile mixtures.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
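The sampling-based baseline that the closed-form approximation is compared against is straightforward to set up: draw basic-event probabilities from lognormals, push them through the gate logic, and read off percentiles. A sketch for a hypothetical two-gate tree (the structure, medians, and error factors are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    def lognormal_probs(median, error_factor, n):
        """Sample basic-event probabilities; error factor EF = p95/p50."""
        sigma = np.log(error_factor) / 1.645
        return median * np.exp(sigma * rng.standard_normal(n))

    # hypothetical tree: TOP = (B1 OR B2) AND B3
    b1 = lognormal_probs(1e-3, 3.0, N)
    b2 = lognormal_probs(5e-4, 5.0, N)
    b3 = lognormal_probs(2e-2, 3.0, N)
    top = (b1 + b2 - b1 * b2) * b3        # OR gate, then AND gate (independent events)

    print(np.percentile(top, [5, 50, 95]))  # percentiles of the top-event uncertainty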
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
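As a toy version of the Monte Carlo procedure described here, the sketch below re-draws permuted-block randomization sequences and recomputes a mean-difference statistic to obtain a design-based p-value (outcomes, block size, and effect size are invented; a real application would use model residuals as in the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def permuted_blocks(n, block_size, rng):
        """Treatment sequence from permuted blocks of balanced 0/1 assignments."""
        blocks = [rng.permutation([0, 1] * (block_size // 2))
                  for _ in range(n // block_size)]
        return np.concatenate(blocks)

    def mc_randomization_pvalue(y, t_obs, n_mc=10_000, block_size=4):
        """Monte Carlo p-value: re-randomize, recompute the mean difference."""
        n = len(y)
        stat = lambda t: y[t == 1].mean() - y[t == 0].mean()
        observed = abs(stat(t_obs))
        draws = np.array([abs(stat(permuted_blocks(n, block_size, rng)))
                          for _ in range(n_mc)])
        return (draws >= observed).mean()

    # hypothetical trial: 40 subjects, block size 4, treatment effect 0.5
    t_obs = permuted_blocks(40, 4, rng)
    y = rng.standard_normal(40) + 0.5 * t_obs
    p = mc_randomization_pvalue(y, t_obs)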
Logarithmic speed-up of relaxation in A -B annihilation with exclusion
NASA Astrophysics Data System (ADS)
Dandekar, Rahul
2018-04-01
We show that the decay of the density of active particles in the reaction A + B → 0 in one dimension, with exclusion interaction, results in logarithmic corrections to the expected power-law decay when the starting initial condition (i.c.) is periodic. It is well known that the late-time density of surviving particles goes as t^(-1/4) with random initial conditions, and as t^(-1/2) with alternating initial conditions (ABABAB⋯). We show that the decay for periodic i.c.'s made of longer blocks (A^n B^n A^n B^n ⋯) does not show a pure power-law decay when n is even. By means of first-passage Monte Carlo simulations, and a mapping to a q-state coarsening model which can be solved in the independent interval approximation (IIA), we show that the late-time decay of the density of surviving particles goes as t^(-1/2)[ln(t)]^(-1) for n even, but as t^(-1/2) when n is odd. We relate this to kinetic symmetry breaking in the Glauber Ising model. We also see a very slow crossover from a t^(-1/2)[ln(t)]^(-1) regime to eventual t^(-1/2) behavior for i.c.'s made of mixtures of odd- and even-length blocks.
Itinerant electrons in the Coulomb phase
NASA Astrophysics Data System (ADS)
Jaubert, L. D. C.; Piatecki, Swann; Haque, Masudul; Moessner, R.
2012-02-01
We study the interplay between magnetic frustration and itinerant electrons. For example, how does the coupling to mobile charges modify the properties of a spin liquid, and does the underlying frustration favor insulating or conducting states? Supported by Monte Carlo simulations, we aim in particular to provide an analytical picture of the mechanisms involved. The models under consideration exhibit Coulomb phases in two and three dimensions, where the itinerant electrons are coupled to the localized spins via double exchange interactions. Because of the Hund coupling, magnetic loops naturally emerge from the Coulomb phase and serve as conducting channels for the mobile electrons, leading to doping-dependent rearrangements of the loop ensemble in order to minimize the electronic kinetic energy. At low electron density ρ, the double exchange coupling mainly tends to segment the very long loops winding around the system into smaller ones, while it gradually lifts the extensive degeneracy of the Coulomb phase with increasing ρ. For higher doping, the results are strongly lattice dependent, displaying loop crystals with a given loop length for some specific values of ρ. By varying ρ, they can melt into different mixtures of these loop crystals, recovering extensive degeneracy in the process. Finally, we contrast this with the qualitatively different behavior of analogous models on kagome or triangular lattices.
Determining the spatial altitude of the hydraulic fractures.
NASA Astrophysics Data System (ADS)
Khamiev, Marsel; Kosarev, Victor; Goncharova, Galina
2016-04-01
Mathematical modeling and numerical simulation are the most widely used approaches for solving geological problems. They rely on software tools based on the Monte Carlo method. The results of this project show the possibility of using a PNL tool to determine fracture location. The modeled medium is a homogeneous rock (limestone) cut by a vertical borehole (d = 216 mm) with a 9-mm-thick metal casing. The cement sheath is 35 mm thick. The borehole is filled with fresh water. The rock mass is cut by a crack filled with a mixture of doped (gadolinium oxide Gd2O3) proppant (75%) and water (25%). A pulse neutron logging (PNL) tool is used for quality control in hydraulic fracturing operations. It includes a fast neutron source (a so-called "neutron generator") and a set of thermal (or epithermal) neutron-sensing devices, forming the so-called near (ND) and far (FD) detectors. To evaluate the neutron properties of various segments (sectors) of the rock mass, the detector must register only neutrons that come from that particular formation. This is possible if the detecting block includes several (for example, six) thermal neutron detectors arranged circumferentially inside the tool. As a result we get several independent well logs, each corresponding to a particular rock sector. After processing these synthetic logs we can determine the spatial position of the hydraulic fracture.
Cross sections for Scattering and Mobility of OH- and H3O+ ions in H2O
NASA Astrophysics Data System (ADS)
Petrovic, Zoran; Stojanovic, Vladimir; Maric, Dragana; Jovanovic, Jasmina
2016-05-01
Modelling of plasmas in liquids and in biological and medical applications requires data for scattering of all charged and energetic particles in water vapour. We present swarm parameters for OH- and H3O+, as representatives of the principal negative and positive ions at low pressures, in an attempt to provide data that are not yet available. We applied the Denpoh-Nanbu procedure to calculate cross section sets for collisions of OH- and H3O+ ions with the H2O molecule. Swarm parameters for OH- and H3O+ ions in H2O are calculated by using a well tested Monte Carlo code for a range of E/N (E: electric field, N: gas density) at temperature T = 295 K, in the low pressure limit. Non-conservative processes were shown to strongly influence the transport properties even for OH- ions above the average energy of 0.2 eV (E/N > 200 Td). The data are valid for low pressure water vapour or small amounts in mixtures. They will provide a basis for calculating properties of ion-water molecule clusters that are most commonly found at higher pressures and for modelling of discharges in liquids. Acknowledgment to the Ministry of Education, Science and Technology of Serbia.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10–20 times speedup.
Tijmstra, Jesper; Bolsinova, Maria; Jeon, Minjeong
2018-01-10
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance. To illustrate this approach, we consider a two-class mixture model, where a person's responses to Likert-scale items containing a neutral middle category are either modeled using a generalized partial credit model, or through an IRTree model. In the first model, the middle category ("neither agree nor disagree") is taken to be qualitatively similar to the other categories, and is taken to provide information about the person's endorsement. In the second model, the middle category is taken to be qualitatively different and to reflect a nonresponse choice, which is modeled using an additional latent variable that captures a person's willingness to respond. The mixture model is studied using simulation studies and is applied to an empirical example.
NASA Astrophysics Data System (ADS)
Akasaka, Ryo
This study presents a simple multi-fluid model for Helmholtz energy equations of state. The model contains only three parameters, whereas rigorous multi-fluid models developed for several industrially important mixtures usually have more than 10 parameters and coefficients. Therefore, the model can be applied to mixtures for which experimental data are limited. Vapor-liquid equilibrium (VLE) data for the following seven mixtures have been successfully correlated with the model: CO2 + difluoromethane (R-32), CO2 + trifluoromethane (R-23), CO2 + fluoromethane (R-41), CO2 + 1,1,1,2-tetrafluoroethane (R-134a), CO2 + pentafluoroethane (R-125), CO2 + 1,1-difluoroethane (R-152a), and CO2 + dimethyl ether (DME). The best currently available equations of state for the pure refrigerants were used for the correlations. For all mixtures, average deviations of calculated bubble-point pressures from experimental values are within 2%. The simple multi-fluid model will be helpful for the design and simulation of heat pumps and refrigeration systems using these mixtures as working fluids.
Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Gonthier, Peter L.; Billman, C.; Harding, A. K.
2013-04-01
We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
NASA Astrophysics Data System (ADS)
Eddowes, M. H.; Mills, T. N.; Delpy, D. T.
1995-05-01
A Monte Carlo model of light backscattered from turbid media has been used to simulate the effects of weak localization in biological tissues. A validation technique is used that implies that for the scattering and absorption coefficients and for refractive index mismatches found in tissues, the Monte Carlo method is likely to provide more accurate results than the methods previously used. The model also has the ability to simulate the effects of various illumination profiles and other laboratory-imposed conditions. A curve-fitting routine has been developed that might be used to extract the optical coefficients from the angular intensity profiles seen in experiments on turbid biological tissues, data that could be obtained in vivo.
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
ERIC Educational Resources Information Center
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Queen, Eric M.; Omara, Thomas M.
1990-01-01
A realization of a stochastic atmosphere model for use in simulations is presented. The model provides pressure, density, temperature, and wind velocity as functions of latitude, longitude, and altitude, and is implemented in a three-degree-of-freedom simulation package. This implementation is used in the Monte Carlo simulation of an aeroassisted orbital transfer maneuver, and results are compared to those of a more traditional approach.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
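A modern equivalent of the program outlined here fits in a few lines. The sketch below (parameter values are arbitrary) runs an ensemble of one-dimensional drifting random walks against a boundary that either reflects or absorbs, the two boundary behaviors discussed in the paper:

    import numpy as np

    def random_walk(n_walkers=10_000, n_steps=500, drift=0.05, boundary=20.0,
                    mode="reflect", seed=0):
        """1D random walks with drift; the boundary at +boundary reflects or absorbs."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n_walkers)
        alive = np.ones(n_walkers, dtype=bool)
        for _ in range(n_steps):
            steps = rng.choice([-1.0, 1.0], size=n_walkers) + drift
            x[alive] += steps[alive]
            hit = x >= boundary
            if mode == "reflect":
                x[hit] = 2 * boundary - x[hit]   # mirror walkers back into the domain
            else:                                # absorb: walkers stop at the boundary
                alive &= ~hit
        return x, alive

    positions, alive = random_walk(mode="absorb")
    print(f"absorbed fraction: {1 - alive.mean():.3f}")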
Rupert, C.P.; Miller, C.T.
2008-01-01
We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady-state groundwater flow model. We consider approaches for truncating the infinite-dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that, to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use accelerated Monte Carlo to compare the quality of the polynomial models obtained by all approaches, and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519
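A compact illustration of the series-expansion idea: the Hermite polynomial-chaos coefficients of a lognormal random variable (a pointwise stand-in for hydraulic conductivity) are known in closed form, so a truncated expansion can be compared against Monte Carlo; the truncation order and moments below are illustrative:

```python
import numpy as np
from math import exp, factorial

mu, sigma = 0.0, 1.0    # mean and std of log-conductivity (illustrative)
P = 3                   # truncation order of the Hermite expansion

# Closed-form PCE coefficients of a lognormal in the probabilists'
# Hermite basis: K = sum_i c_i He_i(Z), with c_i = exp(mu + sigma^2/2) sigma^i / i!
c = [exp(mu + sigma**2 / 2) * sigma**i / factorial(i) for i in range(P + 1)]
pce_mean = c[0]
pce_var = sum(c[i]**2 * factorial(i) for i in range(1, P + 1))

rng = np.random.default_rng(3)
K = np.exp(mu + sigma * rng.standard_normal(1_000_000))
exact_var = exp(2 * mu + sigma**2) * (exp(sigma**2) - 1.0)
print(f"mean: PCE {pce_mean:.4f}  MC {K.mean():.4f}")
print(f"var : PCE {pce_var:.4f}  MC {K.var():.4f}  exact {exact_var:.4f}")
# The gap between the truncated PCE variance and the exact value is the
# truncation error; it grows quickly with sigma (the curse noted above).
```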
PDF approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1993-01-01
The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density functions of the species mass fractions and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suited to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte Carlo solutions and ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of the pdf method to 3D flows are also reported.
Three-dimensional electron microscopy simulation with the CASINO Monte Carlo software.
Demers, Hendrix; Poirier-Demers, Nicolas; Couture, Alexandre Réal; Joly, Dany; Guilmain, Marc; de Jonge, Niels; Drouin, Dominique
2011-01-01
Monte Carlo software is widely used to understand the capabilities of electron microscopes. To study more realistic applications with complex samples, 3D Monte Carlo software is needed. In this article, the development of the 3D version of CASINO is presented. The software features a graphical user interface, an efficient (in terms of simulation time and memory use) 3D simulation model, and accurate physics models for electron microscopy applications, and it is freely available to the scientific community at www.gel.usherbrooke.ca/casino/index.html. It can be used to model backscattered, secondary, and transmitted electron signals as well as absorbed energy. Features such as scan points and shot noise allow the simulation and study of realistic experimental conditions. The software has an improved energy range for scanning electron microscopy and scanning transmission electron microscopy applications. Copyright © 2011 Wiley Periodicals, Inc.
Monte Carlo Simulation of THz Multipliers
NASA Technical Reports Server (NTRS)
East, J.; Blakey, P.
1997-01-01
Schottky barrier diode frequency multipliers are critical components in submillimeter and THz space-based Earth observation systems. As the operating frequency of these multipliers has increased, the agreement between design predictions and experimental results has become poorer. Multiplier design is usually based on a nonlinear model using a form of harmonic balance together with a model of the Schottky barrier diode. Conventional voltage-dependent lumped-element models do a poor job of predicting THz-frequency performance. This paper describes a large-signal Monte Carlo simulation of Schottky barrier multipliers. The simulation is a time-dependent particle-field Monte Carlo simulation, with ohmic and Schottky barrier boundary conditions included, combined with a fixed-point solution of the nonlinear circuit interaction. The results point out some important time constants in varactor operation and describe the effects of current saturation and nonlinear resistances on multiplier operation.
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model-independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to its minimal theoretical bias, this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties, which are taken into account rigorously within the algorithm, is outlined. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons, are used as inputs to the search algorithm.
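A bare-bones sketch of the region-scanning idea behind such searches: compute a Poisson p-value for every contiguous region of a binned distribution and calibrate the smallest one with pseudo-experiments (the look-elsewhere effect). Systematic uncertainties and deficits, which the real algorithm handles, are ignored here, and the bin contents are invented:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)

def scan_min_p(data, mc):
    """Smallest Poisson p-value over all contiguous bin regions
    (excess-only; systematics ignored for brevity)."""
    best = 1.0
    for i in range(len(data)):
        for j in range(i + 1, len(data) + 1):
            n, b = data[i:j].sum(), mc[i:j].sum()
            best = min(best, poisson.sf(n - 1, b))   # P(N >= n | b)
    return best

mc_expect = np.array([120.0, 80.0, 40.0, 18.0, 7.0, 2.5, 0.8])
observed  = np.array([115,    85,   42,   30,   9,   3,   1])

p_data = scan_min_p(observed, mc_expect)
# Calibrate the minimum against pseudo-experiments drawn from the MC expectation.
pseudo = [scan_min_p(rng.poisson(mc_expect), mc_expect) for _ in range(2000)]
p_global = np.mean([p <= p_data for p in pseudo])
print(f"min regional p-value: {p_data:.2e}, calibrated global p: {p_global:.3f}")
```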
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample-path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders of magnitude have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample-path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^(-1/2)) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample-path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence-rate behaviour is, however, non-trivial, due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
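A sketch of the combination for an immigration-death process (X to X+1 at rate k1, X to X-1 at rate k2·X), where each τ-leap draws its Poisson increments by inverting either i.i.d. uniforms (Monte Carlo) or scrambled Sobol' points (randomised quasi-Monte Carlo). This is a generic illustration, not one of the paper's test problems:

```python
import numpy as np
from scipy.stats import poisson, qmc

k1, k2 = 10.0, 0.1               # immigration rate, per-capita death rate
tau, n_steps = 0.1, 50           # leap size and number of leaps

def tau_leap_final_state(u):
    """Tau-leaping from X=0; u has shape (n_paths, 2*n_steps), one uniform
    per reaction channel per leap, inverted through the Poisson CDF."""
    u = np.clip(u, 1e-12, 1.0 - 1e-12)   # guard the ppf edge cases
    x = np.zeros(u.shape[0])
    for s in range(n_steps):
        births = poisson.ppf(u[:, 2 * s], k1 * tau)
        deaths = poisson.ppf(u[:, 2 * s + 1], k2 * x * tau)
        x = np.maximum(x + births - deaths, 0.0)
    return x

m = 12                                            # 2**m sample paths
x_mc = tau_leap_final_state(
    np.random.default_rng(5).random((2**m, 2 * n_steps)))
sobol = qmc.Sobol(d=2 * n_steps, scramble=True, seed=5)
x_qmc = tau_leap_final_state(sobol.random_base2(m))

T = n_steps * tau
exact = k1 / k2 * (1.0 - np.exp(-k2 * T))         # continuous-time mean
print(f"E[X(T)] exact {exact:.2f}, MC {x_mc.mean():.2f}, RQMC {x_qmc.mean():.2f}")
```

Repeating both estimators over independent replications and comparing the spread of the means is the usual way to see the faster RQMC error decay the abstract refers to.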
Extracting Spurious Latent Classes in Growth Mixture Modeling with Nonnormal Errors
ERIC Educational Resources Information Center
Guerra-Peña, Kiero; Steinley, Douglas
2016-01-01
Growth mixture modeling is generally used for two purposes: (1) to identify mixtures of normal subgroups and (2) to approximate oddly shaped distributions by a mixture of normal components. Often in applied research this methodology is applied to both of these situations indistinctly: using the same fit statistics and likelihood ratio tests. This…
A deterministic partial differential equation model for dose calculation in electron radiotherapy.
Duclous, R; Dubroca, B; Frank, M
2010-07-07
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free-streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.
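A much-reduced stand-in for the numerics: the paper solves an M1-closed moment system with an HLLC scheme, while the sketch below advances the simpler P1 system (d/dt E + d/dx F = 0, d/dt F + d/dx (E/3) = -sigma_t F, in units c = 1) with a basic HLL flux and crude zero-gradient boundaries; all parameters are illustrative:

```python
import numpy as np

nx, L, cfl, sigma_t = 200, 1.0, 0.45, 5.0
dx = L / nx
a = 1.0 / np.sqrt(3.0)                   # P1 wave speeds are +/- 1/sqrt(3)
dt = cfl * dx / a

E = np.full(nx, 1e-6)
F = np.zeros(nx)
E[:5] = 1.0                              # initial pulse at the left edge

def hll_flux(UL, UR):
    """HLL numerical flux for U = (E, F) with physical flux f(U) = (F, E/3)."""
    fL = np.array([UL[1], UL[0] / 3.0])
    fR = np.array([UR[1], UR[0] / 3.0])
    return (a * fL + a * fR - a * a * (UR - UL)) / (2.0 * a)

for _ in range(150):
    U = np.stack([E, F])
    flux = np.empty((2, nx + 1))
    for i in range(1, nx):
        flux[:, i] = hll_flux(U[:, i - 1], U[:, i])
    flux[:, 0] = flux[:, 1]              # crude boundary treatment
    flux[:, nx] = flux[:, nx - 1]
    E -= dt / dx * (flux[0, 1:] - flux[0, :-1])
    F -= dt / dx * (flux[1, 1:] - flux[1, :-1])
    F /= (1.0 + dt * sigma_t)            # implicit scattering drag on the flux

print(f"total energy in slab: {E.sum() * dx:.4f}  (conserved up to boundaries)")
```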
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite-differencing schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that, in order to conserve the mass fractions absolutely, further restrictions must be added to the scheme, namely α_j + γ_j = α_{j-1} + γ_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although absolute conservation seems impossible for non-uniform flows, the present scheme reduces the error considerably.
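One consistent reading of this constraint, in which cell j mixes in weight α_j from its right neighbour and γ_j from its left neighbour, can be checked numerically: total scalar content is conserved exactly when α_j + γ_j = α_{j-1} + γ_{j+1} holds (satisfied trivially below by uniform coefficients) and drifts otherwise. The neighbour assignment is an assumption, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(6)
J = 32                                   # periodic 1D grid

def step(phi, alpha, gamma):
    """Cell j keeps weight 1 - alpha_j - gamma_j of its own value and mixes
    in alpha_j from its right neighbour and gamma_j from its left one."""
    return ((1.0 - alpha - gamma) * phi
            + alpha * np.roll(phi, -1)   # phi_{j+1}
            + gamma * np.roll(phi, 1))   # phi_{j-1}

phi0 = rng.random(J)

alpha_u = rng.uniform(0.1, 0.3, J)       # unconstrained: conservation fails
gamma_u = rng.uniform(0.1, 0.3, J)
alpha_c = np.full(J, 0.2)                # uniform coefficients satisfy
gamma_c = np.full(J, 0.2)                # alpha_j + gamma_j = alpha_{j-1} + gamma_{j+1}

phi_u, phi_c = phi0.copy(), phi0.copy()
for _ in range(100):
    phi_u = step(phi_u, alpha_u, gamma_u)
    phi_c = step(phi_c, alpha_c, gamma_c)

print(f"initial total:       {phi0.sum():.6f}")
print(f"unconstrained total: {phi_u.sum():.6f}")
print(f"constrained total:   {phi_c.sum():.6f}")
```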
NASA Astrophysics Data System (ADS)
De Napoli, M.; Romano, F.; D'Urso, D.; Licciardello, T.; Agodi, C.; Candiano, G.; Cappuzzello, F.; Cirrone, G. A. P.; Cuttone, G.; Musumarra, A.; Pandola, L.; Scuderi, V.
2014-12-01
When a carbon beam interacts with human tissues, many secondary fragments are produced in the tumor region and the surrounding healthy tissues. In hadrontherapy, precise dose calculations therefore require Monte Carlo tools equipped with complex nuclear reaction models. To obtain realistic predictions, however, simulation codes must be validated against experimental results; the wider the dataset, the more finely the models can be tuned. Since no fragmentation data for tissue-equivalent materials at Fermi energies are available in the literature, we measured the secondary fragments produced by the interaction of a 55.6 MeV/u 12C beam with thick muscle and cortical bone targets. Three reaction models used by the Geant4 Monte Carlo code (the Binary Light Ion Cascade, the Quantum Molecular Dynamics and the Liège Intranuclear Cascade) have been benchmarked against the collected data. In this work we present the experimental results and discuss the predictive power of the above-mentioned models.
Raman Monte Carlo simulation for light propagation for tissue with embedded objects
NASA Astrophysics Data System (ADS)
Periyasamy, Vijitha; Jaafar, Humaira Bte; Pramanik, Manojit
2018-02-01
Monte Carlo (MC) simulation is one of the most prominent simulation techniques and is rapidly becoming the model of choice for studying light-tissue interaction. Monte Carlo simulation for light transport in multi-layered tissue (MCML) is adapted and extended to different geometries by integrating embedded objects of various shapes (i.e., sphere, cylinder, cuboid and ellipsoid) into the multi-layered structure. These geometries are useful for providing realistic tissue structures, such as models of lymph nodes, tumors, blood vessels, the head and other simulation media. MC simulations were performed on various geometric media. The simulation of MCML with embedded objects (MCML-EO) was extended to propagate photons through the defined medium with Raman scattering, recording the location of Raman photon generation. Simulations were carried out on a modelled breast tissue with a tumor (spherical and ellipsoidal) and blood vessels (cylindrical). Results are presented as both A-line and B-line scans of the embedded objects, determining the spatial locations where Raman photons were generated. Studies were performed for different Raman probabilities.
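The geometric core of such an extension is deciding whether a photon's next step crosses an embedded object. A sketch of the ray-sphere test follows, with hypothetical coordinates and step length:

```python
import numpy as np

def sphere_hit(p, d, s, centre, radius):
    """Distance along unit direction d from position p to the sphere
    surface, if the photon's next step of length s crosses it; else None."""
    oc = p - centre
    b = np.dot(oc, d)                        # d must be unit length
    disc = b * b - (np.dot(oc, oc) - radius**2)
    if disc < 0.0:
        return None                          # the line misses the sphere
    root = np.sqrt(disc)
    for t in (-b - root, -b + root):         # nearer crossing first
        if 1e-9 < t <= s:
            return t
    return None

# Photon at the origin heading in +z with a step of length 1.0;
# hypothetical tumour sphere embedded at depth 0.8.
p = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
t = sphere_hit(p, d, 1.0, centre=np.array([0.0, 0.0, 0.8]), radius=0.3)
if t is not None:
    print(f"photon enters the embedded object after {t:.3f} of its step;")
    print("the remaining step is retraced with the object's mu_s and mu_a")
```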
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low-flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities of event types, such as a single count in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge-cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation that randomizes the location of impact and finally produces the required probabilities. The parameters of the charge-cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies and thresholds. The frequency-dependent DQE of a photon-counting detector in the low-flux limit can be described by an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
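A sketch of how such event-type probabilities might be estimated, assuming a separable Gaussian charge cloud of fixed width and an impact position uniform over the central pixel; pitch, cloud width and threshold are illustrative, and the MOCASSIM energy-deposition stage is omitted entirely:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(7)
pitch, cloud_sigma = 0.225, 0.040   # mm; illustrative pixel pitch and cloud width
threshold = 0.30                    # counting threshold, fraction of total charge

def pixel_fraction(x0, y0):
    """Fraction of a Gaussian charge cloud centred at (x0, y0) collected
    by the pixel [0, pitch] x [0, pitch] (separable Gaussian integral)."""
    def seg(a):
        return 0.5 * (erf((pitch - a) / (np.sqrt(2) * cloud_sigma))
                      - erf((-a) / (np.sqrt(2) * cloud_sigma)))
    return seg(x0) * seg(y0)

counts = {"single_central": 0, "double": 0, "neighbour_only": 0, "lost": 0}
n = 20_000
for _ in range(n):
    x0, y0 = rng.uniform(0.0, pitch, 2)      # impact inside the central pixel
    central = pixel_fraction(x0, y0) > threshold
    fired = sum(
        pixel_fraction(x0 - dx * pitch, y0 - dy * pitch) > threshold
        for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    if central and fired == 0:
        counts["single_central"] += 1
    elif central:
        counts["double"] += 1
    elif fired:
        counts["neighbour_only"] += 1
    else:
        counts["lost"] += 1

for k, v in counts.items():
    print(f"{k:15s}: {v / n:.3f}")
```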
A Monte Carlo model for the internal dosimetry of choroid plexuses in nuclear medicine procedures.
Amato, Ernesto; Cicone, Francesco; Auditore, Lucrezia; Baldari, Sergio; Prior, John O; Gnesin, Silvano
2018-05-01
Choroid plexuses are vascular structures located in the brain ventricles, showing specific uptake of some diagnostic and therapeutic radiopharmaceuticals currently under clinical investigation, such as integrin-binding arginine-glycine-aspartic acid (RGD) peptides. No specific geometry for the choroid plexuses has been implemented in commercially available software for internal dosimetry. The aims of the present study were to assess the dependence of the absorbed dose to the choroid plexuses on the organ geometry implemented in Monte Carlo simulations, and to propose an analytical model for the internal dosimetry of these structures for the 18F, 64Cu, 67Cu, 68Ga, 90Y, 131I and 177Lu nuclides. A GAMOS Monte Carlo simulation based on direct organ segmentation was taken as the gold standard to validate a second simulation based on a simplified geometrical model of the choroid plexuses. Both simulations were compared with the OLINDA/EXM sphere model. The gold standard and the simplified geometrical model gave similar dosimetry results (dose difference < 3.5%), indicating that the latter can be considered a satisfactory approximation of the real geometry. In contrast, the sphere model systematically overestimated the absorbed dose compared to both Monte Carlo models (range: 4-50% dose difference), depending on the isotope energy and organ mass. Therefore, the simplified geometrical model was adopted to introduce an analytical approach to choroid plexus dosimetry in the 2-16 g mass range. The proposed model enables the estimation of the choroid plexus dose by a simple bi-parametric function, once the organ mass and the residence time of the radiopharmaceutical under investigation are provided. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
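The published fit itself is not reproduced here; the sketch below only illustrates the shape such a bi-parametric model could take, assuming a power-law mass dependence of the dose factor, with explicitly hypothetical coefficients:

```python
# Sketch of a bi-parametric dose model of the assumed form
#   D = A_tilde * a * m**(-b),
# with A_tilde the time-integrated activity (injected activity times
# residence time), m the organ mass, and (a, b) nuclide-specific fit
# parameters. The coefficients below are HYPOTHETICAL placeholders,
# not the published fit.

COEFFS = {                      # (a, b) per nuclide, illustrative only
    "Y-90":   (9.0e-5, 0.95),   # a in Gy * g**b / (MBq * h)
    "Lu-177": (2.5e-5, 0.92),
}

def choroid_plexus_dose(nuclide, mass_g, residence_time_h, injected_MBq):
    """Absorbed dose (Gy) from a power-law mass scaling of the dose factor."""
    a, b = COEFFS[nuclide]
    a_tilde = injected_MBq * residence_time_h   # MBq * h
    return a_tilde * a * mass_g ** (-b)

print(f"{choroid_plexus_dose('Lu-177', 8.0, 2.0, 7400.0):.3f} Gy (illustrative)")
```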
RNA folding kinetics using Monte Carlo and Gillespie algorithms.
Clote, Peter; Bayegan, Amir H
2018-04-01
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (roughly 20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that, asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to ⟨D⟩ times that of the Gillespie algorithm, where ⟨D⟩ denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by ⟨D⟩; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence; see http://bioinformatics.bc.edu/clote/RNAexpNumNbors .
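A toy comparison of the two algorithms on a small one-dimensional energy landscape (a stand-in for an RNA folding network), using Metropolis rates: the Monte Carlo walker advances time by one unit per proposed step, while Gillespie draws exponential waiting times from the total exit rate. The landscape and rate convention are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy landscape: states 0..7 with energies E, moves i <-> i+1,
# Metropolis rates r(i, j) = min(1, exp(-(E_j - E_i))).
E = np.array([0.0, 1.0, 0.2, 1.5, 0.3, 2.0, 0.5, 0.0])
K = len(E) - 1

def neighbours(i):
    return [j for j in (i - 1, i + 1) if 0 <= j <= K]

def rate(i, j):
    return min(1.0, float(np.exp(-(E[j] - E[i]))))

def mfpt(method, n_runs=1000):
    """Mean first-passage time from state 0 to state K."""
    total = 0.0
    for _ in range(n_runs):
        i, t = 0, 0.0
        while i != K:
            nbrs = neighbours(i)
            if method == "gillespie":
                rates = np.array([rate(i, j) for j in nbrs])
                R = rates.sum()
                t += rng.exponential(1.0 / R)         # exponential waiting time
                i = nbrs[rng.choice(len(nbrs), p=rates / R)]
            else:                                     # Metropolis Monte Carlo
                t += 1.0                              # one time unit per step
                j = nbrs[rng.integers(len(nbrs))]     # uniform proposal
                if rng.random() < rate(i, j):
                    i = j
        total += t
    return total / n_runs

print(f"MFPT (Gillespie):   {mfpt('gillespie'):7.1f}")
print(f"MFPT (Monte Carlo): {mfpt('mc'):7.1f}  # need not agree: non-regular network")
```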