Sample records for perfect Bayesian equilibrium

  1. A Bayesian test for Hardy–Weinberg equilibrium of biallelic X-chromosomal markers

    PubMed Central

    Puig, X; Ginebra, J; Graffelman, J

    2017-01-01

    The X chromosome is a relatively large chromosome, harboring a great deal of genetic information. Much of the statistical analysis of X-chromosomal information is complicated by the fact that males have only one copy. Recently, frequentist statistical tests for Hardy–Weinberg equilibrium have been proposed specifically for dealing with markers on the X chromosome. Bayesian test procedures for Hardy–Weinberg equilibrium on the autosomes have been described, but Bayesian work on the X chromosome in this context is lacking. This paper gives the first Bayesian approach for testing Hardy–Weinberg equilibrium with biallelic markers on the X chromosome. Marginal and joint posterior distributions for the inbreeding coefficient in females and the male-to-female allele frequency ratio are computed and used for statistical inference. The paper gives a detailed account of the proposed Bayesian test and illustrates it with data from the 1000 Genomes project. In that implementation, a novel approach to tackling multiple testing from a Bayesian perspective, through posterior predictive checks, is used. PMID:28900292
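
    A minimal numerical sketch of the idea (not the authors' exact model): with flat priors, the posterior of the female genotype frequencies is Dirichlet and the posterior of the male allele frequency is Beta, from which draws of the female inbreeding coefficient and the male-to-female allele-frequency ratio follow directly. All counts below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      n_f = np.array([40, 45, 15])   # hypothetical female genotype counts (AA, AB, BB)
      n_m = np.array([55, 45])       # hypothetical male allele counts (A, B)

      draws = 10_000
      theta = rng.dirichlet(n_f + 1, size=draws)         # female genotype frequencies
      p_f = theta[:, 0] + 0.5 * theta[:, 1]              # female frequency of allele A
      f = 1.0 - theta[:, 1] / (2 * p_f * (1 - p_f))      # inbreeding coefficient in females
      p_m = rng.beta(n_m[0] + 1, n_m[1] + 1, size=draws) # male allele frequency
      ratio = p_m / p_f                                  # male-to-female allele-frequency ratio

      # Under Hardy-Weinberg equilibrium at an X-linked marker, f = 0 and ratio = 1
      print("f 95% interval:", np.quantile(f, [0.025, 0.975]))
      print("ratio 95% interval:", np.quantile(ratio, [0.025, 0.975]))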

  2. Modeling Misbehavior in Cooperative Diversity: A Dynamic Game Approach

    NASA Astrophysics Data System (ADS)

    Dehnie, Sintayehu; Memon, Nasir

    2009-12-01

    Cooperative diversity protocols are designed with the assumption that terminals always help each other in a socially efficient manner. This assumption may not be valid in commercial wireless networks, where terminals may misbehave for selfish or malicious reasons. The presence of misbehaving terminals creates a social dilemma in which terminals face uncertainty about the cooperative behavior of other terminals in the network. Cooperation in a social dilemma is characterized by a suboptimal Nash equilibrium at which wireless terminals opt out of cooperation. Hence, without establishing a mechanism to detect and mitigate the effects of misbehavior, it is difficult to maintain socially optimal cooperation. In this paper, we first examine the effects of misbehavior assuming a static game model and show that cooperation under existing cooperative protocols is characterized by a noncooperative Nash equilibrium. Using evolutionary game dynamics, we show that a small number of mutants can successfully invade a population of cooperators, which indicates that misbehavior is an evolutionarily stable strategy (ESS). Our main goal is to design a mechanism that enables wireless terminals to select reliable partners in the presence of uncertainty. To this end, we formulate cooperative diversity as a dynamic game with incomplete information. We show that the proposed dynamic game formulation satisfies the conditions for the existence of a perfect Bayesian equilibrium.
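
    A toy replicator-dynamics sketch of the invasion result, with hypothetical prisoner's-dilemma payoffs rather than the paper's protocol model: a small fraction of defecting mutants takes over a population of cooperators, so cooperation is not evolutionarily stable.

      import numpy as np

      # Payoff matrix A[i, j]: payoff to strategy i against j (0 = cooperate, 1 = defect).
      # Hypothetical prisoner's-dilemma values; defection strictly dominates.
      A = np.array([[3.0, 0.0],
                    [5.0, 1.0]])
      x = np.array([0.99, 0.01])   # population state: 99% cooperators, 1% mutant defectors

      for _ in range(200):
          fitness = A @ x
          x = x * fitness / (x @ fitness)   # discrete-time replicator update

      print(x)   # approaches [0, 1]: the mutants invade and defection takes over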

  3. Bayesian Estimation of Fish Disease Prevalence from Pooled Samples Incorporating Sensitivity and Specificity

    NASA Astrophysics Data System (ADS)

    Williams, Christopher J.; Moffitt, Christine M.

    2003-03-01

    An important emerging issue in fisheries biology is the health of free-ranging populations of fish, particularly with respect to the prevalence of certain pathogens. For many years, pathologists focused on captive populations and interest was in the presence or absence of certain pathogens, so it was economically attractive to test pooled samples of fish. Recently, investigators have begun to study individual fish prevalence from pooled samples. Estimation of disease prevalence from pooled samples is straightforward when assay sensitivity and specificity are perfect, but this assumption is unrealistic. Here we illustrate the use of a Bayesian approach for estimating disease prevalence from pooled samples when sensitivity and specificity are not perfect. We also focus on diagnostic plots to monitor the convergence of the Gibbs-sampling-based Bayesian analysis. The methods are illustrated with a sample data set.
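
    A grid-approximation sketch of the posterior (the paper uses Gibbs sampling; the pool counts, sensitivity and specificity below are made up). The key ingredient is the probability that a pool of k fish tests positive when the individual prevalence is p.

      import numpy as np

      k, n_pools, n_pos = 10, 50, 12   # hypothetical pooled-sample data
      se, sp = 0.95, 0.98              # assumed assay sensitivity and specificity

      p = np.linspace(1e-6, 1 - 1e-6, 2000)
      # P(pool positive | p): a true positive unless all k fish are uninfected,
      # plus a false positive on an all-negative pool
      pi = se * (1 - (1 - p) ** k) + (1 - sp) * (1 - p) ** k
      log_post = n_pos * np.log(pi) + (n_pools - n_pos) * np.log(1 - pi)  # flat prior on p
      post = np.exp(log_post - log_post.max())
      post /= np.trapz(post, p)

      print("posterior mode of prevalence:", p[np.argmax(post)])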

  4. A Bayesian perspective on Markovian dynamics and the fluctuation theorem

    NASA Astrophysics Data System (ADS)

    Virgo, Nathaniel

    2013-08-01

    One of E. T. Jaynes' most important achievements was to derive statistical mechanics from the maximum entropy (MaxEnt) method. I re-examine a relatively new result in statistical mechanics, the Evans-Searles fluctuation theorem, from a MaxEnt perspective. This is done in the belief that interpreting such results in Bayesian terms will lead to new advances in statistical physics. The version of the fluctuation theorem that I will discuss applies to discrete, stochastic systems that begin in a non-equilibrium state and relax toward equilibrium. I will show that for such systems the fluctuation theorem can be seen as a consequence of the fact that the equilibrium distribution must obey the property of detailed balance. Although the principle of detailed balance applies only to equilibrium ensembles, it puts constraints on the form of non-equilibrium trajectories. This will be made clear by taking a novel kind of Bayesian perspective, in which the equilibrium distribution is seen as a prior over the system's set of possible trajectories. Non-equilibrium ensembles are calculated from this prior using Bayes' theorem, with the initial conditions playing the role of the data. I will also comment on the implications of this perspective for the question of how to derive the second law.
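
    In the notation assumed here (p_eq the equilibrium distribution, W the transition rates, and Omega the dissipation accumulated along a trajectory), detailed balance and one common statement of the resulting transient fluctuation theorem read as follows; conventions (e.g. factors of k_B) vary between references.

      p_{\mathrm{eq}}(i)\, W(i \to j) \;=\; p_{\mathrm{eq}}(j)\, W(j \to i),
      \qquad
      \frac{P(\Omega = A)}{P(\Omega = -A)} \;=\; e^{A}.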

  5. The Development of Bayesian Theory and Its Applications in Business and Bioinformatics

    NASA Astrophysics Data System (ADS)

    Zhang, Yifei

    2018-03-01

    Bayesian Theory originated with an essay by the British mathematician Thomas Bayes, published in 1763, and after its development in the 20th century, Bayesian Statistics has come to play a significant part in statistical study across all fields. Due to recent breakthroughs in high-dimensional integration, Bayesian Statistics has been improved and perfected, and it can now be used to solve problems that Classical Statistics failed to solve. This paper summarizes the history, concepts and applications of Bayesian Statistics in five parts: the history of Bayesian Statistics, the weaknesses of Classical Statistics, and Bayesian Theory together with its development and applications. The first two parts compare Bayesian Statistics and Classical Statistics at a macroscopic level, and the last three parts focus on Bayesian Theory specifically, from introducing particular Bayesian concepts to tracing their development and, finally, their applications.

  6. A computer program for two-dimensional and axisymmetric nonreacting perfect gas and equilibrium chemically reacting laminar, transitional and/or turbulent boundary layer flows

    NASA Technical Reports Server (NTRS)

    Miner, E. W.; Anderson, E. C.; Lewis, C. H.

    1971-01-01

    A computer program is described in detail for laminar, transitional, and/or turbulent boundary-layer flows of non-reacting (perfect gas) and reacting gas mixtures in chemical equilibrium. An implicit finite difference scheme was developed for both two dimensional and axisymmetric flows over bodies, and in rocket nozzles and hypervelocity wind tunnel nozzles. The program, program subroutines, variables, and input and output data are described. Also included is the output from a sample calculation of fully developed turbulent, perfect gas flow over a flat plate. Input data coding forms and a FORTRAN source listing of the program are included. A method is discussed for obtaining thermodynamic and transport property data which are required to perform boundary-layer calculations for reacting gases in chemical equilibrium.

  7. Hepatitis disease detection using Bayesian theory

    NASA Astrophysics Data System (ADS)

    Maseleno, Andino; Hidayati, Rohmah Zahroh

    2017-02-01

    This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and to display the result of the diagnosis process. Bayesian theory was rediscovered and perfected by Laplace; the basic idea is to use the known prior probabilities and conditional probability densities, via Bayes' theorem, to calculate the corresponding posterior probability, and then to use the posterior probability for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever and headache, and we compute the probability of hepatitis given the presence of these symptoms. The results reveal that Bayesian theory successfully identifies the existence of hepatitis disease.
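
    A minimal sketch of the computation, with illustrative numbers not taken from the paper, assuming the three symptoms are conditionally independent given the disease state:

      # Bayes' theorem for P(hepatitis | malaise, fever, headache)
      prior = 0.05                     # assumed P(hepatitis)
      p_sym_h = 0.80 * 0.70 * 0.60     # assumed P(symptoms | hepatitis), naive independence
      p_sym_not = 0.20 * 0.10 * 0.15   # assumed P(symptoms | no hepatitis)

      evidence = prior * p_sym_h + (1 - prior) * p_sym_not
      posterior = prior * p_sym_h / evidence
      print(f"P(hepatitis | malaise, fever, headache) = {posterior:.3f}")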

  8. Quantity Competition in a Differentiated Duopoly

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.; Ferreira, Flávio; Ferreira, Miguel; Pinto, Alberto A.

    In this paper, we consider a Stackelberg duopoly competition with differentiated goods, linear and symmetric demand and with unknown costs. In our model, the two firms play a non-cooperative game with two stages: in the first stage, firm F1 chooses the quantity, q1, that it is going to produce; in the second stage, firm F2 observes the quantity q1 produced by firm F1 and chooses its own quantity q2. Firms choose their output levels in order to maximise their profits. We suppose that each firm has two different technologies, and uses one of them following a certain probability distribution. The use of either one or the other technology affects the unitary production cost. We show that there is exactly one perfect Bayesian equilibrium for this game. We analyse the variations of the expected profits with the parameters of the model, namely with the parameters of the probability distributions, and with the parameters of the demand and differentiation.
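
    A minimal sketch of the backward-induction logic under one concrete specification, with inverse demands p_i = alpha - q_i - gamma*q_j and two possible unit costs per firm; the paper's exact demand system and probability distributions are not reproduced here, and all numbers are assumed.

      import numpy as np

      alpha, gamma = 10.0, 0.5          # assumed demand level and differentiation
      costs = np.array([1.0, 2.0])      # unit costs of the two technologies
      probs = np.array([0.6, 0.4])      # assumed technology probabilities
      Ec2 = probs @ costs               # F1's expectation of F2's unit cost

      def follower_quantity(q1, c2):
          # Stage 2: F2 observes q1 and its own realized cost c2
          return (alpha - c2 - gamma * q1) / 2

      def leader_quantity(c1):
          # Stage 1: F1 maximizes expected profit, anticipating F2's response
          return (alpha - c1 - gamma * (alpha - Ec2) / 2) / (2 - gamma ** 2)

      for c1 in costs:
          q1 = leader_quantity(c1)
          q2 = [follower_quantity(q1, c2) for c2 in costs]
          print(f"c1 = {c1}: q1 = {q1:.3f}, q2(c2) = {np.round(q2, 3)}")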

  9. Strong quantum solutions in conflicting-interest Bayesian games

    NASA Astrophysics Data System (ADS)

    Rai, Ashutosh; Paul, Goutam

    2017-10-01

    Quantum entanglement has been recently demonstrated as a useful resource in conflicting-interest games of incomplete information between two players, Alice and Bob [Pappa et al., Phys. Rev. Lett. 114, 020401 (2015), 10.1103/PhysRevLett.114.020401]. The general setting for such games is that of correlated strategies, where the correlation between competing players is established through a trusted common adviser; however, players need not reveal their input to the adviser. So far, the quantum advantage in such games has been revealed only in a restricted sense. Given a quantum correlated equilibrium strategy, one of the players can still receive a higher average payoff than the quantum one with some classically correlated equilibrium strategy. In this work, by considering a class of asymmetric Bayesian games, we show the existence of games with a quantum correlated equilibrium where the average payoff of both players exceeds the respective individual maximum for each player over all classically correlated equilibria.

  10. Competitive Cyber-Insurance and Internet Security

    NASA Astrophysics Data System (ADS)

    Shetty, Nikhil; Schwartz, Galina; Felegyhazi, Mark; Walrand, Jean

    This paper investigates how competitive cyber-insurers affect network security and the welfare of the networked society. In our model, a user's probability of incurring damage (from being attacked) depends on both his security and the network security, with the latter taken by individual users as given. First, we consider cyber-insurers who cannot observe (and thus cannot affect) individual user security. This asymmetric information causes moral hazard. Then, for most parameters, no equilibrium exists: the insurance market is missing. Even if an equilibrium exists, the insurance contract covers only a minor fraction of the damage; network security worsens relative to the no-insurance equilibrium. Second, we consider insurers with perfect information about their users' security. Here, user security is perfectly enforceable (at zero cost); each insurance contract stipulates the required user security. The unique equilibrium contract covers the entire user damage. Still, for most parameters, network security worsens relative to the no-insurance equilibrium. Although cyber-insurance improves user welfare, in general, competitive cyber-insurers fail to improve network security.

  11. A Mechanical Analogue for Chemical Potential, Extent of Reaction, and the Gibbs Energy.

    ERIC Educational Resources Information Center

    Glass, Samuel V.; DeKock, Roger L.

    1998-01-01

    Presents an analogy that relates the one-dimensional mechanical equilibrium of a rigid block between two Hooke's law springs and the chemical equilibrium of two perfect gases using ordinary materials. (PVD)

  12. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    NASA Technical Reports Server (NTRS)

    Glass, Christopher E.

    1990-01-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  13. Computer program to solve two-dimensional shock-wave interference problems with an equilibrium chemically reacting air model

    NASA Astrophysics Data System (ADS)

    Glass, Christopher E.

    1990-08-01

    The computer program EASI, an acronym for Equilibrium Air Shock Interference, was developed to calculate the inviscid flowfield, the maximum surface pressure, and the maximum heat flux produced by six shock wave interference patterns on a 2-D, cylindrical configuration. Thermodynamic properties of the inviscid flowfield are determined using either an 11-species, 7-reaction equilibrium chemically reacting air model or a calorically perfect air model. The inviscid flowfield is solved using the integral form of the conservation equations. Surface heating calculations at the impingement point for the equilibrium chemically reacting air model use variable transport properties and specific heat. However, for the calorically perfect air model, heating rate calculations use a constant Prandtl number. Sample calculations of the six shock wave interference patterns, a listing of the computer program, and flowcharts of the programming logic are included.

  14. Sequential Inverse Problems: Bayesian Principles and the Logistic Map Example

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Farmer, Chris L.; Moroz, Irene M.

    2010-09-01

    Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
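
    A grid-based Bayesian filter sketch for the perfect-model scenario of this example; the map parameter, noise level and grid are assumed, and the density is propagated by re-binning rather than by the paper's exact implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      r, sigma, steps = 3.7, 0.05, 30          # assumed map parameter and noise level
      grid = np.linspace(0.001, 0.999, 1000)   # discretized state space

      # Simulate a true trajectory and noisy observations
      x, truth, obs = 0.3, [], []
      for _ in range(steps):
          x = r * x * (1 - x)
          truth.append(x)
          obs.append(x + sigma * rng.standard_normal())

      # Bayesian filter: push the density through the map, then apply Bayes' rule
      w = np.full(grid.size, 1.0 / grid.size)
      edges = np.linspace(0.0, 1.0, grid.size + 1)
      for y in obs:
          fx = r * grid * (1 - grid)                       # deterministic forecast step
          w, _ = np.histogram(fx, bins=edges, weights=w)   # re-bin the forecast density
          w = np.maximum(w, 1e-300)
          w *= np.exp(-0.5 * ((y - grid) / sigma) ** 2)    # observation likelihood
          w /= w.sum()

      print("MAP estimate:", grid[np.argmax(w)], " true state:", truth[-1])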

  15. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the results with an absorbing set are plotted from the differential equations and verified. Through forward inference, the reliability value of the control unit is determined in the different kinds of modes. Finally, weak nodes in the control unit are identified.
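
    A sketch of the Markov-process layer only (the DBN construction is omitted): state probabilities of a three-state degrading element obtained by integrating the Kolmogorov forward equations, with hypothetical transition rates.

      import numpy as np

      # 0 = good, 1 = degraded, 2 = failed. With mu10 = 0 there is no repair and
      # state {2} forms an absorbing set; setting mu10 > 0 models (im)perfect repair.
      lam01, lam12, mu10 = 0.02, 0.05, 0.0
      Q = np.array([[-lam01,             lam01,   0.0],
                    [  mu10, -(lam12 + mu10),   lam12],
                    [   0.0,               0.0,   0.0]])   # generator; row sums are zero

      p = np.array([1.0, 0.0, 0.0])     # element starts in the good state
      dt, T = 0.1, 100.0
      for _ in range(int(T / dt)):
          p = p + dt * (p @ Q)          # forward Euler on dp/dt = p Q

      print("state probabilities:", p, " reliability:", p[:2].sum())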

  16. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-11-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  17. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero.

    PubMed

    Zhang, G; Stillinger, F H; Torquato, S

    2016-11-28

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a "perfect glass". A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  18. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    PubMed Central

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-01-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite. PMID:27892452

  19. Stochastic game theory: for playing games, not just for doing theory.

    PubMed

    Goeree, J K; Holt, C A

    1999-09-14

    Recent theoretical advances have dramatically increased the relevance of game theory for predicting human behavior in interactive situations. By relaxing the classical assumptions of perfect rationality and perfect foresight, we obtain much improved explanations of initial decisions, dynamic patterns of learning and adjustment, and equilibrium steady-state distributions.

  20. Modeling Dark Energy Through an Ising Fluid with Network Interactions

    NASA Astrophysics Data System (ADS)

    Luongo, Orlando; Tommasini, Damiano

    2014-12-01

    We show that the dark energy (DE) effects can be modeled by using an Ising perfect fluid with network interactions, whose low redshift equation of state (EoS), i.e. ω0, becomes ω0 = -1 as in the ΛCDM model. In our picture, DE is characterized by a barotropic fluid on a lattice in the equilibrium configuration. Thus, mimicking the spin interaction by replacing the spin variable with an occupational number, the pressure naturally becomes negative. We find that the corresponding EoS mimics the effects of a variable DE term, whose limiting case reduces to the cosmological constant Λ. This permits us to avoid the introduction of a vacuum energy as DE source by hand, alleviating the coincidence and fine tuning problems. We find fairly good cosmological constraints, by performing three tests with supernovae Ia (SNeIa), baryonic acoustic oscillation (BAO) and cosmic microwave background (CMB) measurements. Finally, we perform the Akaike information criterion (AIC) and Bayesian information criterion (BIC) selection criteria, showing that our model is statistically favored with respect to the Chevallier-Polarsky-Linder (CPL) parametrization.
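
    For reference, the two selection criteria named above, with k the number of free parameters, n the number of data points, and \hat{L} the maximized likelihood:

      \mathrm{AIC} = 2k - 2\ln\hat{L},
      \qquad
      \mathrm{BIC} = k\ln n - 2\ln\hat{L}.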

  1. Optimal firm growth under the threat of entry

    PubMed Central

    Kort, Peter M.; Wrzaczek, Stefan

    2015-01-01

    The paper studies the incumbent-entrant problem in a fully dynamic setting. We find that under an open-loop information structure the incumbent anticipates entry by overinvesting, whereas in the Markov perfect equilibrium the incumbent slightly underinvests in the period before the entry. The entry cost level where entry accommodation passes into entry deterrence is lower in the Markov perfect equilibrium. Further we find that the incumbent’s capital stock level needed to deter entry is hump shaped as a function of the entry time, whereas the corresponding entry cost, where the entrant is indifferent between entry and non-entry, is U-shaped. PMID:26435573

  2. Perfect fluidity of a dissipative system: Analytical solution for the Boltzmann equation in AdS2 ⊗ S2

    DOE PAGES

    Noronha, Jorge; Denicol, Gabriel S.

    2015-12-30

    In this paper we obtain an analytical solution of the relativistic Boltzmann equation under the relaxation time approximation that describes the out-of-equilibrium dynamics of a radially expanding massless gas. This solution is found by mapping this expanding system in flat spacetime to a static flow in the curved spacetime AdS2 ⊗ S2. We further derive explicit analytic expressions for the momentum dependence of the single-particle distribution function as well as for the spatial dependence of its moments. We find that this dissipative system has the ability to flow as a perfect fluid even though its entropy density does not match the equilibrium form. The nonequilibrium contribution to the entropy density is shown to be due to higher-order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation that can remain out of equilibrium but do not couple to the energy-momentum tensor of the system. Furthermore, in this system the slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, nonhydrodynamical microscopic degrees of freedom that lead to entropy production.

  3. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the results with an absorbing set are plotted from the differential equations and verified. Through forward inference, the reliability value of the control unit is determined in the different kinds of modes. Finally, weak nodes in the control unit are identified. PMID:29765629

  4. Combined LAURA-UPS hypersonic solution procedure

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Thompson, Richard A.

    1993-01-01

    A combined solution procedure for hypersonic flowfields around blunted slender bodies was implemented using a thin-layer Navier-Stokes code (LAURA) in the nose region and a parabolized Navier-Stokes code (UPS) on the afterbody region. Perfect gas, equilibrium air, and non-equilibrium air solutions for sharp cones and a sharp wedge were obtained using UPS alone as a preliminary step. Surface heating rates are presented for two slender bodies with blunted noses, with LAURA used to provide a starting solution to UPS downstream of the sonic line. These are an 8 deg sphere-cone in Mach 5, perfect gas, laminar flow at 0 and 4 deg angles of attack, and the Reentry F body at Mach 20, 80,000 ft equilibrium gas conditions for 0 and 0.14 deg angles of attack. The results indicate that this procedure is a timely and accurate method for obtaining aerothermodynamic predictions on slender hypersonic vehicles.

  5. Extracting a Whisper from the DIN: A Bayesian-Inductive Approach to Learning an Anticipatory Model of Cavitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercel, S.W.

    1999-11-07

    For several reasons, Bayesian parameter estimation is superior to other methods for inductively learning a model for an anticipatory system. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be removed from the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian methods approach this ideal limit of performance more closely than other methods. These capabilities provide a strategy for addressing a major unsolved problem in pump operation: the identification of precursors of cavitation. Cavitation causes immediate degradation of pump performance and ultimate destruction of the pump. However, the most efficient point to operate a pump is just below the threshold of cavitation. It might be hoped that a straightforward method to minimize pump cavitation damage would be to simply adjust the operating point until the inception of cavitation is detected and then to slightly readjust the operating point to let the cavitation vanish. However, due to the continuously evolving state of the fluid moving through the pump, the threshold of cavitation tends to wander. What is needed is to anticipate cavitation, and this requires the detection and identification of precursor features that occur just before cavitation starts.
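
    A quick Monte Carlo sketch of the square-root-of-N scaling mentioned above (a unit signal in unit-variance noise; all numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)
      signal = 1.0
      for n in (10, 100, 1000, 10000):
          # 5000 repeated experiments, each averaging n noisy samples
          estimates = (signal + rng.standard_normal((5000, n))).mean(axis=1)
          print(f"n = {n:5d}: empirical SNR = {signal / estimates.std():7.1f}")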

  6. A Catalog of NASA Special Publications

    DTIC Science & Technology

    1981-01-01

    [Garbled OCR fragment of the catalog listing. Recoverable entries concern equilibrium thermodynamic and transport properties of carbon dioxide and of carbon dioxide and nitrogen mixtures, Mollier charts for hydrogen from 300 K to 20,000 K, and charts for equilibrium flow properties of carbon dioxide in hypervelocity flows; availability via NTIS and GPO, 1964-1973.]

  7. Cournot competition between a non-profit firm and a for-profit firm with uncertainty

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.

    2010-03-01

    In this paper, we consider a Cournot competition between a nonprofit firm and a for-profit firm in a homogeneous goods market, with uncertain demand. Given an asymmetric tax schedule, we compute explicitly the Bayesian-Nash equilibrium. Furthermore, we analyze the effects of the tax rate and the degree of altruistic preference on market equilibrium outcomes.

  8. Identifiability of sorption parameters in stirred flow-through reactor experiments and their identification with a Bayesian approach.

    PubMed

    Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y

    2016-10-01

    This paper addresses the methodological conditions (particularly experimental design and statistical inference) ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as a nonequilibrium parameterization embedding the K_d approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because K_d,1 and k⁻ were diagnosed as collinear. For pairs of CSTR experiments, Bayesian inference allowed the correct models of sorption and error to be selected among the alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting of varying flow rates between experiments reaching equilibrium at the contamination stage were found optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to those of the maximum-likelihood method but avoided convergence problems; the marginal likelihood allowed all models to be compared, and credible intervals directly gave the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.
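
    One plausible reading of the EK forward model, as a sketch only: a CSTR mass balance with an instantaneous compartment (K_d,1) and a first-order kinetic compartment (rate k⁻ toward K_d,2 equilibrium). The parameterization, correspondence of symbols, and units are assumed, not taken from the paper.

      import numpy as np

      Q, V, m = 1.0, 10.0, 2.0            # flow rate, reactor volume, soil mass (assumed)
      kd1, kminus, kd2 = 0.5, 0.05, 0.3   # instantaneous Kd, kinetic rate, kinetic Kd
      c_in, dt, T = 1.0, 0.01, 200.0

      c, s = 0.0, 0.0                     # liquid concentration, kinetically sorbed ratio
      curve = []
      for _ in range(int(T / dt)):
          ds = kminus * (kd2 * c - s)                     # first-order kinetic sorption
          dc = (Q * (c_in - c) - m * ds) / (V + m * kd1)  # retarded by instantaneous sorption
          c, s = c + dt * dc, s + dt * ds
          curve.append(c)

      print("outlet concentration at end of contamination stage:", curve[-1])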

  9. Exact Solutions in Three-Dimensional Gravity

    NASA Astrophysics Data System (ADS)

    García-Díaz, Alberto A.

    2017-09-01

    Preface; 1. Introduction; 2. Point particles; 3. Dust solutions; 4. AdS cyclic symmetric stationary solutions; 5. Perfect fluid static stars; 6. Static perfect fluid stars with Λ; 7. Hydrodynamic equilibrium; 8. Stationary perfect fluid with Λ; 9. Friedmann–Robertson–Walker cosmologies; 10. Dilaton-inflaton FRW cosmologies; 11. Einstein–Maxwell solutions; 12. Nonlinear electrodynamics black hole; 13. Dilaton minimally coupled to gravity; 14. Dilaton non-minimally coupled to gravity; 15. Low energy 2+1 string gravity; 16. Topologically massive gravity; 17. Bianchi type spacetimes in TMG; 18. Petrov type N wave metrics; 19. Kundt spacetimes in TMG; 20. Cotton tensor in Riemannian spacetimes; References; Index.

  10. Quantum oligopoly

    NASA Astrophysics Data System (ADS)

    Lo, C. F.; Kiang, D.

    2003-12-01

    Based upon a modification of Li et al.'s "minimal" quantization rules (Phys. Lett. A 306 (2002) 73), we investigate the quantum version of the Cournot and Bertrand oligopoly. In the Cournot oligopoly, the profit of each of the N firms at the Nash equilibrium point rises monotonically with the measure of the quantum entanglement. Only at maximal entanglement, however, does the Nash equilibrium point coincide with the Pareto optimal point. In the Bertrand case, the Bertrand Paradox remains for finite entanglement (i.e., the perfectly competitive stage is reached for any N ≥ 2), whereas with maximal entanglement each of the N firms will still have a non-zero shared profit. Hence, the Bertrand Paradox is completely resolved. Furthermore, a perfectly competitive market is reached asymptotically for N → ∞ in both the Cournot and Bertrand oligopoly.

  11. Bayesian Analysis of the Power Spectrum of the Cosmic Microwave Background

    NASA Technical Reports Server (NTRS)

    Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.

    2005-01-01

    There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background. The sky, when viewed in the microwave, is very uniform, with a nearly perfect blackbody spectrum at 2.7 K. Very small amplitude brightness fluctuations (of one part in a million!) trace small density perturbations in the early universe (roughly 300,000 years after the Big Bang), which later grow through gravitational instability into the large-scale structure seen in redshift surveys... In this talk, I will discuss a Bayesian formulation of this problem, a Gibbs sampling approach to numerically sampling from the Bayesian posterior, and the application of this approach to the first-year data from the Wilkinson Microwave Anisotropy Probe. I will also comment on recent algorithmic developments for making this approach tractable for the even more massive data set to be returned from the Planck satellite.
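
    A toy version of the Gibbs sweep for a single "power spectrum" value C from data d = s + n over many modes (a scalar stand-in for one multipole; all numbers assumed, whereas the real sampler works with maps on the sphere):

      import numpy as np

      rng = np.random.default_rng(3)
      n_modes, true_C, noise_var = 50, 4.0, 1.0
      d = rng.standard_normal(n_modes) * np.sqrt(true_C + noise_var)   # simulated data

      C, samples = 1.0, []
      for _ in range(5000):
          # Step 1: signal given spectrum, s | C, d (Wiener mean plus fluctuation)
          var = 1.0 / (1.0 / C + 1.0 / noise_var)
          s = var * d / noise_var + np.sqrt(var) * rng.standard_normal(n_modes)
          # Step 2: spectrum given signal, C | s (inverse-gamma under a 1/C prior)
          C = (s @ s) / rng.chisquare(n_modes)
          samples.append(C)

      print("posterior mean of C:", np.mean(samples[1000:]))   # close to true_C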

  12. Bayesian Atmospheric Radiative Transfer (BART): Thermochemical Equilibrium Abundance (TEA) Code and Application to WASP-43b

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew

    2014-11-01

    We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
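
    A toy version of the Gibbs-energy-minimization idea for a single dissociation, 2H ⇌ H2, with made-up dimensionless free energies; TEA itself handles many species with an iterative Lagrangian scheme rather than a generic optimizer.

      import numpy as np
      from scipy.optimize import minimize

      g = np.array([0.0, -5.0])   # assumed g/RT for [H, H2] at fixed T and P
      b_H = 1.0                   # total elemental hydrogen abundance

      def gibbs(n):
          n = np.maximum(n, 1e-12)
          return np.sum(n * (g + np.log(n / n.sum())))   # ideal-mixture Gibbs energy

      cons = {"type": "eq", "fun": lambda n: n[0] + 2 * n[1] - b_H}   # H conservation
      res = minimize(gibbs, x0=[0.5, 0.25], bounds=[(1e-12, None)] * 2,
                     constraints=[cons])
      print("equilibrium mole numbers [H, H2]:", res.x)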

  13. Bertrand and Cournot oligopolies when rivals' costs are unknown

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.; Ferreira, Flávio

    2010-10-01

    We study Bertrand and Cournot oligopoly models with incomplete information about rivals' costs, where the uncertainty is given by a uniform distribution. We compute the Bayesian-Nash equilibrium of both games, the ex-ante expected profits and the ex-post profits of each firm. We see that, in the price competition, even though only one firm produces in equilibrium, all firms have a positive ex-ante expected profit.
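
    A sketch of the Cournot case under a textbook specification, P = a - Q with i.i.d. uniform private costs (the paper's exact demand and cost supports may differ), including a Monte Carlo check that the linear strategy is indeed a best response:

      import numpy as np

      a, cl, ch = 10.0, 1.0, 3.0      # assumed demand intercept and cost support
      Ec = 0.5 * (cl + ch)

      def q_star(c):
          # Linear Bayesian-Nash strategy: q_i(c_i) = (2a + E[c] - 3 c_i) / 6
          return (2 * a + Ec - 3 * c) / 6

      rng = np.random.default_rng(4)
      cj = rng.uniform(cl, ch, 100_000)   # rival's random cost draws
      ci = 2.0                            # our realized cost
      qs = np.linspace(0.5, 4.0, 701)
      eprofit = [np.mean((a - q - q_star(cj) - ci) * q) for q in qs]

      print("numeric best response:", qs[int(np.argmax(eprofit))],
            " analytic q*:", q_star(ci))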

  14. What Can Reinforcement Learning Teach Us About Non-Equilibrium Quantum Dynamics

    NASA Astrophysics Data System (ADS)

    Bukov, Marin; Day, Alexandre; Sels, Dries; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj

    Equilibrium thermodynamics and statistical physics are the building blocks of modern science and technology. Yet, our understanding of thermodynamic processes away from equilibrium is largely missing. In this talk, I will reveal the potential of what artificial intelligence can teach us about the complex behaviour of non-equilibrium systems. Specifically, I will discuss the problem of finding optimal drive protocols to prepare a desired target state in quantum mechanical systems by applying ideas from Reinforcement Learning [one can think of Reinforcement Learning as the study of how an agent (e.g. a robot) can learn and perfect a given policy through interactions with an environment.]. The driving protocols learnt by our agent suggest that the non-equilibrium world features possibilities easily defying intuition based on equilibrium physics.

  15. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).

  16. Computational Study of a McDonnell Douglas Single-Stage-to-Orbit Vehicle Concept for Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1996-01-01

    This paper presents the results of a computational flow analysis of the McDonnell Douglas single-stage-to-orbit vehicle concept designated as the 24U. This study was made to determine the aerodynamic characteristics of the vehicle with and without body flaps over an angle-of-attack range of 20-40 deg. Computations were made at a flight Mach number of 20 at 200,000 ft altitude with equilibrium air, and at a Mach number of 6 with CF4 gas. The software package FELISA (Finite Element Langley Imperial College Swansea Ames) was used for all the computations. The FELISA software consists of unstructured surface and volume grid generators, and inviscid flow solvers with (1) a perfect gas option for subsonic, transonic, and low supersonic speeds, and (2) perfect gas, equilibrium air, and CF4 options for hypersonic speeds. The hypersonic flow solvers with the equilibrium air and CF4 options were used in the present studies. Results are compared with other computational results and hypersonic CF4 tunnel test data.

  17. Perfect fluidity of a dissipative system: Analytical solution for the Boltzmann equation in AdS2 ⊗ S2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noronha, Jorge; Denicol, Gabriel S.

    In this paper we obtain an analytical solution of the relativistic Boltzmann equation under the relaxation time approximation that describes the out-of-equilibrium dynamics of a radially expanding massless gas. This solution is found by mapping this expanding system in flat spacetime to a static flow in the curved spacetime AdS2 ⊗ S2. We further derive explicit analytic expressions for the momentum dependence of the single-particle distribution function as well as for the spatial dependence of its moments. We find that this dissipative system has the ability to flow as a perfect fluid even though its entropy density does not match the equilibrium form. The nonequilibrium contribution to the entropy density is shown to be due to higher-order scalar moments (which possess no hydrodynamical interpretation) of the Boltzmann equation that can remain out of equilibrium but do not couple to the energy-momentum tensor of the system. Furthermore, in this system the slowly moving hydrodynamic degrees of freedom can exhibit true perfect fluidity while being totally decoupled from the fast moving, nonhydrodynamical microscopic degrees of freedom that lead to entropy production.

  18. Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations

    NASA Technical Reports Server (NTRS)

    Chiu, J. Christine; Petty, Grant W.

    2005-01-01

    This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, owing to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
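
    A one-channel sketch of a full-posterior retrieval (prior, forward model, observation noise and the saturating response are all assumed for illustration): the saturation of the forward model at high rain rates is exactly what weakens the observational constraint there, mirroring the bias mechanism described above.

      import numpy as np

      R = np.linspace(0.0, 30.0, 601)               # rain-rate grid (mm/h)
      prior = np.exp(-R / 3.0)
      prior /= np.trapz(prior, R)                   # assumed climatological prior

      def tb_model(r):
          # Assumed forward model: brightness temperature saturates at high rain rate
          return 180.0 + 100.0 * (1.0 - np.exp(-r / 8.0))

      tb_obs, sigma = tb_model(12.0) + 2.0, 3.0     # noisy observation of a 12 mm/h scene
      like = np.exp(-0.5 * ((tb_obs - tb_model(R)) / sigma) ** 2)
      post = prior * like
      post /= np.trapz(post, R)

      print("posterior mean rain rate:", np.trapz(R * post, R), "mm/h")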

  19. Progress Toward an Efficient and General CFD Tool for Propulsion Design/Analysis

    NASA Technical Reports Server (NTRS)

    Cox, C. F.; Cinnella, P.; Westmoreland, S.

    1996-01-01

    The simulation of propulsive flows inherently involves chemical activity. Recent years have seen substantial strides made in the development of numerical schemes for reacting flowfields, in particular those involving finite-rate chemistry. However, finite-rate calculations are computationally intensive and require knowledge of the actual kinetics, which are not always known with sufficient accuracy. Alternatively, flow simulations based on the assumption of local chemical equilibrium are capable of obtaining physically reasonable results at far less computational cost. The present study summarizes the development of efficient numerical techniques for the simulation of flows in local chemical equilibrium, whereby a 'black box' chemical equilibrium solver is coupled to the usual gasdynamic equations. The generalization of the methods enables the modelling of any arbitrary mixture of thermally perfect gases, including air, combustion mixtures and plasmas. As a demonstration of the potential of these methodologies, several solutions involving reacting and perfect-gas flows are presented, including a preliminary simulation of the SSME startup transient. Future enhancements to the proposed techniques are discussed, including more efficient finite-rate and hybrid (partial equilibrium) schemes. The algorithms that have been developed and are being optimized provide an efficient and general tool for the design and analysis of propulsion systems.

  20. Equilibrium points of the tilted perfect fluid Bianchi VIh state space

    NASA Astrophysics Data System (ADS)

    Apostolopoulos, Pantelis S.

    2005-05-01

    We present the full set of evolution equations for the spatially homogeneous cosmologies of type VIh filled with a tilted perfect fluid, and we provide the corresponding equilibrium points of the resulting dynamical state space. It is found that a self-similar solution exists only when the group parameter satisfies h > -1. In particular, we show that for h > -1/9 there exists a self-similar equilibrium point provided that γ ∈ (2(3+√(-h))/(5+3√(-h)), 3/2), whereas for h < -1/9 the state parameter belongs to the interval γ ∈ (1, 2(3+√(-h))/(5+3√(-h))). This family of new exact self-similar solutions belongs to the subclass n^α_α = 0 and has non-zero vorticity. In both cases the equilibrium points have a six-dimensional stable manifold and may act as future attractors, at least for the models satisfying n^α_α = 0. We also give the exact form of the self-similar metrics in terms of the state and group parameters. As an illustrative example we provide the explicit form of the corresponding self-similar radiation model (γ = 4/3), parametrised by the group parameter h. Finally, we show that there are no tilted self-similar models of type III and no irrotational models of type VIh.

  1. Upwind MacCormack Euler solver with non-equilibrium chemistry

    NASA Technical Reports Server (NTRS)

    Sherer, Scott E.; Scott, James N.

    1993-01-01

    A computer code, designated UMPIRE, is currently under development to solve the Euler equations in two dimensions with non-equilibrium chemistry. UMPIRE employs an explicit MacCormack algorithm with dissipation introduced via Roe's flux-difference split upwind method. The code also has the capability to employ a point-implicit methodology for flows where stiffness is introduced through the chemical source term. A technique consisting of diagonal sweeps across the computational domain from each corner is presented, which is used to reduce storage and execution requirements. Results depicting one-dimensional shock-tube flow for both a calorically perfect gas and thermally perfect, dissociating nitrogen are presented to verify the current capabilities of the program. Also, computational results from a chemical reactor vessel with no fluid dynamic effects are presented to check the chemistry capability and to verify the point-implicit strategy.

  2. Modified SEAGULL

    NASA Technical Reports Server (NTRS)

    Salas, M. D.; Kuehn, M. S.

    1994-01-01

    Original version of program incorporated into program SRGULL (LEW-15093) for use on National Aero-Space Plane project, its duty being to model forebody, inlet, and nozzle portions of vehicle. However, real-gas chemistry effects in hypersonic flow fields limited accuracy of that version, because it assumed perfect-gas properties. As a result, SEAGULL modified according to real-gas equilibrium-chemistry methodology. This program analyzes two-dimensional, hypersonic flows of real gases. Modified version of SEAGULL maintains as much of original program as possible, and retains ability to execute original perfect-gas version.

  3. The Physics of Protein Crystallization

    NASA Technical Reports Server (NTRS)

    Vekilov, P. G.; Chernov, A. A.

    2002-01-01

    This paper reviews recent research on protein crystal properties, nucleation, growth and perfection. The mechanical properties of crystals built of molecules much larger than the range of the intermolecular forces are very different from those of conventional crystals. Similar scaling is responsible for the specificity of phase equilibrium in macromolecular systems, whose thermodynamics is discussed. Peculiarities and similarities of nucleation and growth in protein solutions, as compared to inorganic solutions, are addressed. Hypotheses on why and when microgravity (lack of convection) conditions may result in more perfect crystals are discussed.

  4. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of the equilibrium condition and of fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
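
    A linear-Gaussian GP inversion sketch with a stationary covariance (a simplification of the paper's non-stationary GP) and a toy one-dimensional line-integral geometry; the analytic posterior mean and covariance below are the standard conjugate formulas.

      import numpy as np

      rng = np.random.default_rng(5)
      n, m = 60, 12                                # grid points, line-of-sight chords
      xs = np.linspace(0.0, 1.0, n)
      K = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / 0.1) ** 2)   # GP prior covariance

      A = np.zeros((m, n))                         # toy chord geometry: averaged bands
      for i in range(m):
          lo = rng.integers(0, n - 10)
          A[i, lo:lo + 10] = 1.0 / 10

      x_true = np.exp(-0.5 * ((xs - 0.5) / 0.15) ** 2)   # synthetic emissivity profile
      sigma = 0.01
      y = A @ x_true + sigma * rng.standard_normal(m)    # noisy line integrals

      S = A @ K @ A.T + sigma ** 2 * np.eye(m)
      mean = K @ A.T @ np.linalg.solve(S, y)             # posterior mean
      cov = K - K @ A.T @ np.linalg.solve(S, A @ K)      # posterior covariance
      print("max error:", np.abs(mean - x_true).max(),
            " mean 1-sigma:", np.sqrt(np.diag(cov).clip(0)).mean())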

  5. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian-based non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of the equilibrium condition and of fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability for uncertainty analysis; consequently, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  6. General Criterion for Harmonicity

    NASA Astrophysics Data System (ADS)

    Proesmans, Karel; Vandebroek, Hans; Van den Broeck, Christian

    2017-10-01

    Inspired by Kubo-Anderson Markov processes, we introduce a new class of transfer matrices whose largest eigenvalue is determined by a simple explicit algebraic equation. Applications include the free energy calculation for various equilibrium systems and a general criterion for perfect harmonicity, i.e., a free energy that is exactly quadratic in the external field. As an illustration, we construct a "perfect spring," namely, a polymer with non-Gaussian, exponentially distributed subunits which, nevertheless, remains harmonic until it is fully stretched. This surprising discovery is confirmed by Monte Carlo and Langevin simulations.

  7. Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Biondi, D.; De Luca, D. L.

    2013-02-01

    The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of quantifying total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors: the former evaluates the propagation of input uncertainty onto simulated river discharge, while the latter computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were assumed, respectively, for rainfall and hydrological response simulations. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with verification tools suited for probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the quality of the entire time-varying predictive distributions: calibration, sharpness, and accuracy, together with the continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses resulting from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, so as to arrive at a multifaceted view of the attributes of the prediction. For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor in compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.
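
    Among the scalar metrics named above, the CRPS has a convenient closed form when the predictive distribution is Gaussian. A minimal sketch (the Gaussian assumption and the flow values are ours, purely for illustration):

    ```python
    import numpy as np
    from scipy.stats import norm

    def crps_gaussian(y, mu, sigma):
        """Closed-form CRPS for a Gaussian predictive distribution N(mu, sigma^2).
        Lower is better; 0 would be a perfectly sharp, perfectly calibrated forecast."""
        z = (y - mu) / sigma
        return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                        + 2.0 * norm.pdf(z)
                        - 1.0 / np.sqrt(np.pi))

    # Toy check: an observed discharge of 120 m^3/s against two forecasts with the
    # same mean but different sharpness -- the sharper forecast scores better here.
    print(crps_gaussian(120.0, mu=110.0, sigma=15.0))  # ~6.0
    print(crps_gaussian(120.0, mu=110.0, sigma=40.0))  # ~10.3
    ```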

  8. Local stability condition of the equilibrium of an oligopoly market with bounded rationality adjustment

    NASA Astrophysics Data System (ADS)

    Ibrahim, Adyda; Saaban, Azizan; Zaibidi, Nerda Zura

    2017-11-01

    This paper considers an n-firm oligopoly market where each firm produces a single homogeneous product at a constant unit cost. Nonlinearity is introduced into the model of this oligopoly market by assuming that the market has an isoelastic demand function. Furthermore, instead of the usual assumption of perfectly rational firms, the firms are assumed to be boundedly rational in adjusting their outputs at each period. The equilibrium of this n-dimensional discrete system is obtained, and its local stability is analyzed.
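
    As a sketch of what such a model typically looks like (the gradient-based adjustment rule and the isoelastic demand p = 1/Q below are a common specification in this literature; the parameter values are ours, not the paper's):

    ```python
    import numpy as np

    def simulate(n=3, alpha=0.35, c=0.5, T=200, seed=1):
        """Boundedly rational output adjustment for an n-firm oligopoly with
        isoelastic demand p = 1/Q and constant unit cost c:
            q_i(t+1) = q_i(t) + alpha * q_i(t) * d(pi_i)/d(q_i).
        """
        rng = np.random.default_rng(seed)
        q = rng.uniform(0.1, 0.3, n)          # arbitrary initial outputs
        history = [q.copy()]
        for _ in range(T):
            Q = q.sum()
            grad = (Q - q) / Q**2 - c         # marginal profit under p = 1/Q
            q = q + alpha * q * grad          # bounded-rationality adjustment
            history.append(q.copy())
        return np.array(history)

    traj = simulate()
    print("final outputs:", traj[-1])   # ~ (n-1)/(n^2 c) = 0.444... when stable
    ```

    For identical unit costs, the fixed point of this map is the Cournot-Nash output q* = (n - 1)/(n^2 c); whether the adjustment converges to it, cycles, or turns chaotic depends on the speed parameter alpha, which is exactly the kind of local stability question the paper addresses.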

  9. A probable probability distribution of a series nonequilibrium states in a simple system out of equilibrium

    NASA Astrophysics Data System (ADS)

    Gao, Haixia; Li, Ting; Xiao, Changming

    2016-05-01

    When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, and in this process it passes through a series of nonequilibrium states. With the assistance of Bayesian statistics and the hyperensemble, a probable probability distribution over these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The state with the largest probability is the equilibrium state, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability will be; the same conclusion also holds in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state can persist, then the velocity with which a nonequilibrium state returns to equilibrium can also be determined through the reciprocal of the derivative of this probability. It tells us that the farther the state is from equilibrium, the faster the returning velocity will be; if the system is near its equilibrium state, the velocity tends to become smaller and smaller, finally tending to 0 when the system reaches the equilibrium state.

  10. Bertrand Model Under Incomplete Information

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernanda A.; Pinto, Alberto A.

    2008-09-01

    We consider a Bertrand duopoly model with unknown costs. Each firm aims to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
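
    A toy illustration of computing a Bayesian Nash equilibrium in prices by iterated best responses. Every concrete ingredient here is an assumption made for the sketch (a differentiated-products linear demand, two cost types per firm, symmetric type-contingent strategies); the paper's actual model may differ:

    ```python
    import numpy as np

    # Assumed linear differentiated demand q_i = a - p_i + b * p_j; each firm's
    # unit cost is c_lo with probability theta and c_hi otherwise (all invented).
    a, b = 10.0, 0.5
    c_lo, c_hi, theta = 2.0, 4.0, 0.6
    grid = np.linspace(0.0, 12.0, 481)

    def best_response(p_opp_lo, p_opp_hi, c):
        """Price maximizing expected profit of a type-c firm against an opponent
        playing p_opp_lo with prob. theta and p_opp_hi with prob. 1 - theta."""
        exp_pj = theta * p_opp_lo + (1.0 - theta) * p_opp_hi
        profit = (grid - c) * (a - grid + b * exp_pj)
        return grid[np.argmax(profit)]

    # Iterated best responses converge to the symmetric Bayesian Nash equilibrium,
    # since the best-response map is a contraction here (slope b/2 < 1).
    p_lo, p_hi = 5.0, 5.0
    for _ in range(200):
        p_lo, p_hi = best_response(p_lo, p_hi, c_lo), best_response(p_lo, p_hi, c_hi)
    print("type-contingent equilibrium prices:", p_lo, p_hi)
    ```

    In this linear specification the two type-contingent prices differ by half the cost gap, (c_hi - c_lo)/2, so a larger cost variance spreads the equilibrium prices further apart.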

  11. Impact of petrophysical uncertainty on Bayesian hydrogeophysical inversion and model selection

    NASA Astrophysics Data System (ADS)

    Brunetti, Carlotta; Linde, Niklas

    2018-01-01

    Quantitative hydrogeophysical studies rely heavily on petrophysical relationships that link geophysical properties to hydrogeological properties and state variables. Coupled inversion studies are frequently based on the questionable assumption that these relationships are perfect (i.e., no scatter). Using synthetic examples and crosshole ground-penetrating radar (GPR) data from the South Oyster Bacterial Transport Site in Virginia, USA, we investigate the impact of spatially-correlated petrophysical uncertainty on inferred posterior porosity and hydraulic conductivity distributions and on Bayes factors used in Bayesian model selection. Our study shows that accounting for petrophysical uncertainty in the inversion (I) decreases bias of the inferred variance of hydrogeological subsurface properties, (II) provides more realistic uncertainty assessment and (III) reduces the overconfidence in the ability of geophysical data to falsify conceptual hydrogeological models.

  12. Game-theoretic equilibrium analysis applications to deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Joung, Manho

    This dissertation examines game-theoretic equilibrium analysis applications to deregulated electricity markets. In particular, three specific applications are discussed: analyzing the competitive effects of ownership of financial transmission rights, developing a dynamic game model considering the ramp rate constraints of generators, and analyzing strategic behavior in electricity capacity markets. In the financial transmission right application, an investigation is made of how generators' ownership of financial transmission rights may influence the effects of the transmission lines on competition. In the second application, the ramp rate constraints of generators are explicitly modeled using a dynamic game framework, and the equilibrium is characterized as the Markov perfect equilibrium. Finally, the strategic behavior of market participants in electricity capacity markets is analyzed and it is shown that the market participants may exaggerate their available capacity in a Nash equilibrium. It is also shown that the more conservative the independent system operator's capacity procurement, the higher the risk of exaggerated capacity offers.

  13. Laminar or turbulent boundary-layer flows of perfect gases or reacting gas mixtures in chemical equilibrium

    NASA Technical Reports Server (NTRS)

    Anderson, E. C.; Lewis, C. H.

    1971-01-01

    Turbulent boundary layer flows of non-reacting gases are predicted for both internal (nozzle) and external flows. Effects of favorable pressure gradients on two eddy viscosity models were studied in rocket and hypervelocity wind tunnel flows. Nozzle flows of equilibrium air with stagnation temperatures up to 10,000 K were computed. Predictions of equilibrium nitrogen flows through hypervelocity nozzles were compared with experimental data. A slender spherically blunted cone was studied at 70,000 ft altitude and 19,000 ft/sec in the earth's atmosphere. Comparisons with available experimental data showed good agreement. A computer program was developed and fully documented during this investigation for use by interested individuals.

  14. Characterizing the Nash equilibria of three-player Bayesian quantum games

    NASA Astrophysics Data System (ADS)

    Solmeyer, Neal; Balu, Radhakrishnan

    2017-05-01

    Quantum games with incomplete information can be studied within a Bayesian framework. We analyze games quantized within the EWL framework [Eisert, Wilkens, and Lewenstein, Phys. Rev. Lett. 83, 3077 (1999)]. We solve for the Nash equilibria of a variety of two-player quantum games and compare the results to the solutions of the corresponding classical games. We then analyze Bayesian games where there is uncertainty about the player types in two-player conflicting interest games. The solutions to the Bayesian games are found to have a phase-diagram-like structure where different equilibria exist in different parameter regions, depending both on the amount of uncertainty and the degree of entanglement. We find that in games where a Pareto-optimal solution is not a Nash equilibrium, it is possible for the quantized game to have an advantage over the classical version. In addition, we analyze the behavior of the solutions as the strategy choices approach an unrestricted operation. We find that some games have a continuum of solutions, bounded by the solutions of a simpler restricted game. A deeper understanding of Bayesian quantum game theory could lead to novel quantum applications in a multi-agent setting.

  15. Capacity choice in a large market.

    PubMed

    Godenhielm, Mats; Kultti, Klaus

    2014-01-01

    We analyze endogenous capacity formation in a large frictional market with perfectly divisible goods. Each seller posts a price and decides on a capacity. The buyers base their decision of which seller to visit on both characteristics. In this setting we determine the conditions for the existence and uniqueness of a symmetric equilibrium. When capacity is unobservable, there exists a continuum of equilibria. We show that the "best" of these equilibria leads to the same seller capacities and the same number of trades as the symmetric equilibrium under observable capacity.

  16. Atomic kinetic energy, momentum distribution, and structure of solid neon at zero temperature

    NASA Astrophysics Data System (ADS)

    Cazorla, C.; Boronat, J.

    2008-01-01

    We report on the calculation of the ground-state atomic kinetic energy Ek and momentum distribution of solid Ne by means of the diffusion Monte Carlo method and the Aziz HFD-B pair potential. This approach is shown to perform notably well for this crystal, since we obtain very good agreement with experimental thermodynamic data. Additionally, we study the structural properties of solid Ne at densities near equilibrium by estimating the radial pair-distribution function, Lindemann's ratio, and the atomic density profile around the positions of the perfect crystalline lattice. Our value for Ek at the equilibrium density is 41.51(6) K, which agrees perfectly with the recent prediction made by Timms et al., 41(2) K, based on their deep-inelastic neutron scattering experiments carried out over the temperature range 4-20 K, and also with previous path-integral Monte Carlo results obtained with the Lennard-Jones and Aziz HFD-C2 atomic pairwise interactions. The one-body density function of solid Ne is calculated accurately and found to fit perfectly, within statistical uncertainty, to a Gaussian curve. Furthermore, we analyze the degree of anharmonicity of solid Ne by calculating some of its microscopic ground-state properties within traditional harmonic approaches. We provide an insightful comparison to solid ⁴He in terms of the Debye model in order to assess the relevance of anharmonic effects in Ne.

  17. Bayesian tomography and integrated data analysis in fusion diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dong, E-mail: lid@swip.ac.cn; Dong, Y. B.; Deng, Wei

    2016-11-15

    In this article, a Bayesian tomography method using a non-stationary Gaussian process prior has been introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of a posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data are reasonably within the assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.

  18. Evaporation in Capillary Porous Media at the Perfect Piston-Like Invasion Limit: Evidence of Nonlocal Equilibrium Effects

    NASA Astrophysics Data System (ADS)

    Attari Moghaddam, Alireza; Prat, Marc; Tsotsas, Evangelos; Kharaghani, Abdolreza

    2017-12-01

    The classical continuum modeling of evaporation in capillary porous media is revisited from pore network simulations of the evaporation process. The computed moisture diffusivity is characterized by a minimum corresponding to the transition between liquid and vapor transport mechanisms, confirming previous interpretations. The study also suggests an explanation for the scatter generally observed in moisture diffusivities obtained from experimental data. The pore network simulations indicate a noticeable nonlocal equilibrium effect, leading to a new interpretation of the vapor pressure-saturation relationship classically introduced to obtain the one-equation continuum model of evaporation. The latter should not be understood as a desorption isotherm, as classically considered, but rather as a signature of a nonlocal equilibrium effect. The main outcome of this study is therefore that a nonlocal equilibrium two-equation model must be considered for improving the continuum modeling of evaporation.

  19. On subgame perfect equilibria in quantum Stackelberg duopoly

    NASA Astrophysics Data System (ADS)

    Frąckiewicz, Piotr; Pykacz, Jarosław

    2018-02-01

    Our purpose is to study the Stackelberg duopoly with the use of the Li-Du-Massar quantum duopoly scheme. The result of Lo and Kiang showed that the correlation of players' quantities caused by quantum entanglement enlarges the first-mover advantage in the quantum Stackelberg duopoly. However, the interval of entanglement parameters for which this result is valid is bounded from above. It has been an open question what the equilibrium result is above the upper bound, in particular when the entanglement parameter goes to infinity. Our work provides a complete analysis of the subgame perfect equilibria of the game for all values of the entanglement parameter.

  20. Swirling flow of a dissociated gas

    NASA Technical Reports Server (NTRS)

    Wolfram, W. R., Jr.; Walker, W. F.

    1975-01-01

    Most physical applications of the swirling flow, defined as a vortex superimposed on an axial flow in the nozzle, involve high temperatures and the possibility of real gas effects. The generalized one-dimensional swirling flow in a converging-diverging nozzle is analyzed for equilibrium and frozen dissociation using the ideal dissociating gas model. Numerical results are provided to illustrate the major effects and to compare with results obtained for a perfect gas with constant ratio of specific heats. It is found that, even in the case of real gases, perfect gas calculations can give a good estimate of the reduction in mass flow due to swirl.

  1. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation analyses allow us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation analyses to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. Given perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
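
    A minimal sketch of the kind of Bayesian analysis described, assuming a flat prior, a brute-force grid approximation to the posterior, and Type-I right censoring of a simulated Weibull life test (all of these choices are ours, for illustration):

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(42)

    # Simulated life test: true shape 1.8, true scale 100, censored at t = 120.
    t_true = weibull_min.rvs(1.8, scale=100.0, size=40, random_state=rng)
    censor = 120.0
    obs = np.minimum(t_true, censor)
    failed = t_true <= censor

    def loglik(k, lam):
        """Weibull log-likelihood with right censoring: density for failures,
        survival function for censored units."""
        z = (obs / lam) ** k
        return np.sum(np.where(failed,
                               np.log(k / lam) + (k - 1) * np.log(obs / lam) - z,
                               -z))

    # Grid posterior over (shape, scale) under a flat prior.
    ks = np.linspace(0.5, 4.0, 200)
    lams = np.linspace(50.0, 200.0, 200)
    logpost = np.array([[loglik(k, lam) for lam in lams] for k in ks])
    post = np.exp(logpost - logpost.max())
    post /= post.sum()

    K, L = np.meshgrid(ks, lams, indexing="ij")
    print(f"posterior means: shape ~ {(K * post).sum():.2f}, "
          f"scale ~ {(L * post).sum():.1f}")
    ```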

  2. Retiring the Short-Run Aggregate Supply Curve

    ERIC Educational Resources Information Center

    Elwood, S. Kirk

    2010-01-01

    The author argues that the aggregate demand/aggregate supply (AD/AS) model is significantly improved--although certainly not perfected--by trimming it of the short-run aggregate supply (SRAS) curve. Problems with the SRAS curve are shown first for the AD/AS model that casts the AD curve as identifying the equilibrium level of output associated…

  3. Structure And Efficiency Of Timber Markets

    Treesearch

    Brian C. Murray; Jeffrey P. Prestemon

    2003-01-01

    Perfect competition has long been the standard by which economists have judged the market's ability to achieve an efficient social outcome. The competitive process, unfettered by the imperfections discussed below, forges an outcome in which goods and services are produced at their lowest possible cost, and market equilibrium is achieved at the point at which the...

  4. N-player stochastic differential games

    NASA Technical Reports Server (NTRS)

    Varaiya, P.

    1976-01-01

    The paper presents conditions which guarantee that the control strategies adopted by N players constitute an efficient solution, an equilibrium, or a core solution. The system dynamics are described by an Ito equation, and all players have perfect information. When the set of instantaneous joint costs and velocity vectors is convex, the conditions are necessary.

  5. Plasma equilibrium control during slow plasma current quench with avoidance of plasma-wall interaction in JT-60U

    NASA Astrophysics Data System (ADS)

    Yoshino, R.; Nakamura, Y.; Neyatani, Y.

    1997-08-01

    In JT-60U a vertical displacement event (VDE) is observed during slow plasma current quench (Ip quench) for a vertically elongated divertor plasma with a single null. The VDE is generated by an error in the feedback control of the vertical position of the plasma current centre (ZJ). It has been perfectly avoided by improving the accuracy of the ZJ measurement in real time. Furthermore, plasma-wall interaction has been avoided successfully during slow Ip quench owing to the good performance of the plasma equilibrium control system.

  6. Perfection and Entry: An Example,

    DTIC Science & Technology

    1983-07-01

    H([p,a]) = (1 - δ) Σ_{t=1}^{∞} δ^{t-1} h(p(t), a(t)). The stationary equilibrium outcomes of such a game are those strategy... does not reduce the set of equilibrium outcomes in the discounted game. The essential features of the market situation required to produce the phe...

  7. Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
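
    A toy version of the two directions the abstract describes, forward propagation of input uncertainty and assimilation of an observation, using an invented two-node discrete network (the bins and the conditional probability table are made up):

    ```python
    import numpy as np

    # Offshore wave height H_in -> surf-zone wave height H_out, each with
    # three coarse states {low, mid, high}; CPT values are illustrative only.
    p_in = np.array([0.2, 0.5, 0.3])           # uncertain boundary condition
    cpt = np.array([[0.8, 0.2, 0.0],           # P(H_out | H_in = low)
                    [0.1, 0.7, 0.2],           # P(H_out | H_in = mid)
                    [0.0, 0.3, 0.7]])          # P(H_out | H_in = high)

    # Forward prediction: input uncertainty is transferred to the output.
    p_out = p_in @ cpt
    print("P(H_out):", p_out)

    # Assimilation: observing H_out = high updates belief about the input.
    likelihood = cpt[:, 2]
    p_in_post = p_in * likelihood / (p_in * likelihood).sum()
    print("P(H_in | H_out = high):", p_in_post)
    ```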

  8. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    NASA Astrophysics Data System (ADS)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectrum of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.
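
    A compact sketch of a two-class Bayes classifier on the amplitude ratio: the class centers follow the 1.4:1 and 4.0:1 ratios quoted above, while the log-normal spreads and the equal priors are our assumptions (the paper's Bayesian model-based approach is more elaborate):

    ```python
    import numpy as np
    from scipy.stats import norm

    # Class-conditional models for the log of the 422 nm / 530 nm amplitude ratio.
    mu_pig, mu_vas = np.log(1.4), np.log(4.0)   # centers from the abstract
    sd, prior_pig = 0.25, 0.5                    # assumed spread and prior

    def p_pigmented(ratio):
        """Posterior probability that a lesion is pigmented, given its ratio."""
        lp = norm.pdf(np.log(ratio), mu_pig, sd) * prior_pig
        lv = norm.pdf(np.log(ratio), mu_vas, sd) * (1.0 - prior_pig)
        return lp / (lp + lv)

    for r in (1.3, 2.2, 3.8):
        print(f"ratio {r}: P(pigmented) = {p_pigmented(r):.3f}")
    ```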

  9. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.

  10. N-Player Stochastic Differential Games. [control theory

    NASA Technical Reports Server (NTRS)

    Varaiya, P.

    1974-01-01

    Conditions are described which guarantee that the control strategies adopted by N players constitute an efficient solution, an equilibrium, or a core solution. The system dynamics are described by an Ito equation, and all players have perfect information. It was found that when the set of instantaneous joint costs and velocity vectors is convex, the conditions are necessary.

  11. Vorticity interaction effects on blunt bodies. [hypersonic viscous shock layers

    NASA Technical Reports Server (NTRS)

    Anderson, E. C.; Wilcox, D. C.

    1977-01-01

    Numerical solutions of the viscous shock layer equations governing laminar and turbulent flows of a perfect gas and radiating and nonradiating mixtures of perfect gases in chemical equilibrium are presented for hypersonic flow over spherically blunted cones and hyperboloids. Turbulent properties are described in terms of the classical mixing length. Results are compared with boundary layer and inviscid flowfield solutions; agreement with inviscid flowfield data is satisfactory. Agreement with boundary layer solutions is good except in regions of strong vorticity interaction; in these flow regions, the viscous shock layer solutions appear to be more satisfactory than the boundary layer solutions. Boundary conditions suitable for hypersonic viscous shock layers are devised for an advanced turbulence theory.

  12. Buckling of circular cylindrical shells under dynamically applied axial loads

    NASA Technical Reports Server (NTRS)

    Tulk, J. D.

    1972-01-01

    A theoretical and experimental study was made of the buckling characteristics of perfect and imperfect circular cylindrical shells subjected to dynamic axial loading. Experimental data included dynamic buckling loads (124 data points), high-speed photographs of buckling mode shapes, and observations of the dynamic stability of shells subjected to rapidly applied sub-critical loads. A mathematical model was developed to describe the dynamic behavior of perfect and imperfect shells. This model was based on the Donnell-von Kármán compatibility and equilibrium equations and had a wall deflection function incorporating five separate modes of deflection. Close agreement between theory and experiment was found for both dynamic buckling strength and buckling mode shapes.

  13. Analysis of a two-dimensional type 6 shock-interference pattern using a perfect-gas code and a real-gas code

    NASA Technical Reports Server (NTRS)

    Bertin, J. J.; Graumann, B. W.

    1973-01-01

    Numerical codes were developed to calculate the two dimensional flow field which results when supersonic flow encounters double wedge configurations whose angles are such that a type 4 pattern occurs. The flow field model included the shock interaction phenomena for a delta wing orbiter. Two numerical codes were developed, one which used the perfect gas relations and a second which incorporated a Mollier table to define equilibrium air properties. The two codes were used to generate theoretical surface pressure and heat transfer distributions for velocities from 3,821 feet per second to an entry condition of 25,000 feet per second.

  14. Transport diffusion in deformed carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Feng, Jiamei; Chen, Peirong; Zheng, Dongqin; Zhong, Weirong

    2018-03-01

    Using non-equilibrium molecular dynamics and Monte Carlo methods, we have studied the transport diffusion of gas in deformed carbon nanotubes. A perfect carbon nanotube and various deformed carbon nanotubes are modeled as transport channels. It is found that the transport diffusion coefficient of the gas does not change in twisted carbon nanotubes, but does change in XY-distortion, Z-distortion, and local-defect carbon nanotubes compared with that of the perfect carbon nanotube. Furthermore, the change in the transport diffusion coefficient is found to be associated with the deformation factor. The relationship between the transport diffusion coefficient and temperature is also discussed in this paper. Our results may contribute to understanding the mechanism of molecular transport in nano-channels.

  15. Selfish routing equilibrium in stochastic traffic network: A probability-dominant description.

    PubMed

    Zhang, Wenyi; He, Zhengbing; Guan, Wei; Ma, Rui

    2017-01-01

    This paper suggests a probability-dominant user equilibrium (PdUE) model to describe the selfish routing equilibrium in a stochastic traffic network. At PdUE, travel demands are only assigned to the most dominant routes in the same origin-destination pair. A probability-dominant rerouting dynamic model is proposed to explain the behavioral mechanism of PdUE. To facilitate applications, the logit formula of PdUE is developed, for which a well-designed route set is not indispensable and whose equivalent variational inequality formulation is simple. Two routing strategies, i.e., the probability-dominant strategy (PDS) and the dominant probability strategy (DPS), are discussed through a hypothetical experiment. It is found that, whether out of insurance or striving for perfection, PDS is a better choice than DPS. For more general cases, the conducted numerical tests lead to the same conclusion. These results imply that PdUE (rather than the conventional stochastic user equilibrium) is a desirable selfish routing equilibrium for a stochastic network, given that the probability distributions of travel time are available to travelers.

  16. Selfish routing equilibrium in stochastic traffic network: A probability-dominant description

    PubMed Central

    Zhang, Wenyi; Guan, Wei; Ma, Rui

    2017-01-01

    This paper suggests a probability-dominant user equilibrium (PdUE) model to describe the selfish routing equilibrium in a stochastic traffic network. At PdUE, travel demands are only assigned to the most dominant routes in the same origin-destination pair. A probability-dominant rerouting dynamic model is proposed to explain the behavioral mechanism of PdUE. To facilitate applications, the logit formula of PdUE is developed, for which a well-designed route set is not indispensable and whose equivalent variational inequality formulation is simple. Two routing strategies, i.e., the probability-dominant strategy (PDS) and the dominant probability strategy (DPS), are discussed through a hypothetical experiment. It is found that, whether out of insurance or striving for perfection, PDS is a better choice than DPS. For more general cases, the conducted numerical tests lead to the same conclusion. These results imply that PdUE (rather than the conventional stochastic user equilibrium) is a desirable selfish routing equilibrium for a stochastic network, given that the probability distributions of travel time are available to travelers. PMID:28829834

  17. Gaussian process tomography for soft x-ray spectroscopy at WEST without equilibrium information

    NASA Astrophysics Data System (ADS)

    Wang, T.; Mazon, D.; Svensson, J.; Li, D.; Jardin, A.; Verdoolaege, G.

    2018-06-01

    Gaussian process tomography (GPT) is a recently developed tomography method based on the Bayesian probability theory [J. Svensson, JET Internal Report EFDA-JET-PR(11)24, 2011 and Li et al., Rev. Sci. Instrum. 84, 083506 (2013)]. By modeling the soft X-ray (SXR) emissivity field in a poloidal cross section as a Gaussian process, the Bayesian SXR tomography can be carried out in a robust and extremely fast way. Owing to the short execution time of the algorithm, GPT is an important candidate for providing real-time reconstructions with a view to impurity transport and fast magnetohydrodynamic control. In addition, the Bayesian formalism allows quantifying uncertainty on the inferred parameters. In this paper, the GPT technique is validated using a synthetic data set expected from the WEST tokamak, and the results are shown of its application to the reconstruction of SXR emissivity profiles measured on Tore Supra. The method is compared with the standard algorithm based on minimization of the Fisher information.

  18. Forebody and base region real gas flow in severe planetary entry by a factored implicit numerical method. II - Equilibrium reactive gas

    NASA Technical Reports Server (NTRS)

    Davy, W. C.; Green, M. J.; Lombard, C. K.

    1981-01-01

    The factored-implicit, gas-dynamic algorithm has been adapted to the numerical simulation of equilibrium reactive flows. Changes required in the perfect gas version of the algorithm are developed, and the method of coupling gas-dynamic and chemistry variables is discussed. A flow-field solution that approximates a Jovian entry case was obtained by this method and compared with the same solution obtained by HYVIS, a computer program much used for the study of planetary entry. Comparison of surface pressure distribution and stagnation line shock-layer profiles indicates that the two solutions agree well.

  19. Numerical simulation of air hypersonic flows with equilibrium chemical reactions

    NASA Astrophysics Data System (ADS)

    Emelyanov, Vladislav; Karpenko, Anton; Volkov, Konstantin

    2018-05-01

    The finite volume method is applied to solve the unsteady three-dimensional compressible Navier-Stokes equations on unstructured meshes. High-temperature gas effects altering the aerodynamics of vehicles are taken into account. The possibilities of using graphics processing units (GPUs) for the simulation of hypersonic flows are demonstrated. Solutions of some test cases on GPUs are reported, and a comparison between computational results for equilibrium chemically reacting and perfect air flowfields is performed. The speedup of the solution on GPUs with respect to the solution on central processing units (CPUs) is reported. The results obtained provide a promising perspective for designing a GPU-based software framework for practical applications.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugliese, D.; Stuchlík, Z., E-mail: d.pugliese.physics@gmail.com, E-mail: zdenek.stuchlik@physics.cz

    We analyze the possibility that several instability points may be formed, due to the Paczyński mechanism of violation of mechanical equilibrium, in the orbiting matter around a supermassive Kerr black hole. We consider a recently proposed model of a ringed accretion disk, made up of several tori (rings) that can be corotating or counter-rotating relative to the Kerr attractor due to the history of the accretion process. Each torus is governed by the general relativistic hydrodynamic Boyer condition for equilibrium configurations of rotating perfect fluids. We prove that the number of instability points is generally limited and depends on the dimensionless spin of the rotating attractor.

  1. Towards ultrafast dynamics with split-pulse X-ray photon correlation spectroscopy at free electron laser sources

    DOE PAGES

    Roseker, W.; Hruszkewycz, S. O.; Lehmkuhler, F.; ...

    2018-04-27

    One of the important challenges in condensed matter science is to understand the ultrafast, atomic-scale fluctuations that dictate dynamic processes in equilibrium and non-equilibrium materials. Here, we report an important step towards reaching that goal by using a state-of-the-art perfect-crystal-based split-and-delay system, capable of splitting individual X-ray pulses and introducing femtosecond to nanosecond time delays. We show the results of an ultrafast hard X-ray photon correlation spectroscopy experiment at LCLS where split X-ray pulses were used to measure the dynamics of gold nanoparticles suspended in hexane. We show how reliable speckle contrast values can be extracted even from very low intensity free electron laser (FEL) speckle patterns by applying maximum likelihood fitting, thus demonstrating the potential of the split-and-delay approach for dynamics measurements at FEL sources. This will enable the characterization of equilibrium and, importantly, also reversible non-equilibrium processes in atomically disordered materials.

  2. Towards ultrafast dynamics with split-pulse X-ray photon correlation spectroscopy at free electron laser sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseker, W.; Hruszkewycz, S. O.; Lehmkuhler, F.

    One of the important challenges in condensed matter science is to understand the ultrafast, atomic-scale fluctuations that dictate dynamic processes in equilibrium and non-equilibrium materials. Here, we report an important step towards reaching that goal by using a state-of-the-art perfect-crystal-based split-and-delay system, capable of splitting individual X-ray pulses and introducing femtosecond to nanosecond time delays. We show the results of an ultrafast hard X-ray photon correlation spectroscopy experiment at LCLS where split X-ray pulses were used to measure the dynamics of gold nanoparticles suspended in hexane. We show how reliable speckle contrast values can be extracted even from very low intensity free electron laser (FEL) speckle patterns by applying maximum likelihood fitting, thus demonstrating the potential of the split-and-delay approach for dynamics measurements at FEL sources. This will enable the characterization of equilibrium and, importantly, also reversible non-equilibrium processes in atomically disordered materials.

  3. Rational and Empirical Play in the Simple Hot Potato Game

    ERIC Educational Resources Information Center

    Butts, Carter T.; Rode, David C.

    2007-01-01

    We define a "hot potato" to be a good that may be traded a finite number of times, but which becomes a bad if and when it can no longer be exchanged. We describe a game involving such goods, and show that non-acceptance is a unique subgame perfect Nash equilibrium for rational egoists. Contrastingly, experiments with human subjects show…

  4. A "Conveyor Belt" Model for the Dynamic Contact Angle

    ERIC Educational Resources Information Center

    Della Volpe, C.; Siboni, S.

    2011-01-01

    The familiar Young contact angle measurement of a liquid at equilibrium on a solid is a fundamental aspect of capillary phenomena. But in the real world it is not so easy to observe it. This is due to the roughness and/or heterogeneity of real surfaces, which typically are not perfectly planar and chemically homogeneous. What can be easily…

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benetti, Micol; Alcaniz, Jailson S.; Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfectly scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows departures from the standard approach due to the inclusion of the collapse hypothesis to be distinguished. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence as the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  6. Nonequilibrium Entropy in a Shock

    DOE PAGES

    Margolin, Len G.

    2017-07-19

    In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.

  7. Nonequilibrium Entropy in a Shock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, Len G.

    In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.
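
    For reference, the kinetic-theory entropy at issue is the standard Boltzmann functional of the distribution function (textbook definition, in our notation):

    ```latex
    S_B(t) = -\,k_B \int f(\mathbf{x},\mathbf{v},t)\,
             \ln f(\mathbf{x},\mathbf{v},t)\;\mathrm{d}^{3}x\,\mathrm{d}^{3}v .
    ```

    Under LTE, f is replaced by a local Maxwellian parameterized by the hydrodynamic fields, which recovers the equilibrium entropy whose in-shock maximum Morduchow and Libby exhibited.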

  8. PGT: A Statistical Approach to Prediction and Mechanism Design

    NASA Astrophysics Data System (ADS)

    Wolpert, David H.; Bono, James W.

    One of the biggest challenges facing behavioral economics is the lack of a single theoretical framework that is capable of directly utilizing all types of behavioral data. One of the biggest challenges of game theory is the lack of a framework for making predictions and designing markets in a manner that is consistent with the axioms of decision theory. An approach in which solution concepts are distribution-valued, rather than set-valued as in equilibrium theory, has both capabilities. We call this approach Predictive Game Theory (PGT). This paper outlines a general Bayesian approach to PGT. It also presents one simple example to illustrate the way in which this approach differs from equilibrium approaches in both prediction and mechanism design settings.

  9. An approximate viscous shock layer technique for calculating chemically reacting hypersonic flows about blunt-nosed bodies

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Dejarnette, Fred R.

    1991-01-01

    An approximate axisymmetric method was developed which can reliably calculate fully viscous hypersonic flows over blunt-nosed bodies. By substituting Maslen's second-order pressure expression for the normal momentum equation, a simplified form of the viscous shock layer (VSL) equations is obtained. This approach can solve both the subsonic and supersonic regions of the shock layer without a starting solution for the shock shape. The approach is applicable to perfect gas, equilibrium, and nonequilibrium flowfields. Since the method is fully viscous, the problems associated with matching a boundary layer solution to an inviscid-layer solution are avoided. This procedure is significantly faster than parabolized Navier-Stokes (PNS) or VSL solvers and would be useful in a preliminary design environment. Problems associated with a previously developed approximate VSL technique are addressed before extending the method to nonequilibrium calculations. Perfect gas (laminar and turbulent), equilibrium, and nonequilibrium solutions were generated for airflows over several analytic body shapes. Surface heat transfer, skin friction, and pressure predictions are comparable to VSL results. In addition, computed heating rates are in good agreement with experimental data. The present technique generates its own shock shape as part of its solution and therefore could be used to provide more accurate initial shock shapes for higher-order procedures which require starting solutions.

  10. Structural polymorphism exhibited by a quasipalindrome present in the locus control region (LCR) of the human beta-globin gene cluster.

    PubMed

    Kaushik, Mahima; Kukreti, Shrikant

    2006-01-01

    Structural polymorphism of DNA is a widely accepted property. A simple addition to this perception has been our recent finding, in which a single nucleotide polymorphism (SNP) site present in a quasipalindromic sequence of the beta-globin LCR exhibited a hairpin-duplex equilibrium. Our current studies show that secondary structures adopted by the individual complementary strands compete with the formation of a perfect duplex. Using gel electrophoresis, ultraviolet (UV) thermal denaturation, and circular dichroism (CD) techniques, we have demonstrated structural transitions within a perfect duplex containing the 11 bp quasipalindromic stretch (TGGGG(G/C)CCCCA) to hairpin and bulge-duplex forms. The extended version of the 11 bp duplex, flanked by 5 bp on both sides, also demonstrated a conformational equilibrium between duplex and hairpin species. Gel electrophoresis confirms that the duplex coexists with hairpin and bulge-duplex/cruciform species. Further, in the CD spectra of the duplexes, the presence of two overlapping positive peaks at 265 and 285 nm suggests features of both A- and B-type DNA conformations and shows oligomer-concentration dependence, manifested in an A → B transition. This indicates the possibility of an architectural switch at the quasipalindromic region between a linear duplex and a cruciform structure. Such DNA structural variations are likely to be found in the mechanics of molecular recognition and manipulation by proteins.

  11. Influence of neural adaptation on dynamics and equilibrium state of neural activities in a ring neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2017-12-01

    How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
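
    A minimal rate-model sketch of SFA on a ring (the cosine connectivity, gains, and time constants are invented for illustration and do not reproduce the paper's analytically tractable model):

    ```python
    import numpy as np

    n = 64
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    W = (2.0 * np.cos(theta[:, None] - theta[None, :]) - 0.5) / n  # ring coupling

    tau_r, tau_a, g_a, dt = 10.0, 100.0, 0.5, 0.1
    stim = np.exp(np.cos(theta))        # sustained bump-shaped input at theta = 0

    r = np.zeros(n)                     # firing rates
    a = np.zeros(n)                     # adaptation variable
    for _ in range(20000):
        inp = W @ r + stim - g_a * a
        r += dt / tau_r * (-r + np.maximum(inp, 0.0))  # rectified rate dynamics
        a += dt / tau_a * (-a + r)      # SFA: slow, activity-dependent inhibition
    print("bump peak at theta =", theta[np.argmax(r)])
    ```

    The slow variable a tracks the rates themselves, so strongly active neurons are progressively inhibited: that is the activity-dependent inhibition the abstract describes, and it is what lets the history of the external input shape the dynamics.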

  12. Early dynamics of transversally thermalized matter

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Chojnacki, M.; Florkowski, W.

    2008-10-01

    We argue that the idea that the parton system created in relativistic heavy-ion collisions is formed in a state with transverse momenta close to thermodynamic equilibrium, and that its subsequent dynamics at early times is dominated by purely transverse hydrodynamics of a perfect fluid, is compatible with the data collected at RHIC. This scenario of early parton dynamics may help to solve the problem of early equilibration. Quark Matter 2008, Jaipur, India, February 2008.

  13. Analytical approach for collective diffusion: One-dimensional lattice with the nearest neighbor and the next nearest neighbor lateral interactions

    NASA Astrophysics Data System (ADS)

    Tarasenko, Alexander

    2018-01-01

    Diffusion of particles adsorbed on a homogeneous one-dimensional lattice is investigated using a theoretical approach and MC simulations. The analytical dependencies calculated in the framework of the approach are tested against the numerical data. The perfect coincidence of the data obtained by these different methods demonstrates the correctness of the approach, which is based on the theory of the non-equilibrium statistical operator.

  14. Stochastic user equilibrium model with a tradable credit scheme and application in maximizing network reserve capacity

    NASA Astrophysics Data System (ADS)

    Han, Fei; Cheng, Lin

    2017-04-01

    The tradable credit scheme (TCS) outperforms congestion pricing in terms of social equity and revenue neutrality, while offering the same performance on congestion mitigation. This article investigates the effectiveness and efficiency of a TCS in enhancing transportation network capacity within a stochastic user equilibrium (SUE) modelling framework. First, the SUE and credit market equilibrium conditions are presented; then an equivalent general SUE model with a TCS is established by virtue of two constructed functions, which can be further simplified under a specific probability distribution. To enhance the network capacity by utilizing the TCS, a bi-level mathematical programming model is established for the optimal TCS design problem, with the upper-level objective maximizing network reserve capacity and the lower level being the proposed SUE model. A heuristic sensitivity-analysis-based algorithm is developed to solve the bi-level model. Three numerical examples are provided to illustrate the improvement effect of the TCS on the network in different scenarios.
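
    A toy sketch of the lower-level problem: logit SUE loading on a two-route network with the credit charge folded into the generalized route cost, solved by the method of successive averages (the network, BPR parameters, credit price, and charges are all invented):

    ```python
    import numpy as np

    t0 = np.array([10.0, 12.0])       # free-flow travel times
    cap = np.array([40.0, 60.0])      # route capacities
    kappa = np.array([3.0, 1.0])      # credit charge per route (the TCS part)
    p_credit, theta, demand = 0.8, 0.4, 80.0

    def bpr(v):
        return t0 * (1.0 + 0.15 * (v / cap) ** 4)   # standard BPR delay

    v = np.full(2, demand / 2)
    for k in range(1, 500):
        cost = bpr(v) + p_credit * kappa             # generalized cost with credits
        u = -theta * cost
        share = np.exp(u - u.max())
        share /= share.sum()                         # logit route-choice shares
        v += (demand * share - v) / k                # method of successive averages
    print("SUE flows:", v, "generalized costs:", cost)
    ```

    Raising a route's credit charge kappa shifts demand off that route at equilibrium, which is the lever the upper-level reserve-capacity problem optimizes over.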

  15. Solutions for Reacting and Nonreacting Viscous Shock Layers with Multicomponent Diffusion and Mass Injection. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Moss, J. N.

    1971-01-01

    Numerical solutions are presented for the viscous shock-layer equations where the chemistry is treated as being either frozen, in equilibrium, or in nonequilibrium. The effects of the diffusion model, surface catalyticity, and mass injection on surface transport and flow parameters are also considered. The equilibrium calculations for air species using multicomponent diffusion provide solutions previously unavailable. The viscous shock-layer equations are solved by using an implicit finite-difference scheme. The flow is treated as a mixture of inert and thermally perfect species, and is assumed to be in vibrational equilibrium. All calculations are for a 45 deg hyperboloid. The flight conditions are those for various altitudes and velocities in the earth's atmosphere. Data are presented showing the effects of the chemical models; diffusion models; surface catalyticity; and mass injection of air, water, and ablation products on heat transfer; skin friction; shock stand-off distance; wall pressure distribution; and tangential velocity, temperature, and species profiles.

  16. Fuel-Mediated Transient Clustering of Colloidal Building Blocks.

    PubMed

    van Ravensteijn, Bas G P; Hendriksen, Wouter E; Eelkema, Rienk; van Esch, Jan H; Kegel, Willem K

    2017-07-26

    Fuel-driven assembly operates under the continuous influx of energy and results in superstructures that exist out of equilibrium. Such dissipative processes provide a route toward structures and transient behavior unreachable by conventional equilibrium self-assembly. Although perfected in biological systems like microtubules, this class of assembly is only sparsely used in synthetic or colloidal analogues. Here, we present a novel colloidal system that shows transient clustering driven by a chemical fuel. Addition of fuel causes an increase in the hydrophobicity of the building blocks by actively removing surface charges, thereby driving their aggregation. Depletion of fuel causes the reappearance of the charged moieties and leads to disassembly of the formed clusters. This ensures that the system returns to its initial equilibrium state. By taking advantage of the cyclic nature of our system, we show that clustering can be induced several times by simple injection of new fuel. The fuel-mediated assembly of colloidal building blocks presented here opens new avenues to the complex landscape of nonequilibrium colloidal structures, guided by biological design principles.

  17. Bayesian Atmospheric Radiative Transfer (BART) Code and Application to WASP-43b

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Cubillos, Patricio; Bowman, Oliver; Rojo, Patricio; Stemm, Madison; Lust, Nathaniel B.; Challener, Ryan; Foster, Austin James; Foster, Andrew S.; Blumenthal, Sarah D.; Bruce, Dylan

    2016-01-01

    We present a new open-source Bayesian radiative-transfer framework, Bayesian Atmospheric Radiative Transfer (BART, https://github.com/exosports/BART), and its application to WASP-43b. BART initializes a model for the atmospheric retrieval calculation, generates thousands of theoretical model spectra using parametrized pressure and temperature profiles and line-by-line radiative-transfer calculations, and employs a statistical package to compare the models with the observations. It consists of three self-sufficient modules available to the community under the reproducible-research license: the Thermochemical Equilibrium Abundances module (TEA, https://github.com/dzesmin/TEA, Blecic et al. 2015), the radiative-transfer module (Transit, https://github.com/exosports/transit), and the Multi-core Markov-chain Monte Carlo statistical module (MCcubed, https://github.com/pcubillos/MCcubed, Cubillos et al. 2015). We applied BART to all available WASP-43b secondary-eclipse data from space- and ground-based observations, constraining the temperature-pressure profile and molecular abundances of the dayside atmosphere of WASP-43b. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  18. HAT-P-16b: A Bayesian Atmospheric Retrieval

    NASA Astrophysics Data System (ADS)

    McIntyre, Kathleen; Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Challener, Ryan; Bakos, Gaspar

    2017-10-01

    HAT-P-16b is a hot (equilibrium temperature 1626 ± 40 K, assuming zero Bond albedo and efficient energy redistribution), 4.19 ± 0.09 Jupiter-mass exoplanet orbiting an F8 star every 2.775960 ± 0.000003 days (Buchhave et al. 2010). We observed two secondary eclipses of HAT-P-16b using the 3.6 μm and 4.5 μm channels of the Spitzer Space Telescope's Infrared Array Camera (program ID 60003). We applied our Photometry for Orbits, Eclipses, and Transits (POET) code to produce normalized eclipse light curves, and our Bayesian Atmospheric Radiative Transfer (BART) code to constrain the temperature-pressure profiles and atmospheric molecular abundances of the planet. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
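
    The quoted equilibrium temperature follows from the standard zero-albedo, full-redistribution relation. A minimal sketch in Python (the stellar values below are approximate illustrations, not taken from the abstract):

        import numpy as np

        def t_eq(t_star, r_star_over_a, bond_albedo=0.0):
            # T_eq = T_* sqrt(R_*/2a) (1 - A_B)^(1/4), assuming efficient
            # day-night heat redistribution
            return t_star * np.sqrt(r_star_over_a / 2.0) * (1.0 - bond_albedo) ** 0.25

        # Approximate HAT-P-16 values for illustration: T_eff ~ 6160 K, R_*/a ~ 0.14
        print(t_eq(6160.0, 0.14))  # ~ 1.63e3 K, consistent with the quoted 1626 K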

  19. Financial Structure and Economic Welfare: Applied General Equilibrium Development Economics.

    PubMed

    Townsend, Robert

    2010-09-01

    This review provides a common framework for researchers thinking about the next generation of micro-founded macro models of growth, inequality, and financial deepening, as well as direction for policy makers targeting microfinance programs to alleviate poverty. Topics include the treatment of financial structure in general equilibrium models: testing for as-if-complete markets or other financial underpinnings; examining dual-sector models with both a perfectly intermediated sector and a sector in financial autarky, as well as a second generation of these models that embeds information problems and other obstacles to trade; designing surveys to capture measures of income, investment/savings, and flow of funds; and aggregating individuals and households to the level of network, village, or national economy. The review concludes with new directions that overcome conceptual and computational limitations.

  20. Financial Structure and Economic Welfare: Applied General Equilibrium Development Economics

    PubMed Central

    Townsend, Robert

    2010-01-01

    This review provides a common framework for researchers thinking about the next generation of micro-founded macro models of growth, inequality, and financial deepening, as well as direction for policy makers targeting microfinance programs to alleviate poverty. Topics include the treatment of financial structure in general equilibrium models: testing for as-if-complete markets or other financial underpinnings; examining dual-sector models with both a perfectly intermediated sector and a sector in financial autarky, as well as a second generation of these models that embeds information problems and other obstacles to trade; designing surveys to capture measures of income, investment/savings, and flow of funds; and aggregating individuals and households to the level of network, village, or national economy. The review concludes with new directions that overcome conceptual and computational limitations. PMID:21037939

  1. Assessing population genetic structure via the maximisation of genetic distance

    PubMed Central

    2009-01-01

    Background The inference of the hidden structure of a population is an essential issue in population genetics. Recently, several methods have been proposed for inferring population structure. Methods In this study, a new method (MGD, based on the maximisation of genetic distance) is proposed to infer the number of clusters and to assign individuals to the inferred populations. This approach makes no assumptions about Hardy-Weinberg or linkage equilibrium. The implemented criterion is the maximisation (via a simulated annealing algorithm) of the averaged genetic distance between a predefined number of clusters. The performance of this method is compared with two Bayesian approaches, STRUCTURE and BAPS, using simulated data and also a real human data set. Results The simulations show that with a reduced number of markers, BAPS overestimates the number of clusters and presents a reduced proportion of correct groupings. The accuracy of the new method is approximately the same as for STRUCTURE. Also, in cases of Hardy-Weinberg and linkage disequilibrium, BAPS performs incorrectly. In these situations, STRUCTURE and the new method show equivalent behaviour with respect to the number of inferred clusters, although the proportion of correct groupings is slightly better with the new method. Re-establishing equilibrium with the randomisation procedures improves the precision of the Bayesian approaches. All methods have good precision for FST ≥ 0.03, but only STRUCTURE estimates the correct number of clusters for FST as low as 0.01. In situations with a high number of clusters or a more complex population structure, MGD performs better than STRUCTURE and BAPS. The results for a human data set analysed with the new method are congruent with the geographical regions previously found. Conclusion This new method for inferring the hidden structure of a population, based on the maximisation of the genetic distance and making no assumptions about Hardy-Weinberg or linkage equilibrium, performs well under different simulated scenarios and with real data. It could therefore be a useful tool for determining genetically homogeneous groups, especially in situations where the number of clusters is high, the population structure is complex, and Hardy-Weinberg and/or linkage disequilibrium are present. PMID:19900278

  2. Stability of the Einstein static universe in Einstein-Cartan theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atazadeh, K., E-mail: atazadeh@azaruniv.ac.ir

    The existence and stability of the Einstein static solution have been established in Einstein-Cartan gravity. We show that this solution, in the presence of a perfect fluid with spin density satisfying the Weyssenhoff restriction, is cyclically stable around a center equilibrium point. The study of this solution is therefore of interest because it supports non-singular emergent cosmological models in which the early universe oscillates indefinitely about an initial Einstein static solution and is thus past-eternal.

  3. Kinetics of nucleation and crystallization in poly(ε-caprolactone) (PCL)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuravlev, Evgeny; Schmelzer, Jurn; Wunderlich, Bernhard

    2011-01-01

    The recently developed differential fast scanning calorimetry (DFSC) is used for a new look at the crystal growth of poly(ε-caprolactone) (PCL) from 185 K, below the glass transition temperature, to 330 K, close to the equilibrium melting temperature. The DFSC allows temperature control of the sample and determination of its heat capacity using heating rates from 50 to 50,000 K/s. The crystal nucleation and crystallization halftimes were determined simultaneously. The obtained halftimes cover a range from 3 × 10⁻² s (nucleation at 215 K) to 3 × 10⁹ s (crystallization at 185 K). After attempting to analyze the experiments with the classical nucleation and growth model, developed for systems consisting of small molecules, a new methodology is described which addresses the specific problems of crystallization of flexible linear macromolecules. The key problems to be resolved concern the differences between the structures of the various entities identified and their specific roles in the mechanism of growth. The structures range from configurations having practically unmeasurable latent heats of ordering (nuclei) to clearly recognizable, ordered species with rather sharp disordering endotherms in the temperature range from the glass transition to equilibrium melting for increasingly perfect and larger crystals. The mechanisms and kinetics of growth also involve a detailed understanding of the interaction with the surrounding rigid-amorphous fraction (RAF) in dependence on crystal size and perfection.

  4. Entropy production in a fluid-solid system far from thermodynamic equilibrium.

    PubMed

    Chung, Bong Jae; Ortega, Blas; Vaidya, Ashwin

    2017-11-24

    The terminal orientation of a rigid body in a moving fluid is an example of a dissipative system, out of thermodynamic equilibrium and therefore a perfect testing ground for the validity of the maximum entropy production principle (MaxEP). Thus far, dynamical equations alone have been employed in studying the equilibrium states in fluid-solid interactions, but these are far too complex and become analytically intractable when inertial effects come into play. At that stage, our only recourse is to rely on numerical techniques which can be computationally expensive. In our past work, we have shown that the MaxEP is a reliable tool to help predict orientational equilibrium states of highly symmetric bodies such as cylinders, spheroids and toroidal bodies. The MaxEP correctly helps choose the stable equilibrium in these cases when the system is slightly out of thermodynamic equilibrium. In the current paper, we expand our analysis to examine i) bodies with fewer symmetries than previously reported, for instance, a half-ellipse and ii) when the system is far from thermodynamic equilibrium. Using two-dimensional numerical studies at Reynolds numbers ranging between 0 and 14, we examine the validity of the MaxEP. Our analysis of flow past a half-ellipse shows that overall the MaxEP is a good predictor of the equilibrium states but, in the special case of the half-ellipse with aspect ratio much greater than unity, the MaxEP is replaced by the Min-MaxEP, at higher Reynolds numbers when inertial effects come into play. Experiments in sedimentation tanks and with hinged bodies in a flow tank confirm these calculations.

  5. Estimation of divergence from Hardy-Weinberg form.

    PubMed

    Stark, Alan E

    2015-08-01

    The Hardy–Weinberg (HW) principle explains how random mating (RM) can produce and maintain a population in equilibrium, that is, with constant genotypic proportions. When proportions diverge from HW form, it is of interest to estimate the fixation index F, which reflects the degree of divergence. Starting from a sample of genotypic counts, a mixed procedure gives first the orthodox estimate of gene frequency q and then a Bayesian estimate of F, based on a credible prior distribution of F, which is described here.
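
    A minimal numerical sketch of such a mixed procedure (our illustration with a flat prior on the admissible range of F; the paper's prior is more structured, and the counts below are hypothetical):

        import numpy as np

        n_AA, n_Aa, n_aa = 50, 30, 20          # hypothetical genotype counts
        n = n_AA + n_Aa + n_aa

        # Step 1: orthodox estimate of the allele frequency q
        q = (2 * n_aa + n_Aa) / (2 * n)
        p = 1.0 - q

        # Step 2: grid posterior for the fixation index F
        F = np.linspace(max(-p / q, -q / p), 1.0, 2001)
        pAA = p**2 + F * p * q
        pAa = 2 * p * q * (1 - F)
        paa = q**2 + F * p * q
        loglik = np.full_like(F, -np.inf)
        ok = (pAA > 0) & (pAa > 0) & (paa > 0)   # keep genotype probabilities valid
        loglik[ok] = (n_AA * np.log(pAA[ok]) + n_Aa * np.log(pAa[ok])
                      + n_aa * np.log(paa[ok]))
        post = np.exp(loglik - loglik.max())
        post /= np.trapz(post, F)                # normalise on the grid
        print("posterior mean of F:", np.trapz(F * post, F))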

  6. Spherically symmetric Einstein-aether perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coley, Alan A.; Latta, Joey; Leon, Genly

    We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in β-normalized variables and derive a reduced (closed) evolution system, and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.

  7. Designing lateral spintronic devices with giant tunnel magnetoresistance and perfect spin injection efficiency based on transition metal dichalcogenides.

    PubMed

    Zhao, Pei; Li, Jianwei; Jin, Hao; Yu, Lin; Huang, Baibiao; Ying, Dai

    2018-04-18

    Giant tunnel magnetoresistance (TMR) and perfect spin-injection efficiency (SIE) are extremely significant for modern spintronic devices. Quantum transport properties in a two-dimensional (2D) VS2/MoS2/VS2 magnetic tunneling junction (MTJ) are investigated theoretically within the framework of density functional theory combined with the non-equilibrium Green's function (DFT-NEGF) method. Our results indicate that the designed MTJ exhibits a TMR of up to 4 × 10³, so it can be used as a switch for spin-electronic devices. Due to the huge barrier for spin-down transport, the spin-down electrons can hardly cross the central scattering region, thus achieving a perfect SIE. Furthermore, we also explore the effect of bias voltage on the TMR and SIE. We find that the TMR increases with increasing bias voltage, and that the SIE is robust against both bias and gate voltage in MTJs, which can therefore serve as effective spin-filter devices. Our results not only give fresh impetus to the research community to build MTJs but also identify potential materials for spintronic devices.

  8. Controllable spin polarization and spin filtering in a zigzag silicene nanoribbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farokhnezhad, Mohsen, E-mail: Mohsen-farokhnezhad@physics.iust.ac.ir; Esmaeilzadeh, Mahdi, E-mail: mahdi@iust.ac.ir; Pournaghavi, Nezhat

    2015-05-07

    Using the non-equilibrium Green's function method, we study the spin-dependent electron transport properties of a zigzag silicene nanoribbon. To produce and control spin polarization, it is assumed that two ferromagnetic strips are deposited on both edges of the silicene nanoribbon and that an electric field is applied perpendicular to the nanoribbon plane. The spin polarization is studied for both parallel and anti-parallel configurations of the exchange magnetic fields induced by the ferromagnetic strips. We find that complete spin polarization can take place in the presence of the perpendicular electric field for the anti-parallel configuration, and the nanoribbon can work as a perfect spin filter. The spin direction of transmitted electrons can be easily changed from up to down and vice versa by reversing the electric field direction. For the parallel configuration, perfect spin filtering can occur even in the absence of an electric field; in this case, the spin direction can be changed by changing the electron energy. Finally, we investigate the effects of nonmagnetic Anderson disorder on the spin-dependent conductance and find that the perfect spin filtering properties of the nanoribbon are destroyed by strong disorder, but the nanoribbon retains these properties in the presence of weak disorder.

  9. Smarter than others? Conjectures in lowest unique bid auctions.

    PubMed

    Zhou, Cancan; Dong, Hongguang; Hu, Rui; Chen, Qinghua

    2015-01-01

    Research concerning various types of auctions, such as English auctions, Dutch auctions, highest-price sealed-bid auctions, and second-price sealed-bid auctions, is always a topic of considerable interest in interdisciplinary fields. The type of auction, known as a lowest unique bid auction (LUBA), has also attracted significant attention. Various models have been proposed, but they often fail to explain satisfactorily the real bid-distribution characteristics. This paper discusses LUBA bid-distribution characteristics, including the inverted-J shape and the exponential decrease in the upper region. The authors note that this type of distribution, which initially increases and later decreases, cannot be derived from the symmetric Nash equilibrium framework based on perfect information that has previously been used. A novel optimization model based on non-perfect information is presented. The kernel of this model is the premise that agents make decisions to achieve maximum profit based on imaginary information or assumptions regarding the behavior of others.

  10. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and the Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect-gas versions of the code were upgraded and tested against benchmark wind-tunnel cases of a hemisphere-cylinder, the three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat-transfer predictions due to the effects of grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect-gas code is currently being utilized in applications of an equilibrium-air real-gas PARC version developed at REMTECH.

  11. ATP synthase.

    PubMed

    Junge, Wolfgang; Nelson, Nathan

    2015-01-01

    Oxygenic photosynthesis is the principal converter of sunlight into chemical energy. Cyanobacteria and plants provide aerobic life with oxygen, food, fuel, fibers, and platform chemicals. Four multisubunit membrane proteins are involved: photosystem I (PSI), photosystem II (PSII), cytochrome b6f (cyt b6f), and ATP synthase (FOF1). ATP synthase is likewise a key enzyme of cell respiration. Over three billion years, the basic machinery of oxygenic photosynthesis and respiration has been perfected to minimize wasteful reactions. The proton-driven ATP synthase is embedded in a proton-tight coupling membrane. It is composed of two rotary motors/generators, FO and F1, which do not slip against each other. The proton-driven FO and the ATP-synthesizing F1 are coupled via elastic torque transmission. Elastic transmission decouples the two motors in kinetic detail but keeps them perfectly coupled in thermodynamic equilibrium and (time-averaged) under steady turnover. Elastic transmission enables operation with different gear ratios in different organisms.

  12. Spontaneously Flowing Crystal of Self-Propelled Particles

    NASA Astrophysics Data System (ADS)

    Briand, Guillaume; Schindler, Michael; Dauchot, Olivier

    2018-05-01

    We experimentally and numerically study the structure and dynamics of a monodisperse packing of spontaneously aligning self-propelled hard disks. The packings are such that their equilibrium counterparts form perfectly ordered hexagonal structures. Experimentally, we first form a perfect crystal in a hexagonal arena which respects the same crystalline symmetry. Frustration of the hexagonal order, obtained by removing a few particles, leads to the formation of a rapidly diffusing "droplet." Removing more particles, the whole system spontaneously forms a macroscopic sheared flow, while conserving an overall crystalline structure. This flowing crystalline structure, which we call a "rheocrystal," is made possible by the condensation of shear along localized stacking faults. Numerical simulations very well reproduce the experimental observations and allow us to explore the parameter space. They demonstrate that the rheocrystal is induced neither by frustration nor by noise. They further show that larger systems flow faster while still remaining ordered.

  13. Perfect mixing of immiscible macromolecules at fluid interfaces

    NASA Astrophysics Data System (ADS)

    Sheiko, Sergei; Matyjaszewski, Krzysztof; Tsukruk, Vladimir; Carrillo, Jan-Michael; Rubinstein, Michael; Dobrynin, Andrey; Zhou, Jing

    2014-03-01

    Macromolecules typically phase separate unless their shapes and chemical compositions are tailored to explicitly drive mixing. But now our research has shown that physical constraints can drive spontaneous mixing of chemically different species. We have obtained long-range 2D arrays of perfectly mixed macromolecules having a variety of molecular architectures and chemistries, including linear chains, block-copolymer stars, and bottlebrush copolymers with hydrophobic, hydrophilic, and lipophobic chemical compositions. This is achieved by entropy-driven enhancement of steric repulsion between macromolecules anchored on a substrate. By monitoring the kinetics of mixing, we have proved that molecular intercalation is an equilibrium state. The array spacing is controlled by the length of the brush side chains. This entropic templating strategy opens new ways for generating patterns on sub-100 nm length scales with potential application in lithography, directed self-assembly, and biomedical assays. Financial support from the National Science Foundation DMR-0906985, DMR-1004576, DMR-1122483, and DMR-0907515.

  14. Analytical and numerical treatment of resistive drift instability in a plasma slab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirnov, V. V., E-mail: vvmirnov@wisc.edu; Sauppe, J. P.; Hegna, C. C.

    An analytic approach combining the effect of equilibrium diamagnetic flows and the finite ion-sound gyroradius associated with electron-ion decoupling and kinetic Alfvén wave dispersion is derived to study resistive drift instabilities in a plasma slab. Linear numerical computations using the NIMROD code are performed with cold ions and hot electrons in a plasma slab with a doubly periodic box bounded by two perfectly conducting walls. A linearly unstable resistive drift mode is observed in computations with a growth rate that is consistent with the analytic dispersion relation. The resistive drift mode is expected to be suppressed by magnetic shear in unbounded domains, but the mode is observed in numerical computations with and without magnetic shear. In the slab model, the finite slab thickness and the perfectly conducting boundary conditions are likely to account for the lack of suppression.

  15. Ideal relaxation of the Hopf fibration

    NASA Astrophysics Data System (ADS)

    Smiet, Christopher Berg; Candelaresi, Simon; Bouwmeester, Dirk

    2017-07-01

    Ideal magnetohydrodynamic relaxation is the topology-conserving reconfiguration of a magnetic field into a lower-energy state where the net force is zero. This is achieved by modeling the plasma as a perfectly conducting viscous fluid. It is an important tool for investigating plasma equilibria and is often used to study the magnetic configurations in fusion devices and astrophysical plasmas. We study the equilibrium reached by a localized magnetic field through the topology-conserving relaxation of a magnetic field based on the Hopf fibration, in which magnetic field lines are closed circles that are all linked with one another. Magnetic fields with this topology have recently been shown to occur in non-ideal numerical simulations. Our results show that any localized field can only attain equilibrium if there is a finite external pressure, and that for such a field a Taylor state is unattainable. We find an equilibrium plasma configuration that is characterized by a lowered pressure in a toroidal region, with field lines lying on surfaces of constant pressure. Therefore, the field is in a Grad-Shafranov equilibrium. Localized helical magnetic fields are found when plasma is ejected from astrophysical bodies and subsequently relaxes against the background plasma, as well as on Earth in plasmoids generated by, e.g., a Marshall gun. This work shows under which conditions an equilibrium can be reached and identifies a toroidal depression as the characteristic feature of such a configuration.

  16. Development of a 3-D upwind PNS code for chemically reacting hypersonic flowfields

    NASA Technical Reports Server (NTRS)

    Tannehill, J. C.; Wadawadigi, G.

    1992-01-01

    Two new parabolized Navier-Stokes (PNS) codes were developed to compute the three-dimensional, viscous, chemically reacting flow of air around hypersonic vehicles such as the National Aero-Space Plane (NASP). The first code (TONIC) solves the gas dynamic and species conservation equations in a fully coupled manner using an implicit, approximately-factored, central-difference algorithm. This code was upgraded to include shock fitting and the capability of computing the flow around complex body shapes. The revised TONIC code was validated by computing the chemically reacting (M∞ = 25.3) flow around a 10 deg half-angle cone at various angles of attack and the Ames All-Body model at 0 deg angle of attack. The results of these calculations were in good agreement with the results from the UPS code. One of the major drawbacks of the TONIC code is that the central-differencing of fluxes across interior flowfield discontinuities tends to introduce errors into the solution in the form of local flow property oscillations. The second code (UPS), originally developed for a perfect gas, has been extended to permit either perfect gas, equilibrium air, or nonequilibrium air computations. The code solves the PNS equations using a finite-volume, upwind TVD method based on Roe's approximate Riemann solver that was modified to account for real gas effects. The dissipation term associated with this algorithm is sufficiently adaptive to flow conditions that, even when attempting to capture very strong shock waves, no additional smoothing is required. For nonequilibrium calculations, the code solves the fluid dynamic and species continuity equations in a loosely coupled manner. This code was used to calculate the hypersonic, laminar flow of chemically reacting air over cones at various angles of attack. In addition, the flow around the McDonnell Douglas generic-option blended wing body was computed, and comparisons were made between the perfect gas, equilibrium air, and nonequilibrium air results.

  17. Provocative radio transients and base rate bias: A Bayesian argument for conservatism

    NASA Astrophysics Data System (ADS)

    Hair, Thomas W.

    2013-10-01

    Most searches for alien radio transmissions have focused on finding omni-directional or purposefully Earth-directed beams of enduring duration. However, most of the interesting signals detected so far have been transient and non-repeatable in nature. These signals could very well be the first data points in an ever-growing database of such signals used to construct a probabilistic argument for the existence of extraterrestrial intelligence. This paper looks at the effect base-rate bias could have on deciding which signals to include in such an archive, based upon the likely assumption that our ability to discern natural from artificial signals will be less than perfect.
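
    The base-rate argument is a one-line Bayes computation; a minimal sketch with hypothetical numbers:

        # P(artificial | flagged) for an imperfect classifier and a tiny base rate
        prior = 1e-6            # assumed base rate of genuinely artificial transients
        p_flag_art = 0.99       # sensitivity: P(flag | artificial)
        p_flag_nat = 0.01       # false-positive rate: P(flag | natural)

        posterior = (p_flag_art * prior) / (p_flag_art * prior + p_flag_nat * (1 - prior))
        print(posterior)        # ~ 1e-4: almost every flagged signal is a false alarm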

  18. Equilibrium configurations of a charged fluid around a Kerr black hole

    NASA Astrophysics Data System (ADS)

    Trova, Audrey; Schroven, Kris; Hackmann, Eva; Karas, Vladimír; Kovář, Jiří; Slaný, Petr

    2018-05-01

    Equilibrium configurations of electrically charged perfect fluid surrounding a central rotating black hole endowed with a test electric charge and embedded in a large-scale asymptotically uniform magnetic field are presented. Following our previous studies considering the central black hole to be nonrotating, we show that in the rotating case conditions for the configurations existence change according to the spin of the black hole. We focus our attention on the charged fluid in rigid rotation, which can form toroidal configurations centered in the equatorial plane or the ones hovering above the black hole, along the symmetry axis. We conclude that a nonzero value of spin changes the existence conditions and the morphology of the solutions significantly. In the case of fast rotation, the morphology of the structures is close to an oblate shape.

  19. The charge imbalance in ultracold plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Tianxing; Lu, Ronghua, E-mail: lurh@siom.ac.cn; Guo, Li

    2016-09-15

    Ultracold plasmas are regarded as quasineutral but not strictly neutral. Results on charge imbalance during the expansion of ultracold plasmas are reported. The calculations are performed by a full molecular-dynamics simulation. The details of the electron velocity distributions are calculated without the assumption of electron global thermal equilibrium and a Boltzmann distribution. Spontaneous evolution of the charge imbalance from initial states with perfect neutrality is given in the simulations. The expansion of the outer plasma slows down with the charge imbalance. The influences of plasma size and parameters on the charge imbalance are discussed. The radial profiles of electron temperature are given for the first time, and the self-similar expansion can still occur even if there is no global thermal equilibrium. Electron disorder-induced heating is also found in the simulation.

  20. Kolkata Paise Restaurant Problem and the Cyclically Fair Norm

    NASA Astrophysics Data System (ADS)

    Banerjee, Priyodorshi; Mitra, Manipushpak; Mukherjee, Conan

    In this paper we revisit the Kolkata Paise Restaurant problem by allowing for a more general (but common) preference of the n customers defined over the set of n restaurants. This generalization allows for the possibility that each pure strategy Nash equilibrium differs from the Pareto efficient allocation. By assuming that n is small and by allowing for mutual interaction across all customers we design strategies to sustain cyclically fair norm as a sub-game perfect equilibrium of the Kolkata Paise Restaurant problem. We have a cyclically fair norm if n strategically different Pareto efficient strategies are sequentially sustained in a way such that each customer gets serviced in all the n restaurants exactly once between periods 1 and n and then again the same process is repeated between periods (n+1) and 2n and so on.
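
    A minimal sketch of the norm itself (our illustration, not the authors' strategy construction sustaining it in equilibrium): in period t, customer i visits restaurant (i + t) mod n, so each period is a perfect matching and over n periods every customer is served in every restaurant exactly once.

        n = 5
        visited = {i: [] for i in range(n)}
        for t in range(n):
            choices = [(i + t) % n for i in range(n)]
            assert len(set(choices)) == n        # each period: one customer per restaurant
            for i, r in enumerate(choices):
                visited[i].append(r)
        # every customer has visited all n restaurants exactly once
        assert all(sorted(v) == list(range(n)) for v in visited.values())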

  1. Bayesian model evidence as a model evaluation metric

    NASA Astrophysics Data System (ADS)

    Guthke, Anneli; Höge, Marvin; Nowak, Wolfgang

    2017-04-01

    When building environmental systems models, we are typically confronted with the questions of how to choose an appropriate model (i.e., which processes to include or neglect) and how to measure its quality. Various metrics have been proposed that shall guide the modeller towards a most robust and realistic representation of the system under study. Criteria for evaluation often address aspects of accuracy (absence of bias) or of precision (absence of unnecessary variance) and need to be combined in a meaningful way in order to address the inherent bias-variance dilemma. We suggest using Bayesian model evidence (BME) as a model evaluation metric that implicitly performs a tradeoff between bias and variance. BME is typically associated with model weights in the context of Bayesian model averaging (BMA). However, it can also be seen as a model evaluation metric in a single-model context or in model comparison. It combines a measure for goodness of fit with a penalty for unjustifiable complexity. Unjustifiable refers to the fact that the appropriate level of model complexity is limited by the amount of information available for calibration. Derived in a Bayesian context, BME naturally accounts for measurement errors in the calibration data as well as for input and parameter uncertainty. BME is therefore perfectly suitable to assess model quality under uncertainty. We will explain in detail and with schematic illustrations what BME measures, i.e. how complexity is defined in the Bayesian setting and how this complexity is balanced with goodness of fit. We will further discuss how BME compares to other model evaluation metrics that address accuracy and precision such as the predictive logscore or other model selection criteria such as the AIC, BIC or KIC. Although computationally more expensive than other metrics or criteria, BME represents an appealing alternative because it provides a global measure of model quality. Even if not applicable to each and every case, we aim at stimulating discussion about how to judge the quality of hydrological models in the presence of uncertainty in general by dissecting the mechanism behind BME.
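
    As a toy illustration of BME as "goodness of fit with a complexity penalty", the evidence p(y) = ∫ p(y|θ) p(θ) dθ of a one-parameter linear model can be estimated by brute-force prior sampling (a hedged sketch; all numbers hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 20)
        y = 2.0 * x + rng.normal(0.0, 0.1, x.size)     # synthetic calibration data

        def log_bme(prior_sd, sigma=0.1, n_samples=100_000):
            theta = rng.normal(0.0, prior_sd, n_samples)       # prior samples
            resid = y[None, :] - theta[:, None] * x[None, :]
            loglik = (-0.5 * np.sum((resid / sigma) ** 2, axis=1)
                      - x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
            m = loglik.max()
            return m + np.log(np.mean(np.exp(loglik - m)))     # stable log-mean-exp

        # A needlessly diffuse prior (more unjustified "complexity") lowers the evidence
        print(log_bme(prior_sd=5.0), log_bme(prior_sd=50.0))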

  2. Out-of-equilibrium protocol for Rényi entropies via the Jarzynski equality.

    PubMed

    Alba, Vincenzo

    2017-06-01

    In recent years entanglement measures, such as the von Neumann and the Rényi entropies, provided a unique opportunity to access elusive features of quantum many-body systems. However, extracting entanglement properties analytically, experimentally, or in numerical simulations can be a formidable task. Here, by combining the replica trick and the Jarzynski equality we devise an alternative effective out-of-equilibrium protocol for measuring the equilibrium Rényi entropies. The key idea is to perform a quench in the geometry of the replicas. The Rényi entropies are obtained as the exponential average of the work performed during the quench. We illustrate an application of the method in classical Monte Carlo simulations, although it could be useful in different contexts, such as in quantum Monte Carlo, or experimentally in cold-atom systems. The method is most effective in the quasistatic regime, i.e., for a slow quench. As a benchmark, we compute the Rényi entropies in the Ising universality class in 1+1 dimensions. We find perfect agreement with the well-known conformal field theory predictions.
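
    In replica-trick notation (a hedged sketch of the relations invoked above, with β the inverse temperature, Z_n the partition function of the glued n-replica geometry, and W the work performed during the quench):

        S_n = \frac{1}{1-n}\,\ln\frac{Z_n}{Z^{\,n}},
        \qquad
        \frac{Z_n}{Z^{\,n}} = e^{-\beta\,\Delta F} = \bigl\langle e^{-\beta W} \bigr\rangle,

    so the equilibrium Rényi entropy follows from the exponential average of the out-of-equilibrium work, which is the Jarzynski equality applied to the quench in replica geometry.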

  3. Thermal conductance at the interface between crystals using equilibrium and nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Merabia, Samy; Termentzidis, Konstantinos

    2012-09-01

    In this article, we compare the results of nonequilibrium (NEMD) and equilibrium (EMD) molecular dynamics methods to compute the thermal conductance at the interface between solids. We propose to probe the thermal conductance using equilibrium simulations measuring the decay of the thermally induced energy fluctuations of each solid. We also show that NEMD and EMD give, generally speaking, inconsistent results for the thermal conductance: Green-Kubo simulations probe the Landauer conductance between two solids, which assumes phonons on both sides of the interface to be at equilibrium. On the other hand, we show that NEMD gives access to the out-of-equilibrium interfacial conductance consistent with the interfacial flux describing phonon transport in each solid. The difference may be large and reaches typically a factor 5 for interfaces between usual semiconductors. We analyze finite size effects for the two determinations of the interfacial thermal conductance, and show that the equilibrium simulations suffer from severe size effects as compared to NEMD. We also compare the predictions of the two above-mentioned methods—EMD and NEMD—regarding the interfacial conductance of a series of mass-mismatched Lennard-Jones solids. We show that the Kapitza conductance obtained with EMD can be well described using the classical diffuse mismatch model (DMM). On the other hand, NEMD simulation results are consistent with an out-of-equilibrium generalization of the acoustic mismatch model (AMM). These considerations are important in rationalizing previous results obtained using molecular dynamics, and help in pinpointing the physical scattering mechanisms taking place at atomically perfect interfaces between solids, which is a prerequisite to understand interfacial heat transfer across real interfaces.

  4. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013

    2016-03-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  5. Statistical Hierarchy of Varying Speed of Light Cosmologies

    NASA Astrophysics Data System (ADS)

    Salzano, Vincenzo; Dąbrowski, Mariusz P.

    2017-12-01

    Many varying speed of light (VSL) theories have been developed recently. Here we address the issue of their observational verification in a fully comprehensive way. By using the most updated cosmological probes, we test three different candidates for a VSL theory (Barrow & Magueijo, Avelino & Martins, and Moffat). We consider many different Ansätze for both the functional form of c(z) and the dark energy dynamics. We compare these results using a reliable statistical tool such as the Bayesian evidence. We find that the present cosmological data are perfectly compatible with any of these VSL scenarios, but for the Moffat model there is a higher Bayesian evidence ratio in favor of VSL rather than the c = constant ΛCDM scenario. Moreover, in such a scenario, the VSL signal can help to strengthen constraints on the spatial curvature (with an indication toward an open universe), to clarify some properties of dark energy (exclusion of a cosmological constant at the 2σ level), and is also falsifiable in the near future owing to peculiar features that differentiate this model from the standard one. Finally, we apply an information prior and an entropy prior in order to put physical constraints on the models, though still in favor of Moffat's proposal.

  6. Error-in-variables models in calibration

    NASA Astrophysics Data System (ADS)

    Lira, I.; Grientschnig, D.

    2017-12-01

    In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
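
    A minimal Bayesian sketch of an EIV straight-line calibration (our illustration under simplifying assumptions: Gaussian errors on both stimulus and response, a flat prior on the true stimuli, which can be marginalised analytically, and a grid posterior over the line parameters):

        import numpy as np

        rng = np.random.default_rng(2)
        xi = np.linspace(1.0, 10.0, 15)                 # true stimuli (unknown in practice)
        sx, sy = 0.1, 0.2                               # stimulus and response error SDs
        x = xi + rng.normal(0, sx, xi.size)             # measured stimuli
        y = 1.5 + 2.0 * xi + rng.normal(0, sy, xi.size) # measured responses

        a = np.linspace(0.5, 2.5, 201)[:, None, None]   # intercept grid
        b = np.linspace(1.5, 2.5, 201)[None, :, None]   # slope grid
        var = sy**2 + b**2 * sx**2                      # marginal variance per (a, b)
        loglik = np.sum(-0.5 * (y - a - b * x)**2 / var - 0.5 * np.log(var), axis=-1)
        ia, ib = np.unravel_index(np.argmax(loglik), loglik.shape)
        print(a[ia, 0, 0], b[0, ib, 0])                 # posterior mode under flat priors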

  7. Evidence cross-validation and Bayesian inference of MAST plasma equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nessi, G. T. von; Hole, M. J.; Svensson, J.

    2012-01-15

    In this paper, current profiles for plasma discharges on the Mega Ampere Spherical Tokamak (MAST) are directly calculated from pickup coil, flux loop, and motional Stark effect observations via methods based in the statistical theory of Bayesian analysis. By representing the toroidal plasma current as a series of axisymmetric current beams with rectangular cross-section and inferring the current for each one of these beams, flux-surface geometry and q-profiles are subsequently calculated by elementary application of the Biot-Savart law. The use of this plasma model in the context of Bayesian analysis was pioneered by Svensson and Werner on the Joint European Torus [Svensson and Werner, Plasma Phys. Controlled Fusion 50(8), 085002 (2008)]. In this framework, linear forward models are used to generate diagnostic predictions, and the probability distribution for the currents in the collection of plasma beams was subsequently calculated directly via application of Bayes' formula. In this work, we introduce a new diagnostic technique to identify and remove outlier observations associated with diagnostics falling out of calibration or suffering from an unidentified malfunction. These modifications enable good agreement between Bayesian inference of the last-closed flux-surface and other corroborating data, such as that from force balance considerations using EFIT++ [Appel et al., "A unified approach to equilibrium reconstruction," Proceedings of the 33rd EPS Conference on Plasma Physics (Rome, Italy, 2006)]. In addition, this analysis also yields errors on the plasma current profile and flux-surface geometry, as well as directly predicting the Shafranov shift of the plasma core.
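
    A minimal linear-Gaussian sketch of this kind of inference (our illustration, not the MAST implementation): with a linear forward model d = G j + noise mapping beam currents j to diagnostic signals d, Gaussian priors and noise give the posterior in closed form via Bayes' formula.

        import numpy as np

        rng = np.random.default_rng(1)
        n_beams, n_obs = 8, 20
        G = rng.normal(size=(n_obs, n_beams))           # stand-in for the Biot-Savart response matrix
        j_true = rng.normal(size=n_beams)
        d = G @ j_true + rng.normal(0.0, 0.05, n_obs)   # noisy pickup-coil/flux-loop data

        sd_noise, sd_prior = 0.05, 1.0
        P = np.linalg.inv(G.T @ G / sd_noise**2
                          + np.eye(n_beams) / sd_prior**2)   # posterior covariance
        j_mean = P @ G.T @ d / sd_noise**2                    # posterior mean
        print(j_mean, np.sqrt(np.diag(P)))                    # beam currents with error bars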

  8. Serological diagnosis of bovine neosporosis: a Bayesian evaluation of two antibody ELISA tests for in vivo diagnosis in purchased and abortion cattle.

    PubMed

    Roelandt, S; Van der Stede, Y; Czaplicki, G; Van Loo, H; Van Driessche, E; Dewulf, J; Hooyberghs, J; Faes, C

    2015-06-06

    Currently, there are no perfect reference tests for the in vivo detection of Neospora caninum infection. Two commercial N. caninum ELISA tests are currently used in Belgium for bovine sera (TEST A and TEST B). The goal of this study is to evaluate these tests, used at their current cut-offs, with a no-gold-standard approach, for the test purposes of (1) demonstrating freedom from infection at purchase and (2) diagnosis in aborting cattle. Sera from two study populations, an Abortion population (n=196) and a Purchase population (n=514), were selected and tested with both ELISAs. Test results were entered into a Bayesian model with informative priors on population prevalences only (Scenario 1). As a sensitivity analysis, two more models were used: one with informative priors on test diagnostic accuracy (Scenario 2) and one with all priors uninformative (Scenario 3). The accuracy parameters were estimated from the first model: diagnostic sensitivity (Test A: 93.54 per cent; Test B: 86.99 per cent) and specificity (Test A: 90.22 per cent; Test B: 90.15 per cent) were high and comparable (Bayesian P values >0.05). Based on predictive values in the two study populations, both tests were fit for purpose, despite an expected false-negative fraction of about 0.5 per cent in the Purchase population and about 5 per cent in the Abortion population. In addition, a false-positive fraction of about 3 per cent in the overall Purchase population and about 4 per cent in the overall Abortion population was found.
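
    The predictive values mentioned above follow from Bayes' rule; a sketch using the reported Test A accuracy at an assumed (hypothetical) prevalence:

        se, sp = 0.9354, 0.9022       # reported sensitivity and specificity (Test A)
        prev = 0.10                   # hypothetical within-population prevalence

        ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
        npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
        print(f"PPV = {ppv:.3f}, NPV = {npv:.3f}")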

  9. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  10. Reducing greenhouse gas emissions: a duopoly market pricing competition and cooperation under the carbon emissions cap.

    PubMed

    Jian, Ming; He, Hua; Ma, Changsong; Wu, Yan; Yang, Hao

    2017-05-17

    This article studies price competition and cooperation in a duopoly that is subject to a carbon emissions cap. The study assumes that, in a departure from the classical Bertrand game, there is still a market for both firms' goods regardless of the product price, even though production capacity is limited by carbon emissions regulation. Under decentralized decision making by both firms with perfect information, the outcome is unstable: the firm with the lower maximum production capacity under carbon emissions regulation and the firm with the higher maximum production capacity both seek cooperation on the market price. By designing an internal carbon-credits trading mechanism, a price equilibrium that respects the production capacity of the firm with the higher maximum production capacity under carbon emissions regulation can be ensured. The negotiating power within the duopoly also affects the price equilibrium.

  11. The period and Q of the Chandler wobble

    NASA Technical Reports Server (NTRS)

    Smith, M. L.; Dahlen, F. A.

    1981-01-01

    The calculation of the theoretical period of the Chandler wobble is extended to account for the non-hydrostatic portion of the earth's equatorial bulge and the effect of the fluid core upon the lengthening of the period due to the pole tide. The theoretical period of a realistic perfectly elastic earth with an equilibrium pole tide is found to be 426.7 sidereal days, which is 8.5 days shorter than the observed period of 435.2 days. Using Rayleigh's principle for a rotating earth, this discrepancy is exploited together with the observed Chandler Q to place constraints on the frequency dependence of mantle anelasticity. In all cases these limits arise from exceeding the 68 percent confidence limits of ±2.6 days in the observed period. Since slight departures from an equilibrium pole tide affect the Q much more strongly than the period, these limits are believed to be robust.

  12. Well-balanced schemes for the Euler equations with gravitation: Conservative formulation using global fluxes

    NASA Astrophysics Data System (ADS)

    Chertock, Alina; Cui, Shumo; Kurganov, Alexander; Özcan, Şeyma Nur; Tadmor, Eitan

    2018-04-01

    We develop a second-order well-balanced central-upwind scheme for the compressible Euler equations with a gravitational source term. Here, we advocate a new paradigm based on a purely conservative reformulation of the equations using global fluxes. The proposed scheme is capable of exactly preserving steady-state solutions expressed in terms of a nonlocal equilibrium variable. A crucial step in the construction of the second-order scheme is a well-balanced piecewise linear reconstruction of equilibrium variables combined with a well-balanced central-upwind evolution in time, which is adapted to reduce the amount of numerical viscosity when the flow is at or near a steady-state regime. We show the performance of our newly developed central-upwind scheme and demonstrate the importance of a perfect balance between the fluxes and gravitational forces in a series of one- and two-dimensional examples.
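
    A hedged one-dimensional sketch of the global-flux idea (our notation; the paper's construction is more general): define the equilibrium variable

        K(x,t) = p(x,t) + \int_{x_0}^{x} \rho(s,t)\,\phi_x(s)\,ds,

    so that K_x = p_x + \rho\,\phi_x. At hydrostatic steady states (u = 0, p_x = -\rho\,\phi_x) the variable K is constant in space, so reconstructing and evolving K, rather than p, lets a scheme preserve such equilibria exactly.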

  13. MENTAL DEPRESSION AND KUNDALINI YOGA

    PubMed Central

    Devi, Sanjenbam Kunjeshwori; Chansauria, J. P. N.; Udupa, K.N.

    1986-01-01

    In cases of mental depression, the plasma serotonin, melatonin and glutamate levels are increased, along with a lowering of urinary 5-hydroxyindoleacetic acid, plasma monoamine oxidase and cortisol levels, following three and six months' practice of Kundalini Yoga. The pulse rate and blood pressure in these patients are also lowered after Kundalini Yoga practice. Thus, the practice of Kundalini Yoga helps to maintain a perfect homeostasis by bringing an equilibrium between the sympathetic and parasympathetic activities, and it can be used as a non-medical measure in treating patients with mental depression. PMID:22557558

  14. Mental depression and kundalini yoga.

    PubMed

    Devi, S K; Chansauria, J P; Udupa, K N

    1986-10-01

    In cases of mental depression, the plasma serotonin, melatonin and glutamate levels are increased, along with a lowering of urinary 5-hydroxyindoleacetic acid, plasma monoamine oxidase and cortisol levels, following three and six months' practice of Kundalini Yoga. The pulse rate and blood pressure in these patients are also lowered after Kundalini Yoga practice. Thus, the practice of Kundalini Yoga helps to maintain a perfect homeostasis by bringing an equilibrium between the sympathetic and parasympathetic activities, and it can be used as a non-medical measure in treating patients with mental depression.

  15. Dynamic stability of a helicopter with hinged rotor blades

    NASA Technical Reports Server (NTRS)

    Hohenemser, K

    1939-01-01

    The present report is a study of the dynamic stability of a helicopter with hinged rotor blades under hovering conditions. While perfect stability cannot, in general, be obtained in this case, it is possible by means of design features to prolong the period of the spontaneous oscillations of the helicopter and to reduce their amplification, and so approximately assure neutral equilibrium. The possibility of controlled stability of a helicopter fitted with hinged blades is proved by the successful flights of various helicopters, particularly of the Focke FW 61 helicopter.

  16. Special Interests and the Media: Theory and an Application to Climate Change.

    PubMed

    Shapiro, Jesse M

    2016-12-01

    A journalist reports to a voter on an unknown, policy-relevant state. Competing special interests can make claims that contradict the facts but seem credible to the voter. A reputational incentive to avoid taking sides leads the journalist to report special interests' claims to the voter. In equilibrium, the voter can remain uninformed even when the journalist is perfectly informed. Communication is improved if the journalist discloses her partisan leanings. The model provides an account of persistent public ignorance on climate change that is consistent with narrative and quantitative evidence.

  17. Special Interests and the Media

    PubMed Central

    Shapiro, Jesse M.

    2017-01-01

    A journalist reports to a voter on an unknown, policy-relevant state. Competing special interests can make claims that contradict the facts but seem credible to the voter. A reputational incentive to avoid taking sides leads the journalist to report special interests’ claims to the voter. In equilibrium, the voter can remain uninformed even when the journalist is perfectly informed. Communication is improved if the journalist discloses her partisan leanings. The model provides an account of persistent public ignorance on climate change that is consistent with narrative and quantitative evidence. PMID:28725092

  18. Deformation of the free surface of a conducting fluid in the magnetic field of current-carrying linear conductors

    NASA Astrophysics Data System (ADS)

    Zubarev, N. M.; Zubareva, O. V.

    2017-06-01

    The magnetic shaping problem is studied for the situation where a cylindrical column of a perfectly conducting fluid is deformed by the magnetic field of a system of linear current-carrying conductors. Equilibrium is achieved due to the balance of capillary and magnetic pressures. Two two-parametric families of exact solutions of the problem are obtained with the help of conformal mapping technique. In accordance with them, the column essentially deforms in the cross section up to its disintegration.

  19. Mathematical and computational studies of the stability of axisymmetric annular capillary free surfaces

    NASA Technical Reports Server (NTRS)

    Albright, N.; Concus, P.; Karasalo, I.

    1977-01-01

    Of principal interest is the stability of a perfectly wetting liquid in an inverted, vertical, right circular-cylindrical container having a concave spheroidal bottom. The mathematical conditions that the contained liquid be in stable static equilibrium are derived, including those for the limiting case of zero contact angle. Based on these results, a computational investigation is carried out for a particular container that is used for the storage of liquid fuels in NASA Centaur space vehicles, for which the axial ratio of the container bottom is 0.724. It is found that for perfectly wetting liquids the qualitative nature of the onset of instability changes at a critical liquid volume, which for the Centaur fuel tank corresponds to a mean fill level of approximately 0.503 times the tank's radius. Small-amplitude periodic sloshing modes for this tank were calculated; oscillation frequencies or growth rates are given for several Bond numbers and liquid volumes, for normal modes having up to six angular nodes.

  20. An approximate Riemann solver for thermal and chemical nonequilibrium flows

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1994-01-01

    Among the many methods available for the determination of inviscid fluxes across a surface of discontinuity, the flux-difference-splitting technique that employs Roe-averaged variables has been used extensively by the CFD community because of its simplicity and its ability to capture shocks exactly. This method, originally developed for perfect gas flows, has since been extended to equilibrium as well as nonequilibrium flows. Determination of the Roe-averaged variables for the case of a perfect gas flow is a simple task; however, for thermal and chemical nonequilibrium flows, some of the variables are not uniquely defined. Methods available in the literature to determine these variables seem to lack sound bases. The present paper describes a simple, yet accurate, method to determine all the variables for nonequilibrium flows in the Roe-average state. The basis for this method is the requirement that the Roe-averaged variables form a consistent set of thermodynamic variables. The present method satisfies the requirement that the square of the speed of sound be positive.
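
    For the perfect-gas case, the Roe-average state is the standard square-root-of-density weighting; a minimal sketch follows (the paper's contribution is the extension of such averages to thermal and chemical nonequilibrium, which involves further variables not shown here):

        import numpy as np

        def roe_average(rho_L, u_L, H_L, rho_R, u_R, H_R, gamma=1.4):
            """Roe-averaged density, velocity, total enthalpy and sound speed."""
            wL, wR = np.sqrt(rho_L), np.sqrt(rho_R)
            u = (wL * u_L + wR * u_R) / (wL + wR)
            H = (wL * H_L + wR * H_R) / (wL + wR)
            a2 = (gamma - 1.0) * (H - 0.5 * u * u)   # squared sound speed; must be
            return wL * wR, u, H, np.sqrt(a2)        # positive, as the abstract requires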

  1. Deviation from the law of energy equipartition in a small dynamic-random-access memory

    NASA Astrophysics Data System (ADS)

    Carles, Pierre-Alix; Nishiguchi, Katsuhiko; Fujiwara, Akira

    2015-06-01

    A small dynamic-random-access memory (DRAM) coupled with a high-charge-sensitivity electrometer based on a silicon field-effect transistor is used to study the law of equipartition of energy. By statistically analyzing the movement of single electrons in the DRAM at various temperature and voltage conditions in thermal equilibrium, we are able to observe a behavior that differs from what is predicted by the law of equipartition of energy: when the charging energy of the capacitor of the DRAM is comparable to or smaller than the thermal energy kBT/2, random electron motion is ruled perfectly by thermal energy; on the other hand, when the charging energy becomes higher in relation to the thermal energy kBT/2, random electron motion is suppressed, which indicates a deviation from the law of equipartition of energy. Since the law of equipartition is analyzed using the DRAM, one of the most familiar devices, we believe that our results are universal among electronic devices.
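
    The regime boundary discussed above is easy to make concrete by comparing the single-electron charging energy e^2/(2C) with the thermal energy kBT/2. A back-of-the-envelope sketch, with an illustrative (assumed) capacitance value:

    ```python
    # Quick comparison of a capacitor's single-electron charging energy e^2/(2C)
    # with the thermal energy kT/2, the ratio the experiment probes; the
    # capacitance value below is illustrative, not taken from the paper.
    e, kB = 1.602176634e-19, 1.380649e-23   # elementary charge [C], Boltzmann [J/K]
    C = 1e-18          # capacitance in farads (aF scale, typical of tiny nodes)
    T = 300.0          # temperature in kelvin

    E_charge = e**2 / (2.0 * C)
    E_thermal = kB * T / 2.0
    print(f"charging energy : {E_charge / e * 1e3:.1f} meV")
    print(f"thermal energy  : {E_thermal / e * 1e3:.1f} meV")
    print(f"ratio E_c/(kT/2): {E_charge / E_thermal:.1f}")  # >> 1: motion suppressed
    ```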

  2. Equilibrium configurations of perfect fluid orbiting Schwarzschild-de Sitter black holes

    NASA Astrophysics Data System (ADS)

    Stuchlík, Z.; Slaný, P.; Hledík, S.

    2000-11-01

    The hydrodynamical structure of a perfect fluid orbiting Schwarzschild-de Sitter black holes is investigated for configurations with uniform distribution of angular momentum density. It is shown that in the black-hole backgrounds admitting the existence of stable circular geodesics, closed equipotential surfaces with a cusp, allowing the existence of toroidal accretion disks, can exist. Two surfaces with a cusp exist for angular momentum density smaller than the one corresponding to marginally bound circular geodesics; the equipotential surface corresponding to the marginally bound circular orbit has just two cusps. The outer cusp is located near the static radius, where the gravitational attraction is compensated by the cosmological repulsion. Therefore, due to the presence of a repulsive cosmological constant, the outflow from thick accretion disks can be driven by the same mechanism as the accretion onto the black hole. Moreover, properties of open equipotential surfaces in the vicinity of the axis of rotation suggest a strong collimating effect of the repulsive cosmological constant on jets produced by the accretion disks.

  3. Technical note: Evaluation of the simultaneous measurements of mesospheric OH, HO2, and O3 under a photochemical equilibrium assumption - a statistical approach

    NASA Astrophysics Data System (ADS)

    Kulikov, Mikhail Y.; Nechaev, Anton A.; Belikovich, Mikhail V.; Ermakova, Tatiana S.; Feigin, Alexander M.

    2018-05-01

    This Technical Note presents a statistical approach to evaluating simultaneous measurements of several atmospheric components under the assumption of photochemical equilibrium. We consider simultaneous measurements of OH, HO2, and O3 at the altitudes of the mesosphere as a specific example and their daytime photochemical equilibrium as an evaluating relationship. A simplified algebraic equation relating local concentrations of these components in the 50-100 km altitude range has been derived. The parameters of the equation are temperature, neutral density, local zenith angle, and the rates of eight reactions. We have performed a one-year simulation of the mesosphere and lower thermosphere using a 3-D chemical-transport model. The simulation shows that the discrepancy between the calculated evolution of the components and the equilibrium value given by the equation does not exceed 3-4 % over the full range of altitudes, independent of season or latitude. We have developed a statistical Bayesian evaluation technique for simultaneous measurements of OH, HO2, and O3 based on the equilibrium equation, taking into account the measurement error. The first results of the application of the technique to MLS/Aura data (Microwave Limb Sounder) are presented in this Technical Note. It has been found that the satellite data regularly place the mesospheric maximum of the HO2 distribution at lower altitudes. This has also been confirmed by model HO2 distributions and by comparison with offline retrievals of HO2 from daily zonal mean MLS radiances.

  4. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods (objective Bayesian and frequentist likelihood-ratio) is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown to fit accurately a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
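
    A heavily simplified, grid-based illustration of the core idea of multiplicatively combining likelihoods from two independent evidence sources is sketched below. The lognormal shapes and all parameter values are invented for illustration, and the flat prior merely stands in for the noninformative prior the paper actually derives:

    ```python
    # Toy grid illustration of multiplicatively combining two independent
    # likelihoods for climate sensitivity; shapes and parameters are invented.
    import numpy as np
    from scipy import stats

    s = np.linspace(0.1, 10.0, 2000)                    # ECS grid in kelvin
    like_instr = stats.lognorm.pdf(s, 0.5, scale=2.0)   # instrumental-period evidence
    like_paleo = stats.lognorm.pdf(s, 0.4, scale=2.5)   # paleoclimate evidence
    prior = np.ones_like(s)                             # flat stand-in prior

    post = like_instr * like_paleo * prior
    post /= np.trapz(post, s)                           # normalize to a density

    cdf = np.cumsum(post) * (s[1] - s[0])
    for q in (0.05, 0.5, 0.95):
        print(f"{q:.0%} quantile: {np.interp(q, cdf, s):.2f} K")
    ```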

  5. Hierarchical Bayesian analysis of outcome- and process-based social preferences and beliefs in Dictator Games and sequential Prisoner's Dilemmas.

    PubMed

    Aksoy, Ozan; Weesie, Jeroen

    2014-05-01

    In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Bayesian inference for the spatio-temporal invasion of alien species.

    PubMed

    Cook, Alex; Marion, Glenn; Butler, Adam; Gibson, Gavin

    2007-08-01

    In this paper we develop a Bayesian approach to parameter estimation in a stochastic spatio-temporal model of the spread of invasive species across a landscape. To date, statistical techniques, such as logistic and autologistic regression, have outstripped stochastic spatio-temporal models in their ability to handle large numbers of covariates. Here we seek to address this problem by making use of a range of covariates describing the bio-geographical features of the landscape. Relative to regression techniques, stochastic spatio-temporal models are more transparent in their representation of biological processes. They also explicitly model temporal change, and therefore do not require the assumption that the species' distribution (or other spatial pattern) has already reached equilibrium as is often the case with standard statistical approaches. In order to illustrate the use of such techniques we apply them to the analysis of data detailing the spread of an invasive plant, Heracleum mantegazzianum, across Britain in the 20th Century using geo-referenced covariate information describing local temperature, elevation and habitat type. The use of Markov chain Monte Carlo sampling within a Bayesian framework facilitates statistical assessments of differences in the suitability of different habitat classes for H. mantegazzianum, and enables predictions of future spread to account for parametric uncertainty and system variability. Our results show that ignoring such covariate information may lead to biased estimates of key processes and implausible predictions of future distributions.

  7. Bayesian modeling of the mass and density of asteroids

    NASA Astrophysics Data System (ADS)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

    Mass and density are two of the fundamental properties of any object. In the case of near-Earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate the structure of the asteroid. A low density can be indicative of a rubble-pile structure, whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass/density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass-radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the masses and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probabilistically forecasting masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.

  8. Online shaft encoder geometry compensation for arbitrary shaft speed profiles using Bayesian regression

    NASA Astrophysics Data System (ADS)

    Diamond, D. H.; Heyns, P. S.; Oberholster, A. J.

    2016-12-01

    The measurement of instantaneous angular speed is being increasingly investigated for its use in a wide range of condition monitoring and prognostic applications. Central to many measurement techniques are incremental shaft encoders recording the arrival times of shaft angular increments. The conventional approach to processing these signals assumes that the angular increments are equidistant. This assumption is generally incorrect when working with toothed wheels and especially zebra tape encoders and has been shown to introduce errors in the estimated shaft speed. There are some proposed methods in the literature that aim to compensate for this geometric irregularity. Some of the methods require the shaft speed to be perfectly constant for calibration, something rarely achieved in practice. Other methods assume the shaft speed to be nearly constant with minor deviations. Therefore existing methods cannot calibrate the entire shaft encoder geometry for arbitrary shaft speeds. The present article presents a method to calculate the shaft encoder geometry for arbitrary shaft speed profiles. The method uses Bayesian linear regression to calculate the encoder increment distances. The method is derived and then tested against simulated and laboratory experiments. The results indicate that the proposed method is capable of accurately determining the shaft encoder geometry for any shaft speed profile.
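
    The heart of the method is a Bayesian linear regression over the unknown encoder increment distances. The sketch below shows a generic conjugate Bayesian linear regression update of the kind such a method builds on; the encoder-specific design matrix (constructed from increment arrival times and a speed model) is not reproduced here, so the data are synthetic:

    ```python
    # A generic conjugate Bayesian linear regression update; the weights stand
    # in for relative increment sizes, and all data below are synthetic.
    import numpy as np

    def bayes_linreg(X, y, noise_var, prior_mean, prior_cov):
        """Posterior mean and covariance for the weights in y = X w + noise."""
        prior_prec = np.linalg.inv(prior_cov)
        post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
        post_mean = post_cov @ (prior_prec @ prior_mean + X.T @ y / noise_var)
        return post_mean, post_cov

    rng = np.random.default_rng(0)
    w_true = np.array([0.9, 1.1, 1.0])      # e.g. relative increment distances
    X = rng.normal(size=(50, 3))            # stand-in design matrix
    y = X @ w_true + rng.normal(scale=0.05, size=50)

    mean, cov = bayes_linreg(X, y, 0.05**2, np.ones(3), np.eye(3))
    print(mean)  # should be close to w_true
    ```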

  9. Sensitivity and specificity of a hand-held milk electrical conductivity meter compared to the California mastitis test for mastitis in dairy cattle.

    PubMed

    Fosgate, G T; Petzer, I M; Karzis, J

    2013-04-01

    Screening tests for mastitis can play an important role in proactive mastitis control programs. The primary objective of this study was to compare the sensitivity and specificity of milk electrical conductivity (EC) to the California mastitis test (CMT) in commercial dairy cattle in South Africa using Bayesian methods without a perfect reference test. A total of 1848 quarter milk specimens were collected from 173 cows sampled during six sequential farm visits. Of these samples, 25.8% yielded pathogenic bacterial isolates. The most frequently isolated species were coagulase negative Staphylococci (n=346), Streptococcus agalactiae (n=54), and Staphylococcus aureus (n=42). The overall cow-level prevalence of mastitis was 54% based on the Bayesian latent class (BLC) analysis. The CMT was more accurate than EC for classification of cows having somatic cell counts >200,000/mL and for isolation of a bacterial pathogen. BLC analysis also suggested an overall benefit of CMT over EC but the statistical evidence was not strong (P=0.257). The Bayesian model estimated the sensitivity and specificity of EC (measured via resistance) at a cut-point of >25 mΩ/cm to be 89.9% and 86.8%, respectively. The CMT had a sensitivity and specificity of 94.5% and 77.7%, respectively, when evaluated at the weak positive cut-point. EC was useful for identifying milk specimens harbouring pathogens but was not able to differentiate among evaluated bacterial isolates. Screening tests can be used to improve udder health as part of a proactive management plan. Copyright © 2012 Elsevier Ltd. All rights reserved.
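
    A minimal sketch of the latent-class idea is shown below: true prevalence pi enters only through the apparent prevalence pi*Se + (1-pi)*(1-Sp), and Beta priors on sensitivity and specificity are marginalized by Monte Carlo. All counts and prior parameters are invented, not the study's:

    ```python
    # Grid sketch of Bayesian prevalence estimation from an imperfect test,
    # without a perfect reference standard; numbers below are hypothetical.
    import numpy as np
    from scipy import stats

    y, n = 93, 173                      # hypothetical test-positives / cows tested
    pi = np.linspace(0.0, 1.0, 1001)    # prevalence grid
    rng = np.random.default_rng(1)
    se = rng.beta(90, 10, 4000)         # prior draws for sensitivity (~0.90)
    sp = rng.beta(80, 20, 4000)         # prior draws for specificity (~0.80)

    # Apparent prevalence for every (pi, Se, Sp) combination, then marginalize
    # Se and Sp by averaging the binomial likelihood over the prior draws.
    p_app = pi[:, None] * se + (1.0 - pi[:, None]) * (1.0 - sp)
    like = stats.binom.pmf(y, n, p_app).mean(axis=1)
    post = like / np.trapz(like, pi)    # flat prior on prevalence

    print("posterior mean prevalence:", np.trapz(pi * post, pi))
    ```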

  10. Novel health economic evaluation of a vaccination strategy to prevent HPV-related diseases: the BEST study.

    PubMed

    Favato, Giampiero; Baio, Gianluca; Capone, Alessandro; Marcellusi, Andrea; Costa, Silvano; Garganese, Giorgia; Picardo, Mauro; Drummond, Mike; Jonsson, Bengt; Scambia, Giovanni; Zweifel, Peter; Mennini, Francesco S

    2012-12-01

    The development of human papillomavirus (HPV)-related diseases is not understood perfectly and uncertainties associated with commonly utilized probabilistic models must be considered. The study assessed the cost-effectiveness of a quadrivalent-based multicohort HPV vaccination strategy within a Bayesian framework. A full Bayesian multicohort Markov model was used, in which all unknown quantities were associated with suitable probability distributions reflecting the state of currently available knowledge. These distributions were informed by observed data or expert opinion. The model cycle lasted 1 year, whereas the follow-up time horizon was 90 years. Precancerous cervical lesions, cervical cancers, and anogenital warts were considered as outcomes. The base case scenario (2 cohorts of girls aged 12 and 15 y) and other multicohort vaccination strategies (additional cohorts aged 18 and 25 y) were cost-effective, with a discounted cost per quality-adjusted life-year gained that corresponded to €12,013, €13,232, and €15,890 for vaccination programs based on 2, 3, and 4 cohorts, respectively. With multicohort vaccination strategies, the reduction in the number of HPV-related events occurred earlier (range, 3.8-6.4 y) when compared with a single cohort. The analysis of the expected value of information showed that the results of the model were subject to limited uncertainty (cost per patient = €12.6). This methodological approach is designed to incorporate the uncertainty associated with HPV vaccination. Modeling the cost-effectiveness of a multicohort vaccination program with Bayesian statistics confirmed the value for money of quadrivalent-based HPV vaccination. The expected value of information gave the most appropriate and feasible representation of the true value of this program.

  11. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.

  12. Sedimentation dynamics and equilibrium profiles in multicomponent mixtures of colloidal particles.

    PubMed

    Spruijt, E; Biesheuvel, P M

    2014-02-19

    In this paper we give a general theoretical framework that describes the sedimentation of multicomponent mixtures of particles with sizes ranging from molecules to macroscopic bodies. Both equilibrium sedimentation profiles and the dynamic process of settling, or its converse, creaming, are modeled. Equilibrium profiles are found to be in perfect agreement with experiments. Our model reconciles two apparently contradicting points of view about buoyancy, thereby resolving a long-lived paradox about the correct choice of the buoyant density. On the one hand, the buoyancy force follows necessarily from the suspension density, as it relates to the hydrostatic pressure gradient. On the other hand, sedimentation profiles of colloidal suspensions can be calculated directly using the fluid density as apparent buoyant density in colloidal systems in sedimentation-diffusion equilibrium (SDE) as a result of balancing gravitational and thermodynamic forces. Surprisingly, this balance also holds in multicomponent mixtures. This analysis resolves the ongoing debate of the correct choice of buoyant density (fluid or suspension): both approaches can be used in their own domain. We present calculations of equilibrium sedimentation profiles and dynamic sedimentation that show the consequences of these insights. In bidisperse mixtures of colloids, particles with a lower mass density than the homogeneous suspension will first cream and then settle, whereas particles with a suspension-matched mass density form transient, bimodal particle distributions during sedimentation, which disappear when equilibrium is reached. In all these cases, the centers of the distributions of the particles with the lowest mass density of the two, regardless of their actual mass, will be located in equilibrium above the so-called isopycnic point, a natural consequence of their hard-sphere interactions. We include these interactions using the Boublik-Mansoori-Carnahan-Starling-Leland (BMCSL) equation of state. Finally, we demonstrate that our model is not limited to hard spheres, by extending it to charged spherical particles, and to dumbbells, trimers and short chains of connected beads.
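
    In the dilute, ideal limit the equilibrium profile of a single species reduces to a barometric law with the fluid density as the buoyant density, n(z) = n(0) exp(-z/l_g), with gravitational length l_g = kT / ((rho_p - rho_f) V g). The sketch below evaluates this limit for illustrative particle parameters; the paper's full model adds BMCSL hard-sphere interactions and multicomponent coupling, which are omitted here:

    ```python
    # Dilute-limit sedimentation-diffusion equilibrium (barometric) profile for
    # one colloid species; particle parameters are illustrative only.
    import numpy as np

    kB = 1.380649e-23               # J/K
    T = 298.0                       # K
    g = 9.81                        # m/s^2
    radius = 200e-9                 # particle radius, m (assumed)
    rho_p, rho_f = 1050.0, 1000.0   # particle and fluid mass densities, kg/m^3

    v_p = 4.0 / 3.0 * np.pi * radius**3           # particle volume
    buoyant_mass = (rho_p - rho_f) * v_p          # fluid density as buoyant density
    gravitational_length = kB * T / (buoyant_mass * g)

    z = np.linspace(0.0, 5.0 * gravitational_length, 6)
    profile = np.exp(-z / gravitational_length)   # n(z)/n(0)
    print("gravitational length [m]:", gravitational_length)
    print(profile)
    ```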

  13. PEITH(Θ): perfecting experiments with information theory in Python with GPU support.

    PubMed

    Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane

    2018-04-01

    Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content that a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general-purpose Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory to determine which experiments are most informative for estimating all model parameters and/or performing model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
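
    The quantity such frameworks rank experiments by is the expected information gain, i.e. the mutual information between parameters and prospective data. The toy Monte Carlo estimator below illustrates the principle on a one-parameter Gaussian model; it is not the PEITH(Θ) interface, and all modeling choices are ours:

    ```python
    # Toy nested Monte Carlo estimate of expected information gain (EIG) used
    # to rank experiments; an illustration of the principle only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def expected_information_gain(sigma, n_outer=400, n_inner=400):
        """EIG for observing y = theta + noise(sigma), with theta ~ N(0, 1)."""
        theta = rng.normal(size=n_outer)
        y = theta + rng.normal(scale=sigma, size=n_outer)
        log_like = stats.norm.logpdf(y, loc=theta, scale=sigma)
        # Marginal p(y), approximated by averaging over fresh prior draws
        theta_in = rng.normal(size=n_inner)
        log_marg = [np.log(np.mean(stats.norm.pdf(yi, loc=theta_in, scale=sigma)))
                    for yi in y]
        return np.mean(log_like - np.array(log_marg))

    # A lower-noise experiment should carry more information about theta
    for sigma in (0.3, 1.0, 3.0):
        print(f"sigma={sigma}: EIG ~ {expected_information_gain(sigma):.2f} nats")
    ```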

  14. Kidnapping model: an extension of Selten's game.

    PubMed

    Iqbal, Azhar; Masson, Virginie; Abbott, Derek

    2017-12-01

    Selten's game is a kidnapping model where the probability of capturing the kidnapper is independent of whether the hostage has been released or executed. Most often, in view of the elevated sensitivities involved, authorities put greater effort and resources into capturing the kidnapper if the hostage has been executed, in contrast with the case when a ransom is paid to secure the hostage's release. In this paper, we study the asymmetric game when the probability of capturing the kidnapper depends on whether the hostage has been executed or not and find a new uniquely determined perfect equilibrium point in Selten's game.
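
    The flavor of the analysis can be conveyed by backward induction on a toy version of the game's last stage, where the capture probability depends on whether the hostage is executed. All probabilities and payoff values below are hypothetical, chosen only to show the mechanics:

    ```python
    # Backward-induction sketch of an asymmetric kidnapping stage game in the
    # spirit of the paper: P(capture) depends on execution. Numbers are
    # hypothetical; with these values the kidnapper releases in both branches.
    q_release, q_execute = 0.4, 0.8    # P(capture) after release vs. execution
    ransom = 5.0                       # value of a paid ransom to the kidnapper
    punish_r, punish_e = -10.0, -30.0  # kidnapper's payoff if captured

    def kidnapper_payoff(ransom_paid, execute):
        q = q_execute if execute else q_release
        gain = ransom if ransom_paid else 0.0
        loss = punish_e if execute else punish_r
        return gain + q * loss

    def best_kidnapper_move(ransom_paid):
        # The kidnapper moves last: pick the action maximizing expected payoff
        return max([False, True], key=lambda e: kidnapper_payoff(ransom_paid, e))

    for paid in (True, False):
        e = best_kidnapper_move(paid)
        print(f"ransom paid={paid}: kidnapper {'executes' if e else 'releases'},"
              f" expected payoff={kidnapper_payoff(paid, e):.1f}")
    ```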

  15. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides efficient and accurate chemical-kinetics computations and sensitivity analysis for a variety of problems, including those involving nonisothermal conditions. Incorporates mathematical models for a static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave (with boundary-layer correction), and a perfectly stirred reactor. Computations of equilibrium properties performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with the exception of NAMELIST extensions used for input.

  16. Electron transport in all-Heusler Co2CrSi/Cu2CrAl/Co2CrSi device, based on ab-initio NEGF calculations

    NASA Astrophysics Data System (ADS)

    Mikaeilzadeh, L.; Pirgholi, M.; Tavana, A.

    2018-05-01

    Using the ab-initio non-equilibrium Green's function (NEGF) formalism within density functional theory (DFT), we have studied electron transport in the all-Heusler device Co2CrSi/Cu2CrAl/Co2CrSi. Results show that the calculated transmission spectrum is very sensitive to the structural parameters and the interface. We also obtain a range of spacer-layer thicknesses for which the MR effect is optimal. Calculations also show a perfect GMR effect in this device.

  17. Genetic Structure and Diversity of the Endangered Fir Tree of Lebanon (Abies cilicica Carr.): Implications for Conservation

    PubMed Central

    Awad, Lara; Fady, Bruno; Khater, Carla; Roig, Anne; Cheddadi, Rachid

    2014-01-01

    The threatened conifer Abies cilicica currently persists in Lebanon in geographically isolated forest patches. The impact of demographic and evolutionary processes on population genetic diversity and structure was assessed using 10 nuclear microsatellite loci. All 15 remnant local populations revealed low genetic variation but a high recent effective population size. FST-based measures of population genetic differentiation revealed a low spatial genetic structure, but Bayesian analysis of population structure identified a significant Northeast-Southwest population structure. Populations showed significant but weak isolation-by-distance, indicating non-equilibrium conditions between dispersal and genetic drift. Bayesian assignment tests detected an asymmetric Northeast-Southwest migration involving some long-distance dispersal events. We suggest that the persistence and Northeast-Southwest geographic structure of Abies cilicica in Lebanon is the result of at least two demographic processes during its recent evolutionary history: (1) recent migration to currently marginal populations and (2) local persistence through altitudinal shifts along a mountainous topography. These results might help us better understand the mechanisms involved in the species response to expected climate change. PMID:24587219

  18. Particle creation and non-equilibrium thermodynamical prescription of dark fluids for universe bounded by an event horizon

    NASA Astrophysics Data System (ADS)

    Saha, Subhajit; Biswas, Atreyee; Chakraborty, Subenoy

    2015-03-01

    In the present work, the flat FRW model of the universe is considered as an isolated open thermodynamical system in which the non-equilibrium prescription is studied using the mechanism of particle creation. In light of recent observational evidence, the matter distribution in the universe is assumed to be dominated by dark matter and dark energy. The dark matter is chosen as dust, while for dark energy the following choices are considered: (i) a perfect fluid with a constant equation of state and (ii) holographic dark energy. In both cases, the validity of the generalized second law of thermodynamics (GSLT), which states that the total entropy of the fluid together with that of the horizon should not decrease as the universe evolves, is examined graphically for a universe bounded by the event horizon. It is found that the GSLT holds in both cases, with some restrictions on the interaction coupling parameter.

  19. Transport of active ellipsoidal particles in ratchet potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ai, Bao-Quan, E-mail: aibq@scnu.edu.cn; Wu, Jian-Chun

    2014-03-07

    Rectified transport of active ellipsoidal particles is numerically investigated in a two-dimensional asymmetric potential. The out-of-equilibrium condition of the active particle is an intrinsic property, which can break thermodynamic equilibrium and induce directed transport. It is found that a perfectly spherical particle facilitates the rectification, while a needlelike particle destroys the directed transport. There exist optimized values of the parameters (the self-propelled velocity and the torque acting on the body) at which the average velocity takes its maximal value. For an ellipsoidal particle with a small asymmetry parameter, the average velocity decreases with increasing rotational diffusion rate, while for a needlelike particle (very large asymmetry parameter) the average velocity is a peaked function of the rotational diffusion rate. By introducing a finite load, particles with different shapes (or different self-propelled velocities) will move in opposite directions, which makes it possible to separate particles of different shapes (or different self-propelled velocities).

  20. Charged particle layers in the Debye limit.

    PubMed

    Golden, Kenneth I; Kalman, Gabor J; Kyrkos, Stamatios

    2002-09-01

    We develop an equivalent of the Debye-Hückel weakly coupled equilibrium theory for layered classical charged particle systems composed of one single charged species. We consider the two most important configurations, the charged particle bilayer and the infinite superlattice. The approach is based on the link provided by the classical fluctuation-dissipation theorem between the random-phase approximation response functions and the Debye equilibrium pair correlation function. Layer-layer pair correlation functions, screened and polarization potentials, static structure functions, and static response functions are calculated. The importance of the perfect screening and compressibility sum rules in determining the overall behavior of the system, especially in the r → ∞ limit, is emphasized. The similarities and differences between the quasi-two-dimensional bilayer and the quasi-three-dimensional superlattice are highlighted. An unexpected behavior that emerges from the analysis is that the screened potential, the correlations, and the screening charges carried by the individual layers exhibit a marked nonmonotonic dependence on the layer separation.

  1. On axisymmetric resistive MHD equilibria with flow free of Pfirsch-Schlüter diffusion

    NASA Astrophysics Data System (ADS)

    Throumoulopoulos, George N.; Tasso, Henri

    2002-11-01

    The equilibrium of an axisymmetric magnetically confined plasma with anisotropic electrical conductivity and flows parallel to the magnetic field is investigated within the framework of MHD theory by keeping the convective flow term in the momentum equation. It turns out that the stationary states are determined by a second-order partial differential equation for the poloidal magnetic flux function along with a Bernoulli equation for the density, identical in form with the respective ideal MHD equations; equilibrium-consistent expressions for the conductivities σ_∥ and σ_⊥, parallel and perpendicular to the magnetic field, are also derived from Ohm's and Faraday's laws. Unlike in the case of stationary states with isotropic conductivity and parallel flows (see [1]), the equilibrium is compatible with non-vanishing poloidal currents. For incompressible flows, exact solutions of the above-mentioned set of equations can be constructed with σ_∥ and σ_⊥ profiles compatible with collisional conductivity profiles, i.e., profiles peaked close to the magnetic axis, vanishing on the boundary, and such that σ_∥ > σ_⊥. In particular, an exact equilibrium describing a toroidal plasma of arbitrary aspect ratio contained within a perfectly conducting boundary of rectangular cross-section, with a peaked toroidal current density profile vanishing on the boundary, is further considered. For this equilibrium, in the case of vanishing flows, the difference σ_∥ - σ_⊥ for the reversed-field-pinch scaling B_p ≈ B_t (where B_p and B_t are the poloidal and toroidal magnetic field components) is nearly two times larger than that for the tokamak scaling B_p ≈ 0.1 B_t. [1] G. N. Throumoulopoulos, H. Tasso, J. Plasma Physics 64, 601 (2000).

  2. The anisotropic tunneling behavior of spin transport in graphene-based magnetic tunneling junction

    NASA Astrophysics Data System (ADS)

    Pan, Mengchun; Li, Peisen; Qiu, Weicheng; Zhao, Jianqiang; Peng, Junping; Hu, Jiafei; Hu, Jinghua; Tian, Wugang; Hu, Yueguo; Chen, Dixiang; Wu, Xuezhong; Xu, Zhongjie; Yuan, Xuefeng

    2018-05-01

    Due to the theoretical prediction of large tunneling magnetoresistance (TMR), graphene-based magnetic tunneling junctions (MTJs) have become an important branch of high-performance spintronic devices. In this paper, the non-collinear spin filtering and transport properties of an MTJ with the Ni/tri-layer graphene/Ni structure were studied in detail by utilizing the non-equilibrium Green's function formalism combined with spin-polarized density functional theory. The band structure of the Ni-C bonding interface shows that Ni-C atomic hybridization facilitates the electronic-structure consistency of graphene and nickel, which results in a perfect spin filtering effect for the tri-layer graphene-based MTJ. Furthermore, our theoretical results show that the tunneling resistance changes with the relative magnetization angle of the two ferromagnetic layers, displaying the anisotropic tunneling behavior of the graphene-based MTJ. This originates from the resonant conduction states, which are strongly adjusted by the relative magnetization angles. In addition, the perfect spin filtering effect is demonstrated by fitting the anisotropic conductance with Julliere's model. Our work may serve as guidance for research on and applications of graphene-based spintronic devices.

  3. The electronic transport properties of defected bilayer sliding armchair graphene nanoribbons

    NASA Astrophysics Data System (ADS)

    Mohammadi, Amin; Haji-Nasiri, Saeed

    2018-04-01

    By applying the non-equilibrium Green's function (NEGF) method in combination with a tight-binding (TB) model, we investigate and compare the electronic transport properties of perfect and defected bilayer armchair graphene nanoribbons (BAGNRs) under finite bias. Two typical defects placed in the middle of the top layer, namely single-vacancy (SV) and Stone-Wales (SW) defects, are examined. The results reveal that in both perfect and defected bilayers the maximum current corresponds to the β-AB, AA, and α-AB stacking orders, in that order, since the intermolecular interactions are stronger in them. Moreover, it is observed that an SV decreases the current in all stacking orders, but the effects of an SW defect are nearly unpredictable. In addition, we introduce a sequential switching behavior, and the effects of defects on the switching performance are studied as well. We find that an SW defect can significantly improve the switching behavior of a bilayer system. The transmission spectrum, band structure, molecular energy spectrum, and molecular projected self-consistent Hamiltonian (MPSH) are subsequently analyzed to understand the electronic transport properties of these bilayer devices, which can be used in developing nano-scale bilayer systems.

  4. Bridging the gap between atomic microstructure and electronic properties of alloys: The case of (In,Ga)N

    NASA Astrophysics Data System (ADS)

    Chan, J. A.; Liu, J. Z.; Zunger, Alex

    2010-07-01

    The atomic microstructure of alloys is rarely perfectly random, instead exhibiting differently shaped precipitates, clusters, zigzag chains, etc. While it is expected that such microstructural features will affect the electronic structures (carrier localization and band gaps), theoretical studies have, until now, been restricted to investigating either perfectly random or artificial "guessed" microstructural features. In this paper, we simulate the alloy microstructures in thermodynamic equilibrium using the static Monte Carlo method and study their electronic structures explicitly using a pseudopotential supercell approach. In this way, we can bridge atomic microstructures with their electronic properties. We derive the atomic microstructures of InGaN using (i) density-functional theory total energies of ~50 ordered structures to construct (ii) a multibody cluster expansion including strain effects, to which we have applied (iii) static Monte Carlo simulations of systems consisting of over 27,000 atoms to determine the equilibrium atomic microstructures. We study two types of alloy thermodynamic behavior: (a) under lattice-incoherent conditions, the formation enthalpies are positive and thus the alloy system phase-separates below the miscibility-gap temperature T_MG; (b) under lattice-coherent conditions, the formation enthalpies can be negative and thus the alloy system exhibits an ordering tendency. The microstructure is analyzed in terms of structural motifs (e.g., zigzag chains and In_nGa_{4-n}N tetrahedral clusters). The corresponding electronic structure, calculated with the empirical pseudopotential method, is analyzed in terms of band-edge energies and wave-function localization. We find that the disordered alloys have no electron localization but significant hole localization, while below the miscibility gap under the incoherent conditions, In-rich precipitates lead to strong electron and hole localization and a reduction in the band gap.
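
    Step (iii) is conceptually a composition-conserving Metropolis Monte Carlo walk over lattice configurations. The toy sketch below runs such a swap Monte Carlo on a 2-D binary lattice with a single nearest-neighbour pair interaction, a stand-in for the paper's multibody cluster-expansion Hamiltonian; lattice size, coupling, and temperature are illustrative:

    ```python
    # Minimal canonical (atom-swap) Metropolis Monte Carlo for a 2-D binary
    # alloy with one nearest-neighbour pair interaction; all values are toy.
    import numpy as np

    rng = np.random.default_rng(5)
    L, J, kT, steps = 32, -0.02, 0.025, 20000   # lattice size, coupling and
    spins = rng.choice([-1, 1], size=(L, L))    # temperature in eV (assumed);
                                                # -1/+1 label the two cations
    def site_energy(s, i, j):
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        return J * s[i, j] * nn

    for _ in range(steps):
        # Swap two random sites: a composition-conserving move
        (i1, j1), (i2, j2) = rng.integers(0, L, size=(2, 2))
        if spins[i1, j1] == spins[i2, j2]:
            continue
        e_old = site_energy(spins, i1, j1) + site_energy(spins, i2, j2)
        spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
        e_new = site_energy(spins, i1, j1) + site_energy(spins, i2, j2)
        if rng.random() >= np.exp(-(e_new - e_old) / kT):    # Metropolis test
            spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]  # reject

    # Fraction of like-species vertical bonds: > 0.5 indicates clustering
    print("like-bond fraction:", np.mean(spins * np.roll(spins, 1, axis=0) > 0))
    ```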

  5. Laminar and turbulent heating predictions for mars entry vehicles

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Yan, Chao; Zheng, Weilin; Zhong, Kang; Geng, Yunfei

    2016-11-01

    Laminar and turbulent heating rates play an important role in the design of Mars entry vehicles. Two distinct gas models, a thermochemical non-equilibrium (real gas) model and a perfect gas model with a specified effective specific heat ratio, are utilized to investigate the aerothermodynamics of the Mars entry vehicle named Mars Science Laboratory (MSL). The Menter shear stress transport (SST) turbulence model with compressibility correction is implemented to account for turbulent effects. The laminar and turbulent heating rates of the two gas models are compared and analyzed in detail. The laminar heating rates predicted by the two gas models are nearly the same at the forebody of the vehicle, while the turbulent heating environments predicted by the real gas model are more severe than those of the perfect gas model. The difference in specific heat ratio between the two gas models not only induces discrepancies in the flow structure but also markedly increases the heating rates at the afterbody of the vehicle. Simple correlations for turbulent heating augmentation in terms of laminar momentum-thickness Reynolds number, which can be employed as engineering-level design and analysis tools, are also developed from the numerical results. At the time of peak heat flux on the +3σ heat load trajectory, the maximum value of the momentum-thickness Reynolds number at the MSL's forebody is about 500, and the maximum value of the turbulent augmentation factor (turbulent heating rate divided by laminar heating rate) is 5 for the perfect gas model and 8 for the real gas model.

  6. Bayesian Classification Models for Premature Ventricular Contraction Detection on ECG Traces.

    PubMed

    Casas, Manuel M; Avitia, Roberto L; Gonzalez-Navarro, Felix F; Cardenas-Haro, Jose A; Reyna, Marco A

    2018-01-01

    According to the American Heart Association, in its latest commission about Ventricular Arrhythmias and Sudden Death 2006, the epidemiology of ventricular arrhythmias spans a series of risk descriptors and clinical markers that range from ventricular premature complexes and nonsustained ventricular tachycardia to sudden cardiac death due to ventricular tachycardia in patients with or without clinical history. Premature ventricular complexes (PVCs) are known to be associated with malignant ventricular arrhythmias and sudden cardiac death (SCD) cases. Detecting this kind of arrhythmia has been crucial in clinical applications. The electrocardiogram (ECG) is a clinical test used to measure the heart's electrical activity for inference and diagnosis. Analyzing large ECG traces comprising several thousands of beats has created the need for mathematical models that can automatically draw conclusions about the heart's condition. In this work, 80 different features were extracted from 108,653 classified ECG beats of the gold-standard MIT-BIH database in order to classify Normal, PVC, and other kinds of ECG beats. Three well-known Bayesian classification algorithms were trained and tested using these extracted features. Experimental results show that the F1 scores for each class were above 0.95, with a nearly perfect value for the PVC class. This provides a promising path toward automated mechanisms for the detection of PVC complexes.
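
    A minimal sketch of this pipeline with one of the standard Bayesian classifiers (Gaussian naive Bayes) is shown below; synthetic Gaussian features stand in for the 80 MIT-BIH-derived features, so the scores are illustrative only:

    ```python
    # Gaussian naive Bayes on synthetic stand-ins for beat features;
    # classes 0/1/2 mimic Normal/PVC/Other. Scores are illustrative.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(3)
    # Three synthetic classes, each with 80 features around a different mean
    X = np.vstack([rng.normal(m, 1.0, size=(500, 80)) for m in (0.0, 1.5, -1.5)])
    y = np.repeat([0, 1, 2], 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GaussianNB().fit(X_tr, y_tr)
    print(f1_score(y_te, clf.predict(X_te), average=None))  # per-class F1 scores
    ```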

  7. Stable Ordering in Langmuir-Blodgett Films

    NASA Astrophysics Data System (ADS)

    Takamoto, Dawn Y.; Aydil, Eray; Zasadzinski, Joseph A.; Ivanova, Ani T.; Schwartz, Daniel K.; Yang, Tinglu; Cremer, Paul S.

    2001-08-01

    Defects in the layering of Langmuir-Blodgett (LB) films can be eliminated by depositing from the appropriate monolayer phase at the air-water interface. LB films deposited from the hexagonal phase of cadmium arachidate (CdA2) at pH 7 spontaneously transform into the bulk soap structure, a centrosymmetric bilayer with an orthorhombic herringbone packing. A large wavelength folding mechanism accelerates the conversion between the two structures, leading to a disruption of the desired layering. At pH > 8.5, though it is more difficult to draw LB films, almost perfect layering is obtained due to the inability to convert from the as-deposited structure to the equilibrium one.

  8. Relativistic fluid dynamics with spin

    NASA Astrophysics Data System (ADS)

    Florkowski, Wojciech; Friman, Bengt; Jaiswal, Amaresh; Speranza, Enrico

    2018-04-01

    Using the conservation laws for charge, energy, momentum, and angular momentum, we derive hydrodynamic equations for the charge density, local temperature, and fluid velocity, as well as for the polarization tensor, starting from local equilibrium distribution functions for particles and antiparticles with spin 1/2. The resulting set of differential equations extends the standard picture of perfect-fluid hydrodynamics with a conserved entropy current in a minimal way. This framework can be used in space-time analyses of the evolution of spin and polarization in various physical systems including high-energy nuclear collisions. We demonstrate that a stationary vortex, which exhibits vorticity-spin alignment, corresponds to a special solution of the spin-hydrodynamical equations.

  9. Highly efficient spin polarizer based on individual heterometallic cubane single-molecule magnets

    NASA Astrophysics Data System (ADS)

    Dong, Damin

    2015-09-01

    The spin-polarized transport across a single-molecule magnet [Mn3Zn(hmp)3O(N3)3(C3H5O2)3].2CHCl3 has been investigated using density functional theory combined with the Keldysh non-equilibrium Green's function formalism. It is shown that this single-molecule magnet exhibits perfect spin-filter behaviour. By adsorbing a Ni3 cluster onto a non-magnetic Au electrode, a large magnetoresistance exceeding 172% is found, displaying a molecular spin-valve feature. Due to tunneling via discrete quantum-mechanical states, the I-V curve has a stepwise character and shows negative differential resistance behaviour.

  10. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrate markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810

  11. A survey of eight hot Jupiters in secondary eclipse using WIRCam at CFHT

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Colón, Knicole D.; Angerhausen, Daniel; Stassun, Keivan G.; Rodriguez, Joseph E.; Zhou, George; Gaudi, B. Scott; Pepper, Joshua; Beatty, Thomas G.; Tata, Ramarao; James, David J.; Eastman, Jason D.; Wilson, Paul Anthony; Bayliss, Daniel; Stevens, Daniel J.

    2018-03-01

    We present near-infrared high-precision photometry for eight transiting hot Jupiters observed during their predicted secondary eclipses. Our observations were carried out using the staring mode of the WIRCam instrument on the Canada-France-Hawaii Telescope (CFHT). We present the observing strategies and data reduction methods which delivered time series photometry with statistical photometric precision as low as 0.11 per cent. We performed a Bayesian analysis to model the eclipse parameters and systematics simultaneously. The measured planet-to-star flux ratios allowed us to constrain the thermal emission from the day side of these hot Jupiters, as we derived the planet brightness temperatures. Our results combined with previously observed eclipses reveal an excess in the brightness temperatures relative to the blackbody prediction for the equilibrium temperatures of the planets for a wide range of heat redistribution factors. We find a trend that this excess appears to be larger for planets with lower equilibrium temperatures. This may imply some additional sources of radiation, such as reflected light from the host star and/or thermal emission from residual internal heat from the formation of the planet.
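
    Converting a measured eclipse depth into a day-side brightness temperature amounts to inverting the Planck function in the flux-ratio relation depth = (Rp/Rs)^2 * B(T_p)/B(T_s). The sketch below does this under a blackbody assumption for illustrative (assumed) system parameters:

    ```python
    # Inverting an eclipse depth into a day-side brightness temperature under a
    # blackbody assumption; all system parameters below are illustrative.
    import numpy as np
    from scipy.optimize import brentq

    h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

    def planck(lam, T):
        """Spectral radiance B_lambda(T)."""
        return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

    lam = 2.15e-6       # Ks-band wavelength in metres (WIRCam-like)
    T_star = 6000.0     # stellar effective temperature, K (assumed)
    ratio = 0.1         # planet-to-star radius ratio Rp/Rs (assumed)
    depth = 1.0e-3      # measured eclipse depth, i.e. planet/star flux ratio

    # depth = (Rp/Rs)^2 * B(T_p)/B(T_star)  ->  solve for T_p
    f = lambda T: ratio**2 * planck(lam, T) / planck(lam, T_star) - depth
    T_bright = brentq(f, 300.0, 5000.0)
    print(f"brightness temperature ~ {T_bright:.0f} K")
    ```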

  12. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion procedure. In each case, the developed model-error approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
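
    The projection step can be sketched compactly: gather the K nearest dictionary entries in parameter space, build an orthonormal basis from their stored model-error vectors, and project that component out of the current residual. Dimensions and data below are toy stand-ins, not the paper's GPR setup:

    ```python
    # Sketch of the local model-error basis idea: K-nearest dictionary entries
    # supply error vectors, QR gives an orthonormal basis, and the projection
    # onto that basis is removed from the residual. All data are toy.
    import numpy as np

    rng = np.random.default_rng(4)
    n_dict, n_params, n_data, K = 200, 5, 40, 8

    dict_params = rng.normal(size=(n_dict, n_params))   # stored parameter sets
    dict_errors = rng.normal(size=(n_dict, n_data))     # detailed-minus-approximate
                                                        # model-error vectors
    def remove_model_error(theta, residual):
        # K nearest neighbours of the proposed parameters in the dictionary
        idx = np.argsort(np.linalg.norm(dict_params - theta, axis=1))[:K]
        Q, _ = np.linalg.qr(dict_errors[idx].T)         # orthonormal error basis
        model_error_est = Q @ (Q.T @ residual)          # projection onto basis
        return residual - model_error_est

    theta = rng.normal(size=n_params)
    residual = rng.normal(size=n_data)
    print(np.linalg.norm(residual),
          np.linalg.norm(remove_model_error(theta, residual)))
    ```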

  13. Late-time behaviour of the tilted Bianchi type VIh models

    NASA Astrophysics Data System (ADS)

    Hervik, S.; van den Hoogen, R. J.; Lim, W. C.; Coley, A. A.

    2007-08-01

    We study tilted perfect fluid cosmological models with a constant equation of state parameter in spatially homogeneous models of Bianchi type VIh using dynamical systems methods and numerical experimentation, with an emphasis on their future asymptotic evolution. We determine all of the equilibrium points of the type VIh state space (which correspond to exact self-similar solutions of the Einstein equations, some of which are new), and their stability is investigated. We find that there are vacuum plane-wave solutions that act as future attractors. In the parameter space, a 'loophole' is shown to exist in which there are no stable equilibrium points. We then show that a Hopf-bifurcation can occur resulting in a stable closed orbit (which we refer to as the Mussel attractor) corresponding to points both inside the loophole and points just outside the loophole; in the former case the closed curves act as late-time attractors while in the latter case these attracting curves will co-exist with attracting equilibrium points. In the special Bianchi type III case, centre manifold theory is required to determine the future attractors. Comprehensive numerical experiments are carried out to complement and confirm the analytical results presented. We note that the Bianchi type VIh case is of particular interest in that it contains many different subcases which exhibit many of the different possible future asymptotic behaviours of Bianchi cosmological models.

  14. A dynamical systems approach to the tilted Bianchi models of solvable type

    NASA Astrophysics Data System (ADS)

    Coley, Alan; Hervik, Sigbjørn

    2005-02-01

    We use a dynamical systems approach to analyse the tilting spatially homogeneous Bianchi models of solvable type (e.g., types VIh and VIIh) with a perfect fluid and a linear barotropic γ-law equation of state. In particular, we study the late-time behaviour of tilted Bianchi models, with an emphasis on the existence of equilibrium points and their stability properties. We briefly discuss the tilting Bianchi type V models and the late-time asymptotic behaviour of irrotational Bianchi type VII0 models. We prove the important result that for non-inflationary Bianchi type VIIh models vacuum plane-wave solutions are the only future attracting equilibrium points in the Bianchi type VIIh invariant set. We then investigate the dynamics close to the plane-wave solutions in more detail, and discover some new features that arise in the dynamical behaviour of Bianchi cosmologies with the inclusion of tilt. We point out that in a tiny open set of parameter space in the type IV model (the loophole) there exist closed curves which act as attracting limit cycles. More interestingly, in the Bianchi type VIIh models there is a bifurcation in which a set of equilibrium points turns into closed orbits. There is a region in which both sets of closed curves coexist, and it appears that for the type VIIh models in this region the solution curves approach a compact surface which is topologically a torus.

  15. Distributed and cooperative task processing: Cournot oligopolies on a graph.

    PubMed

    Pavlic, Theodore P; Passino, Kevin M

    2014-06-01

    This paper introduces a novel framework for the design of distributed agents that must complete externally generated tasks but also can volunteer to process tasks encountered by other agents. To reduce the computational and communication burden of coordination between agents to perfectly balance load around the network, the agents adjust their volunteering propensity asynchronously within a fictitious trading economy. This economy provides incentives for nontrivial levels of volunteering for remote tasks, and thus load is shared. Moreover, the combined effects of diminishing marginal returns and network topology lead to competitive equilibria that have task reallocations that are qualitatively similar to what is expected in a load-balancing system with explicit coordination between nodes. In the paper, topological and algorithmic conditions are given that ensure the existence and uniqueness of a competitive equilibrium. Additionally, a decentralized distributed gradient-ascent algorithm is given that is guaranteed to converge to this equilibrium while not causing any node to over-volunteer beyond its maximum task-processing rate. The framework is applied to an autonomous-air-vehicle example, and connections are drawn to classic studies of the evolution of cooperation in nature.

  16. Managing a Common Pool Resource: Real Time Decision-Making in a Groundwater Aquifer

    NASA Astrophysics Data System (ADS)

    Sahu, R.; McLaughlin, D.

    2017-12-01

    In a Common Pool Resource (CPR) such as a groundwater aquifer, multiple landowners (agents) compete for a limited water resource. Landowners pump out the water to grow their own crops. Such problems can be posed as differential games, with all agents trying to control the behavior of the shared dynamic system. Each agent aims to maximize his/her own objective, such as agricultural yield, being aware that the actions of all other agents collectively influence the behavior of the shared aquifer. The agents therefore choose a subgame perfect Nash equilibrium strategy that derives an optimal action for each agent based on the current state of the aquifer and assumes perfect information about every other agent's objective function. Furthermore, using an Iterated Best Response approach and interpolating techniques, an optimal pumping strategy can be computed for a more realistic description of the groundwater model under certain assumptions. The numerical implementation of dynamic optimization techniques for a relevant description of the physical system yields results qualitatively different from the previous solutions obtained from simple abstractions. This work aims to bridge the gap between extensive modeling approaches in hydrology and competitive solution strategies in differential game theory.
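
    The iterated-best-response idea is easiest to see in a static, two-agent stand-in for the dynamic game, where each farmer's payoff is a*q_i - (b/2)*q_i^2 - c*q_i*(q_i + q_j) and pumping cost rises with total extraction. All parameter values are illustrative:

    ```python
    # Iterated best response for a static two-farmer groundwater pumping game,
    # a stripped-down stand-in for the dynamic model; parameters are invented.
    a, b, c = 10.0, 1.0, 0.5   # marginal benefit, curvature, drawdown cost

    def best_response(q_other):
        # argmax_q of a*q - (b/2)*q^2 - c*q*(q + q_other),
        # from the first-order condition a - b*q - 2*c*q - c*q_other = 0
        return max(0.0, (a - c * q_other) / (b + 2.0 * c))

    q1 = q2 = 0.0
    for _ in range(50):                       # simultaneous best-response updates
        q1, q2 = best_response(q2), best_response(q1)

    print(f"Nash pumping rates: q1={q1:.3f}, q2={q2:.3f}")
    print("analytic symmetric equilibrium q* =", a / (b + 3.0 * c))
    ```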

  17. Perfect fluid tori orbiting Kehagias-Sfetsos naked singularities

    NASA Astrophysics Data System (ADS)

    Stuchlík, Z.; Pugliese, D.; Schee, J.; Kučáková, H.

    2015-09-01

We construct perfect fluid tori in the field of Kehagias-Sfetsos (K-S) naked singularities. These are spherically symmetric vacuum solutions of the modified Hořava quantum gravity, characterized by a dimensionless parameter ω M^2 combining the gravitational mass parameter M of the spacetime with the Hořava parameter ω, which reflects the role of the quantum corrections. Depending on the value of ω M^2, the K-S naked singularities demonstrate a variety of qualitatively different behaviors of their circular geodesics that is fully reflected in the properties of the toroidal structures, demonstrating a clear distinction from the properties of the tori in the Schwarzschild spacetimes. In all of the K-S naked singularity spacetimes the tori are located above an "antigravity" sphere where matter can stay in a stable equilibrium position, which is relevant for the stability of the orbiting fluid toroidal accretion structures. The signature of the K-S naked singularity is given by the properties of marginally stable tori orbiting with a uniform distribution of the specific angular momentum of the fluid, l = const. In the K-S naked singularity spacetimes with ω M^2 > 0.2811, doubled tori with the same l = const can exist; mass transfer between the outer torus and the inner one is possible under appropriate conditions, while only outflow to the outer space is allowed under complementary conditions. In the K-S spacetimes with ω M^2 < 0.2811, accretion from cusped perfect fluid tori is not possible due to the non-existence of unstable circular geodesics.

  18. EASI - EQUILIBRIUM AIR SHOCK INTERFERENCE

    NASA Technical Reports Server (NTRS)

    Glass, C. E.

    1994-01-01

New research on hypersonic vehicles, such as the National Aero-Space Plane (NASP), has raised concerns about the effects of shock-wave interference on various structural components of the craft. State-of-the-art aerothermal analysis software is inadequate to predict local flow and heat flux in areas of extremely high heat transfer, such as the surface impingement of an Edney-type supersonic jet. EASI revives and updates older computational methods for calculating inviscid flow field and maximum heating from shock wave interference. The program expands these methods to solve problems involving the six shock-wave interference patterns on a two-dimensional cylindrical leading edge with an equilibrium chemically reacting gas mixture (representing, for example, the scramjet cowl of the NASP). The inclusion of gas chemistry allows for a more accurate prediction of the maximum pressure and heating loads by accounting for the effects of high temperature on the air mixture. Caloric imperfections and species dissociation of high-temperature air cause shock-wave angles, flow deflection angles, and thermodynamic properties to differ from those calculated by a calorically perfect gas model. EASI contains pressure- and temperature-dependent thermodynamic and transport properties to determine heating rates, and uses either a calorically perfect air model or an 11-species, 7-reaction reacting air model at equilibrium with temperatures up to 15,000 K for the inviscid flowfield calculations. EASI solves the flow field and the associated maximum surface pressure and heat flux for the six common types of shock wave interference. Depending on the type of interference, the program solves for shock-wave/boundary-layer interaction, expansion-fan/boundary-layer interaction, attaching shear layer or supersonic jet impingement. Heat flux predictions require knowledge (from experimental data or relevant calculations) of a pertinent length scale of the interaction. Output files contain flow-field information for the various shock-wave interference patterns and their associated maximum surface pressure and heat flux predictions. EASI is written in FORTRAN 77 for a DEC VAX 8500 series computer using the VAX/VMS operating system, and requires 75K of memory. The program is available on a 9-track 1600 BPI magnetic tape in DEC VAX BACKUP format. EASI was developed in 1989. DEC, VAX, and VMS are registered trademarks of the Digital Equipment Corporation.

  19. Existence of three-dimensional ideal-magnetohydrodynamic equilibria with current sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loizu, J.; Princeton Plasma Physics Laboratory, PO Box 451, Princeton, New Jersey 08543; Hudson, S. R.

    2015-09-15

We consider the linear and nonlinear ideal plasma response to a boundary perturbation in a screw pinch. We demonstrate that three-dimensional, ideal-MHD equilibria with continuously nested flux-surfaces and with discontinuous rotational-transform across the resonant rational-surfaces are well defined and can be computed both perturbatively and using fully nonlinear equilibrium calculations. This rescues the possibility of constructing MHD equilibria with current sheets and continuous, smooth pressure profiles. The results predict that, even if the plasma acts as a perfectly conducting fluid, a resonant magnetic perturbation can penetrate all the way into the center of a tokamak without being shielded at the resonant surface.

  20. A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    White, J. A.; Morrison, J. H.

    1999-01-01

A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized form of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximate storage, full multi-grid scheme is also described which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high aspect ratio grids.

  1. Testing and selection of cosmological models with (1+z)^6 corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szydlowski, Marek; Marc Kac Complex Systems Research Centre, Jagiellonian University, ul. Reymonta 4, 30-059 Cracow; Godlowski, Wlodzimierz

    2008-02-15

In the paper we check whether the contribution of (-)(1+z)^6 type in the Friedmann equation can be tested. We consider some astronomical tests to constrain the density parameters in such models. We describe different interpretations of such an additional term: geometric effects of loop quantum cosmology, effects of braneworld cosmological models, nonstandard cosmological models in metric-affine gravity, and models with spinning fluid. Kinematical (or geometrical) tests based on null geodesics are insufficient to separate individual matter components when they behave like perfect fluid and scale in the same way. Still, it is possible to measure their overall effect. We use recent measurements of the coordinate distances from the Fanaroff-Riley type IIb radio galaxy data, supernovae type Ia data, the baryon oscillation peak and cosmic microwave background radiation observations to obtain stronger bounds for the contribution of the type considered. We demonstrate that, while ρ^2 corrections are very small, they can be tested by astronomical observations--at least in principle. Bayesian criteria of model selection (the Bayes factor, AIC, and BIC) are used to check if additional parameters are detectable in the present epoch. As it turns out, the ΛCDM model is favored over the bouncing model driven by loop quantum effects. Or, in other words, the bounds obtained from cosmography are very weak, and from the point of view of the present data this model is indistinguishable from the ΛCDM one.
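    As a worked illustration of the kind of model being compared, the sketch below writes the modified expansion rate with the extra (1+z)^6 term and the two information criteria; all numbers passed in are placeholders, not fitted values from the paper.

      import numpy as np

      def E2(z, Om=0.3, Or=8e-5, Od=1e-9):
          # E^2(z) = H^2/H0^2 with an extra (1+z)^6 density term Od; flatness fixes OL
          OL = 1.0 - Om - Or - Od
          return Om*(1+z)**3 + Or*(1+z)**4 + Od*(1+z)**6 + OL

      def aic(lnL_max, k):                 # Akaike information criterion
          return -2.0 * lnL_max + 2.0 * k

      def bic(lnL_max, k, n):              # Bayesian information criterion
          return -2.0 * lnL_max + k * np.log(n)

      # e.g. LCDM with k parameters vs. the same fit with the extra term (k+1):
      print(E2(2.0), aic(-120.3, 2), bic(-120.3, 2, 557))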

  2. Three sympatric clusters of the malaria vector Anopheles culicifacies E (Diptera: Culicidae) detected in Sri Lanka.

    PubMed

    Harischandra, Iresha Nilmini; Dassanayake, Ranil Samantha; De Silva, Bambaranda Gammacharige Don Nissanka Kolitha

    2016-01-04

    The disease re-emergence threat from the major malaria vector in Sri Lanka, Anopheles culicifacies, is currently increasing. To predict malaria vector dynamics, knowledge of population genetics and gene flow is required, but this information is unavailable for Sri Lanka. This study was carried out to determine the population structure of An. culicifacies E in Sri Lanka. Eight microsatellite markers were used to examine An. culicifacies E collected from six sites in Sri Lanka during 2010-2012. Standard population genetic tests and analyses, genetic differentiation, Hardy-Weinberg equilibrium, linkage disequilibrium, Bayesian cluster analysis, AMOVA, SAMOVA and isolation-by-distance were conducted using five polymorphic loci. Five microsatellite loci were highly polymorphic with high allelic richness. Hardy-Weinberg Equilibrium (HWE) was significantly rejected for four loci with positive F(IS) values in the pooled population (p < 0.0100). Three loci showed high deviations in all sites except Kataragama, which was in agreement with HWE for all loci except one locus (p < 0.0016). Observed heterozygosity was less than the expected values for all sites except Kataragama, where reported negative F(IS) values indicated a heterozygosity excess. Genetic differentiation was observed for all sampling site pairs and was not supported by the isolation by distance model. Bayesian clustering analysis identified the presence of three sympatric clusters (gene pools) in the studied population. Significant genetic differentiation was detected in cluster pairs with low gene flow and isolation by distance was not detected between clusters. Furthermore, the results suggested the presence of a barrier to gene flow that divided the populations into two parts with the central hill region of Sri Lanka as the dividing line. Three sympatric clusters were detected among An. culicifacies E specimens isolated in Sri Lanka. There was no effect of geographic distance on genetic differentiation and the central mountain ranges in Sri Lanka appeared to be a barrier to gene flow.

  3. HD 209458b in new light: evidence of nitrogen chemistry, patchy clouds and sub-solar water

    NASA Astrophysics Data System (ADS)

    MacDonald, Ryan J.; Madhusudhan, Nikku

    2017-08-01

Interpretations of exoplanetary transmission spectra have been undermined by apparent obscuration due to clouds/hazes. Debate rages on whether weak H2O features seen in exoplanet spectra are due to clouds or inherently depleted oxygen. Assertions of solar H2O abundances have relied on making a priori model assumptions, for example, chemical/radiative equilibrium. In this work, we attempt to address this problem with a new retrieval paradigm for transmission spectra. We introduce poseidon, a two-dimensional atmospheric retrieval algorithm including generalized inhomogeneous clouds. We demonstrate that this prescription allows one to break vital degeneracies between clouds and prominent molecular abundances. We apply poseidon to the best transmission spectrum presently available, for the hot Jupiter HD 209458b, uncovering new insights into its atmosphere at the day-night terminator. We extensively explore the parameter space with an unprecedented 10^8 models, spanning the continuum from fully cloudy to cloud-free atmospheres, in a fully Bayesian retrieval framework. We report the first detection of nitrogen chemistry (NH3 and/or HCN) in an exoplanet atmosphere at 3.7-7.7σ confidence, non-uniform cloud coverage at 4.5-5.4σ, high-altitude hazes at >3σ and sub-solar H2O at ≳3-5σ, depending on the assumed cloud distribution. We detect NH3 at 3.3σ and 4.9σ for fully cloudy and cloud-free scenarios, respectively. For the model with the highest Bayesian evidence, we constrain H2O at 5-15 ppm (0.01-0.03 × solar) and NH3 at 0.01-2.7 ppm, strongly suggesting disequilibrium chemistry and cautioning against equilibrium assumptions. Our results herald a new promise for retrieving cloudy atmospheres using high-precision Hubble Space Telescope and James Webb Space Telescope spectra.

  4. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods, called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
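    The core idea, mapping cells in order of decreasing likelihood and stopping at a threshold, can be sketched with a priority queue; this illustrates the strategy only and is not the Snake code itself.

      import heapq

      def explore(loglike, start, drop=10.0):
          # Pop the most likely unexplored cell; stop once cells fall more than
          # `drop` log-likelihood units below the starting (peak) cell.
          seen, frontier = {start}, [(-loglike(start), start)]
          best, mapped = loglike(start), {}
          while frontier:
              negll, cell = heapq.heappop(frontier)
              if -negll < best - drop:          # negligible likelihood: stop
                  break
              mapped[cell] = -negll
              for ax in range(len(cell)):       # push the 2*N_par grid neighbors
                  for d in (-1, 1):
                      nb = list(cell); nb[ax] += d; nb = tuple(nb)
                      if nb not in seen:
                          seen.add(nb)
                          heapq.heappush(frontier, (-loglike(nb), nb))
          return mapped

      cells = explore(lambda c: -0.5 * sum(x * x for x in c) / 25.0, (0, 0))
      print(len(cells), "grid cells mapped above the threshold")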

  5. On "black swans" and "perfect storms": risk analysis and management when statistics are not enough.

    PubMed

    Paté-Cornell, Elisabeth

    2012-11-01

    Two images, "black swans" and "perfect storms," have struck the public's imagination and are used--at times indiscriminately--to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms leads to a combination of both types of uncertainties into a single probability measure--Bayesian probability--and accounts only for risk aversion. Yet, the decisionmaker may also want to be ambiguity averse. This article presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty. These measures involve monitoring of signals, precursors, and near-misses, as well as reinforcement of the system and a thoughtful response strategy. It also involves careful examination of organizational factors such as the incentive system, which shape human performance and affect the risk of errors. In all cases, including rare events, risk quantification does not allow "prediction" of accidents and catastrophes. Instead, it is meant to support effective risk management rather than simply reacting to the latest events and headlines. © 2012 Society for Risk Analysis.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z. D.; Wang, J.; Department of Chemistry, SUNY Stony Brook, New York 11794

We established a theoretical framework in terms of the curl flux, population landscape, and coherence for non-equilibrium quantum systems at steady state, through exploring the energy and charge transport in molecular processes. The curl quantum flux plays the key role in determining transport properties and the system reaches equilibrium when the flux vanishes. The novel curl quantum flux reflects the degree of non-equilibriumness and the time-irreversibility. We found an analytical expression for the quantum flux and its relationship to the environmental pumping (non-equilibriumness quantified by the voltage away from the equilibrium) and the quantum tunneling. Furthermore, we investigated another quantum signature, the coherence, quantitatively measured by the non-zero off-diagonal elements of the density matrix. Populations of states give the probabilities of individual states and therefore quantify the population landscape. Both curl flux and coherence depend on the steady state population landscape. Besides the environment assistance, which can give dramatic enhancement of coherence and quantum flux at high voltage for a fixed tunneling strength, the quantum flux is promoted by the coherence in the regime of small tunneling while reduced by the coherence in the regime of large tunneling, due to the non-monotonic relationship between the coherence and tunneling. This is in contrast to the previously found linear relationship. For systems coupled to bosonic (photonic and phononic) reservoirs the flux is significantly promoted at large voltage, while for fermionic (electronic) reservoirs the flux reaches a saturation after a significant enhancement at large voltage due to the Pauli exclusion principle. Viewing the system as a quantum heat engine, we studied the non-equilibrium thermodynamics and established the analytical connections of the curl quantum flux to transport quantities such as energy (charge) transfer efficiency, chemical reaction efficiency, energy dissipation, and the heat and electric currents observed in experiments. We observed a perfect transfer efficiency in chemical reactions at high voltage (chemical potential difference). Our theoretically predicted behavior of the electric current with respect to the voltage is in good agreement with recent experiments on electron transfer in single molecules.

  7. Association of LMX1A genetic polymorphisms with susceptibility to congenital scoliosis in Chinese Han population.

    PubMed

    Wu, Nan; Yuan, Suomao; Liu, Jiaqi; Chen, Jun; Fei, Qi; Liu, Sen; Su, Xinlin; Wang, Shengru; Zhang, Jianguo; Li, Shugang; Wang, Yipeng; Qiu, Guixing; Wu, Zhihong

    2014-10-01

A genetic association study of single nucleotide polymorphisms (SNPs) for the LMX1A gene with congenital scoliosis (CS) in the Chinese Han population. To determine whether LMX1A genetic polymorphisms are associated with susceptibility to CS. CS is a lateral curvature of the spine due to congenital vertebral defects, whose exact genetic cause has not been well established. The LMX1A gene was suggested as a potential human candidate gene for CS. However, no genetic study of LMX1A in CS has ever been reported. We genotyped 13 SNPs of the LMX1A gene in 154 patients with CS and 144 controls with matched sex and age. After conducting the Hardy-Weinberg equilibrium test, the data for the 13 SNPs were analyzed for allelic and genotypic association with logistic regression analysis. Furthermore, genotype-phenotype association and haplotype association analyses were also performed. The 13 SNPs of the LMX1A gene met Hardy-Weinberg equilibrium in the controls, but not in the cases. None of the allelic and genotypic frequencies of these SNPs showed a significant difference between case and control groups (P > 0.05). However, the genotypic frequencies of rs1354510 and rs16841013 in the LMX1A gene were associated with CS predisposition in the unconditional logistic regression analysis (P = 0.02 and 0.018, respectively). Genotypic frequencies of 3 SNPs at rs6671290, rs1354510, and rs16841013 were found to exhibit significant differences between patients with CS with failure of formation and the healthy controls (P = 0.019, 0.007, and 0.006, respectively). Besides, in the model analysis using unconditional logistic regression, the optimized models for the 3 genotypically positive SNPs with failure of formation were rs6671290 (codominant; P = 0.025, Akaike information value = 316.6, Bayesian information criterion = 333.9), rs1354510 (overdominant; P = 0.0017, Akaike information value = 312.1, Bayesian information criterion = 325.9), and rs16841013 (overdominant; P = 0.0016, Akaike information value = 311.1, Bayesian information criterion = 325), respectively. However, the haplotype distributions in the case group were not significantly different from those of the control group in the 3 haplotype blocks. To our knowledge, this is the first study to identify that SNPs of the LMX1A gene might be associated with susceptibility to CS and different clinical phenotypes of CS in the Chinese Han population.

  8. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method.

    PubMed

    Norris, Peter M; da Silva, Arlindo M

    2016-07-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  9. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    NASA Technical Reports Server (NTRS)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  10. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    PubMed Central

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847
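    Since the same method appears in the three records above, a single generic sketch suffices: a random-walk Metropolis sampler targeting a skewed-triangle density, standing in for the layer-moisture posterior; it is not the authors' gridcolumn code.

      import numpy as np

      rng = np.random.default_rng(1)

      def density(x, mode=0.3):
          # unnormalized skewed-triangle density on [0, 1]
          if not 0.0 <= x <= 1.0:
              return 0.0
          return x / mode if x <= mode else (1.0 - x) / (1.0 - mode)

      x, chain = 0.5, []
      for _ in range(20000):
          prop = x + 0.1 * rng.standard_normal()            # symmetric proposal
          if rng.uniform() * density(x) < density(prop):    # accept w.p. min(1, ratio)
              x = prop
          chain.append(x)
      print("posterior mean ~", round(float(np.mean(chain[2000:])), 3))  # drop burn-in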

  11. The physics of mental acts: coherence and creativity

    NASA Astrophysics Data System (ADS)

    Tito Arecchi, F.

    2009-06-01

Coherence is a long range order absent at thermal equilibrium, where a system is the superposition of many uncorrelated components. To build non-trivial correlations, the system must enter a nonlinear dynamical regime. The nonlinearity leads to a multiplicity of equilibrium states, the number of which increases exponentially with the number of partners; we call such a situation complexity. Complete exploration of complexity would require a very large amount of time. On the contrary, in cognitive tasks, one reaches a decision within a few hundred milliseconds. Neuron synchronization lasting around 301 msec is the indicator of a conscious perception (Gestalt); however, the loss of information in the chaotic spike train of a single neuron takes a few msec, thus a conscious perception implies a control of chaos, whereby the information stored in a brain area survives for a time sufficient to elicit an action. Control of chaos is achieved by the interaction of a bottom-up stimulus with a top-down control (induced by the semantic memory). We call this optimal control of neuronal chaos creativity; it goes beyond Bayesian inference, which is the way a computer operates, and thus represents a non-algorithmic step.

  12. Vaginitis: diagnosis and management.

    PubMed

    Faro, S

    1996-01-01

The various conditions that give rise to vaginitis include specific and nonspecific entities, such as candidiasis, trichomoniasis, bacterial vaginosis, group B streptococcal vaginitis, purulent vaginitis, vulvodynia, and vestibulitis. The patient with chronic vaginitis usually develops this condition because of a misdiagnosis. It is critical that patients who have chronic vaginitis be thoroughly evaluated to determine if there is a specific etiology and whether their condition is recurrent or persistent, or is a reinfection. This also must include obtaining a detailed history, beginning with the patient's best recollection of when she felt perfectly normal. The physician must have an understanding of a healthy vaginal ecosystem and what mechanisms are in place to maintain the equilibrium. The vaginal ecosystem is a complex system of micro-organisms interacting with host factors to maintain its equilibrium. The endogenous microflora consists of a variety of bacteria, which include aerobic, facultative and obligate anaerobic bacteria. These organisms exist in a commensal, synergistic or antagonistic relationship. Therefore, it is important to understand what factors control the delicate equilibrium of the vaginal ecosystem, and which factors, both endogenous and exogenous, can disrupt this system. It is also important for the physician to understand that when a patient has symptoms of vaginitis it is not always due to an infectious etiology. There are situations in which an inflammatory reaction occurs but the specific etiology may not be determined. Thus, it is important that the physician not rush through the history or the examination.

  13. Continuous equilibrium scores: factoring in the time before a fall.

    PubMed

    Wood, Scott J; Reschke, Millard F; Owen Black, F

    2012-07-01

The equilibrium (EQ) score commonly used in computerized dynamic posturography is normalized between 0 and 100, with falls assigned a score of 0. The resulting mixed discrete-continuous distribution limits certain statistical analyses and treats all trials with falls equally. We propose a simple modification of the formula in which peak-to-peak sway data from trials with falls are scaled according to the percent of the trial completed to derive a continuous equilibrium (cEQ) score. The cEQ scores for trials without falls remain unchanged from the original methodology. The cEQ factors in the time before a fall and results in a continuous variable retaining the central tendencies of the original EQ distribution. A random set of 5315 Sensory Organization Test trials were pooled that included 81 falls. A comparison of the original and cEQ distributions and their rank ordering demonstrated that trials with falls continue to constitute the lower range of scores with the cEQ methodology. The area under the receiver operating characteristic curve (0.997) demonstrates that the cEQ retained near-perfect discrimination between trials with and without falls. We conclude that the cEQ score provides the ability to discriminate ballistic falls from falls that occur later in the trial. This approach of incorporating time and sway magnitude can be easily extended to enhance other balance tests that include fall data or incomplete trials. Copyright © 2012 Elsevier B.V. All rights reserved.
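    A minimal sketch of the proposed scaling, assuming the standard ~12.5 degree sway limit of the Sensory Organization Test and with the exact formula inferred from the abstract:

      def eq_score(sway_deg, limit_deg=12.5):
          # classic EQ: 100 at no sway, 0 at the theoretical sway limit (or a fall)
          return max(0.0, 100.0 * (1.0 - sway_deg / limit_deg))

      def ceq_score(sway_deg, t_fall=None, t_trial=20.0, limit_deg=12.5):
          score = eq_score(sway_deg, limit_deg)
          if t_fall is None:
              return score                     # completed trial: unchanged
          return score * (t_fall / t_trial)    # fall: scale by fraction completed

      # an early (ballistic) fall now scores lower than a late fall
      print(ceq_score(6.0), ceq_score(6.0, t_fall=2.0), ceq_score(6.0, t_fall=18.0))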

  14. Buckling of pressure-loaded, long, shear deformable, cylindrical laminated shells

    NASA Astrophysics Data System (ADS)

    Anastasiadis, John S.; Simitses, George J.

    A higher-order shell theory was developed (kinematic relations, constitutive relations, equilibrium equations and boundary conditions), which includes initial geometric imperfections and transverse shear effects for a laminated cylindrical shell under the action of pressure, axial compression and in-plane shear. Through the perturbation technique, buckling equations are derived for the corresponding 'perfect geometry' symmetric laminated configuration. Critical pressures are computed for very long cylinders for several stacking sequences, several radius-to-total-thickness ratios, three lamina materials (boron/epoxy, graphite/epoxy, and Kevlar/epoxy), and three shell theories: classical, first-order shear deformable and higher- (third-)order shear deformable. The results provide valuable information concerning the applicability (accurate prediction of buckling pressures) of the various shell theories.

  15. Effect of surface on the dissociation of perfect dislocations into Shockley partials describing the herringbone Au(1 1 1) surface reconstruction

    NASA Astrophysics Data System (ADS)

    Ait-Oubba, A.; Coupeau, C.; Durinck, J.; Talea, M.; Grilhé, J.

    2018-06-01

In the framework of the continuum elastic theory, the equilibrium positions of Shockley partial dislocations have been determined as a function of their distance from the free surface. It is found that the dissociation width decreases with decreasing depth, except for a depth range very close to the free surface for which the dissociation width is enlarged. A similar behaviour is also predicted when Shockley dislocation pairs are regularly arranged, whatever the wavelength. These results derived from the elastic theory are compared to STM observations of the reconstructed (1 1 1) surface in gold, which is usually described by a network of Shockley dislocations.

  16. The Ecology of Defensive Medicine and Malpractice Litigation

    PubMed Central

    2016-01-01

    Using an evolutionary game, we show that patients and physicians can interact with predator-prey relationships. Litigious patients who seek compensation are the ‘predators’ and physicians are their ‘prey’. Physicians can adapt to the risk of being sued by performing defensive medicine. We find that improvements in clinical safety can increase the share of litigious patients and leave unchanged the share of physicians who perform defensive medicine. This paradoxical result is consistent with increasing trends in malpractice claims in spite of safety improvements, observed for example in empirical studies on anesthesiologists. Perfect cooperation with neither defensive nor litigious behaviors can be the Pareto-optimal solution when it is not a Nash equilibrium, so maximizing social welfare may require government intervention. PMID:26982056
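    The predator-prey flavor of the game can be conveyed with textbook replicator dynamics; the payoffs below are invented for illustration and are not the paper's calibration.

      # x = share of litigious patients, y = share of defensive physicians
      def step(x, y, dt=0.01):
          u_lit = 2.0 * (1.0 - y) + 0.5 * y    # litigating pays less against defensive physicians
          u_not = 1.0
          v_def = 1.0
          v_not = 2.0 * (1.0 - x) - 1.0 * x    # non-defensive practice is risky against litigious patients
          x += dt * x * (1.0 - x) * (u_lit - u_not)
          y += dt * y * (1.0 - y) * (v_def - v_not)
          return x, y

      x, y = 0.1, 0.1
      for _ in range(5000):
          x, y = step(x, y)
      print(f"litigious share {x:.2f}, defensive share {y:.2f}")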

  17. MBE growth technology for high quality strained III-V layers

    NASA Technical Reports Server (NTRS)

    Grunthaner, Frank J. (Inventor); Liu, John K. (Inventor); Hancock, Bruce R. (Inventor)

    1990-01-01

The III-V films are grown on large atomically perfect terraces of III-V substrates which have a different lattice constant, with temperature and Group III and V arrival rates chosen to give a Group III element stable surface. The growth is pulsed to inhibit Group III metal accumulation at low temperature, and to permit the film to relax to equilibrium. The method of the invention: (1) minimizes starting step density on the sample surface; (2) deposits InAs and GaAs using an interrupted growth mode (0.25 to 2 monolayers at a time); (3) maintains the instantaneous surface stoichiometry during growth (As-stable for GaAs, In-stable for InAs); and (4) uses time-resolved RHEED to achieve aspects (1) through (3).

  18. Dynamic and static maintenance of epigenetic memory in pluripotent and somatic cells.

    PubMed

    Shipony, Zohar; Mukamel, Zohar; Cohen, Netta Mendelson; Landan, Gilad; Chomsky, Elad; Zeliger, Shlomit Reich; Fried, Yael Chagit; Ainbinder, Elena; Friedman, Nir; Tanay, Amos

    2014-09-04

    Stable maintenance of gene regulatory programs is essential for normal function in multicellular organisms. Epigenetic mechanisms, and DNA methylation in particular, are hypothesized to facilitate such maintenance by creating cellular memory that can be written during embryonic development and then guide cell-type-specific gene expression. Here we develop new methods for quantitative inference of DNA methylation turnover rates, and show that human embryonic stem cells preserve their epigenetic state by balancing antagonistic processes that add and remove methylation marks rather than by copying epigenetic information from mother to daughter cells. In contrast, somatic cells transmit considerable epigenetic information to progenies. Paradoxically, the persistence of the somatic epigenome makes it more vulnerable to noise, since random epimutations can accumulate to massively perturb the epigenomic ground state. The rate of epigenetic perturbation depends on the genomic context, and, in particular, DNA methylation loss is coupled to late DNA replication dynamics. Epigenetic perturbation is not observed in the pluripotent state, because the rapid turnover-based equilibrium continuously reinforces the canonical state. This dynamic epigenetic equilibrium also explains how the epigenome can be reprogrammed quickly and to near perfection after induced pluripotency.

  19. Bayesian Correction of Misclassification of Pertussis in Vaccine Effectiveness Studies: How Much Does Underreporting Matter?

    PubMed

    Goldstein, Neal D; Burstyn, Igor; Newbern, E Claire; Tabb, Loni P; Gutowski, Jennifer; Welles, Seth L

    2016-06-01

    Diagnosis of pertussis remains a challenge, and consequently research on the risk of disease might be biased because of misclassification. We quantified this misclassification and corrected for it in a case-control study of children in Philadelphia, Pennsylvania, who were 3 months to 6 years of age and diagnosed with pertussis between 2011 and 2013. Vaccine effectiveness (VE; calculated as (1 - odds ratio) × 100) was used to describe the average reduction in reported pertussis incidence resulting from persons being up to date on pertussis-antigen containing vaccines. Bayesian techniques were used to correct for purported nondifferential misclassification by reclassifying the cases per the 2014 Council of State and Territorial Epidemiologists pertussis case definition. Naïve VE was 50% (95% confidence interval: 16%, 69%). After correcting for misclassification, VE ranged from 57% (95% credible interval: 30, 73) to 82% (95% credible interval: 43, 95), depending on the amount of underreporting of pertussis that was assumed to have occurred in the study period. Meaningful misclassification was observed in terms of false negatives detected after the incorporation of infant apnea to the 2014 case definition. Although specificity was nearly perfect, sensitivity of the case definition varied from 90% to 20%, depending on the assumption about missed cases. Knowing the degree of the underreporting is essential to the accurate evaluation of VE. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
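    The direction of the correction can be reproduced with a simple frequentist (Rogan-Gladen-style) reclassification rather than the paper's full Bayesian model; the counts, sensitivity, and specificity below are hypothetical.

      def corrected_ve(a, b, c, d, se, sp):
          # a, b = vaccinated/unvaccinated reported cases; c, d = vaccinated/unvaccinated controls
          n1, n0 = a + c, b + d                            # totals by vaccination status
          a_true = (a - (1.0 - sp) * n1) / (se + sp - 1.0)
          b_true = (b - (1.0 - sp) * n0) / (se + sp - 1.0)
          odds_ratio = (a_true * (n0 - b_true)) / (b_true * (n1 - a_true))
          return (1.0 - odds_ratio) * 100.0

      print(round(corrected_ve(40, 60, 160, 140, se=1.0, sp=1.0), 1))   # naive VE
      print(round(corrected_ve(40, 60, 160, 140, se=0.7, sp=0.99), 1))  # corrected VE is higher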

  20. Bayesian estimation of post-Messinian divergence times in Balearic Island lizards.

    PubMed

    Brown, R P; Terrasa, B; Pérez-Mellado, V; Castro, J A; Hoskisson, P A; Picornell, A; Ramon, M M

    2008-07-01

Phylogenetic relationships and timings of major cladogenesis events are investigated in the Balearic Island lizards Podarcis lilfordi and P. pityusensis using 2675 bp of mitochondrial and nuclear DNA sequences. Partitioned Bayesian and Maximum Parsimony analyses provided a well-resolved phylogeny with high node-support values. Bayesian MCMC estimation of node dates was investigated by comparing means of posterior distributions from different subsets of the sequence against the most robust analysis, which used multiple partitions and allowed for rate heterogeneity among branches under a rate-drift model. Evolutionary rates were systematically underestimated and thus divergence times overestimated when sequences containing lower numbers of variable sites were used (based on ingroup node constraints). The following analyses allowed the best recovery of node times under the constant-rate (i.e., perfect clock) model: (i) all cytochrome b sequence (partitioned by codon position), (ii) cytochrome b (codon position 3 alone), (iii) NADH dehydrogenase (subunits 1 and 2; partitioned by codon position), (iv) cytochrome b and NADH dehydrogenase sequence together (six gene-codon partitions), (v) all unpartitioned sequence, (vi) a full multipartition analysis (nine partitions). Of these, only (iv) and (vi) performed well under the rate-drift model. These findings have significant implications for dating of recent divergence times in other taxa. The earliest P. lilfordi cladogenesis event (divergence of Menorcan populations) occurred before the end of the Pliocene, some 2.6 Ma. Subsequent events led to a West Mallorcan lineage (2.0 Ma ago), followed 1.2 Ma ago by divergence of populations from the southern part of the Cabrera archipelago from a widely-distributed group from north Cabrera and the northern and southern Mallorcan islets. Divergence within P. pityusensis is more recent, with the main Ibiza and Formentera clades sharing a common ancestor about 1.0 Ma ago. Climatic and sea level changes are likely to have initiated cladogenesis, with lineages making secondary contact during periodic landbridge formation. This oscillating cross-archipelago pattern in which ancient divergence is followed by repeated contact resembles that seen between East-West refugia populations from mainland Europe.

  1. Quantifying Listeria monocytogenes prevalence and concentration in minced pork meat and estimating performance of three culture media from presence/absence microbiological testing using a deterministic and stochastic approach.

    PubMed

    Andritsos, Nikolaos D; Mataragas, Marios; Paramithiotis, Spiros; Drosinos, Eleftherios H

    2013-12-01

Listeria monocytogenes poses a serious threat to public health, and the majority of cases of human listeriosis are associated with contaminated food. Reliable microbiological testing is needed for effective pathogen control by the food industry and competent authorities. The aims of this work were to estimate the prevalence and concentration of L. monocytogenes in minced pork meat by the application of a Bayesian modeling approach, and also to determine the performance of three culture media commonly used for detecting L. monocytogenes in foods from a deterministic and stochastic perspective. Samples (n = 100) collected from local markets were tested for L. monocytogenes using in parallel the PALCAM, ALOA and RAPID'L.mono selective media according to ISO 11290-1:1996 and 11290-2:1998 methods. Presence of the pathogen was confirmed by conducting biochemical and molecular tests. Independent experiments (n = 10) for model validation purposes were performed. Performance attributes were calculated from the presence-absence microbiological test results by combining the results obtained from the culture media and confirmative tests. The Dirichlet distribution, the multivariate expression of a Beta distribution, was used to analyze the performance data from a stochastic perspective. No L. monocytogenes was enumerated by direct plating (<10 CFU/g), though the pathogen was detected in 22% of the samples. The L. monocytogenes concentration was estimated at 14-17 CFU/kg. Validation showed good agreement between observed and predicted prevalence (error = -2.17%). The results showed that all media were better at ruling in L. monocytogenes presence than at ruling it out. Sensitivity and specificity varied depending on the culture-dependent method. None of the culture media was perfect in detecting L. monocytogenes in minced pork meat alone. The use of at least two culture media in parallel enhanced the efficiency of L. monocytogenes detection. Bayesian modeling may reduce the time needed to draw conclusions regarding L. monocytogenes presence and the uncertainty of the results obtained. Furthermore, the problem of observing zero counts may be overcome by applying Bayesian analysis, making the determination of a test's performance feasible. Copyright © 2013 Elsevier Ltd. All rights reserved.
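    A grid-based Bayesian sketch of the prevalence estimate under imperfect detection (uniform prior; the sensitivity and specificity values are assumed, not the study's):

      import numpy as np

      k, n, se, sp = 22, 100, 0.80, 0.98              # 22 positives in 100 samples
      p = np.linspace(0.0, 1.0, 1001)                 # prevalence grid, uniform prior
      p_pos = se * p + (1.0 - sp) * (1.0 - p)         # P(test positive | prevalence)
      log_like = k * np.log(p_pos) + (n - k) * np.log(1.0 - p_pos)
      post = np.exp(log_like - log_like.max())
      post /= post.sum()
      print("posterior mean prevalence:", round(float((p * post).sum()), 3))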

  2. A Comparison of Full and Empirical Bayes Techniques for Inferring Sea Level Changes from Tide Gauge Records

    NASA Astrophysics Data System (ADS)

    Piecuch, C. G.; Huybers, P. J.; Tingley, M.

    2016-12-01

    Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
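    The effect can be reproduced in miniature with a normal mean and unknown variance, far simpler than the tide-gauge field model: empirical Bayes plugs in a point estimate of the variance, full Bayes integrates over it.

      import numpy as np

      rng = np.random.default_rng(2)
      y = rng.normal(0.0, 2.0, size=15)
      n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

      emp = rng.normal(ybar, np.sqrt(s2 / n), 20000)        # variance treated as known
      sig2 = s2 * (n - 1) / rng.chisquare(n - 1, 20000)     # scaled inverse-chi^2 posterior
      full = rng.normal(ybar, np.sqrt(sig2 / n))            # variance integrated out

      for name, draws in [("empirical", emp), ("full", full)]:
          lo, hi = np.percentile(draws, [2.5, 97.5])
          print(f"{name:9s} 95% interval width: {hi - lo:.3f}")   # empirical is narrower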

  3. Economic policy optimization based on both one stochastic model and the parametric control theory

    NASA Astrophysics Data System (ADS)

    Ashimov, Abdykappar; Borovskiy, Yuriy; Onalbekov, Mukhit

    2016-06-01

A nonlinear dynamic stochastic general equilibrium model with financial frictions is developed to describe two interacting national economies in the environment of the rest of the world. Parameters of the nonlinear model are estimated, based on its log-linearization, by the Bayesian approach. The nonlinear model is verified by retroprognosis, by estimation of stability indicators of the mappings specified by the model, and by estimation of the degree of coincidence between the effects of internal and external shocks on macroeconomic indicators computed from the estimated nonlinear model and from its log-linearization. On the basis of the nonlinear model, the parametric control problems of economic growth and volatility of macroeconomic indicators of Kazakhstan are formulated and solved for two exchange rate regimes (free floating and managed floating exchange rates).

  4. Observations and Thermochemical Calculations for Hot-Jupiter Atmospheres

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver; Cubillos, Patricio; Stemm, Madison

    2015-01-01

I present Spitzer eclipse observations for WASP-14b and WASP-43b, an open source tool for thermochemical equilibrium calculations, and components of an open source tool for atmospheric parameter retrieval from spectroscopic data. WASP-14b is a planet that receives high irradiation from its host star, yet, although theory does not predict it, the planet hosts a thermal inversion. The WASP-43b eclipses have signal-to-noise ratios of ~25, one of the largest among exoplanets. To assess these planets' atmospheric composition and thermal structure, we developed an open-source Bayesian Atmospheric Radiative Transfer (BART) code. My dissertation tasks included developing a Thermochemical Equilibrium Abundances (TEA) code, implementing the eclipse geometry calculation in BART's radiative transfer module, and generating parameterized pressure and temperature profiles so the radiative-transfer module can be driven by the statistical module. To initialize the radiative-transfer calculation in BART, TEA calculates the equilibrium abundances of gaseous molecular species at a given temperature and pressure. It uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA, written in Python, is modular, documented, and available to the community via the open-source development site GitHub.com. Support for this work was provided by NASA Headquarters under the NASA Earth and Space Science Fellowship Program, grant NNX12AL83H, by NASA through an award issued by JPL/Caltech, and through the Science Mission Directorate's Planetary Atmospheres Program, grant NNX12AI69G.
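    A toy version of the Gibbs-free-energy minimization that TEA performs (assuming SciPy is available; the chemical potentials and elemental totals are placeholders, not TEA's thermochemical data):

      import numpy as np
      from scipy.optimize import minimize

      g = np.array([-200.0, -396.0, 0.0])   # placeholder potentials for CO, CO2, O2 (kJ/mol)
      A = np.array([[1.0, 1.0, 0.0],        # elemental balance rows: C, O
                    [1.0, 2.0, 2.0]])
      b = np.array([1.0, 1.5])              # total moles of C and O

      def gibbs(nmol, RT=8.314e-3 * 1500.0):
          nmol = np.maximum(nmol, 1e-12)    # keep the log well defined
          return np.sum(nmol * (g + RT * np.log(nmol / nmol.sum())))

      res = minimize(gibbs, x0=np.array([0.6, 0.4, 0.05]), method="SLSQP",
                     bounds=[(1e-12, None)] * 3,
                     constraints={"type": "eq", "fun": lambda nmol: A @ nmol - b})
      print("equilibrium moles (CO, CO2, O2):", np.round(res.x, 4))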

  5. Empirical models of transitions between coral reef states: effects of region, protection, and environmental change.

    PubMed

    Lowe, Phillip K; Bruno, John F; Selig, Elizabeth R; Spencer, Matthew

    2011-01-01

    There has been substantial recent change in coral reef communities. To date, most analyses have focussed on static patterns or changes in single variables such as coral cover. However, little is known about how community-level changes occur at large spatial scales. Here, we develop Markov models of annual changes in coral and macroalgal cover in the Caribbean and Great Barrier Reef (GBR) regions. We analyzed reef surveys from the Caribbean and GBR (1996-2006). We defined a set of reef states distinguished by coral and macroalgal cover, and obtained Bayesian estimates of the annual probabilities of transitions between these states. The Caribbean and GBR had different transition probabilities, and therefore different rates of change in reef condition. This could be due to differences in species composition, management or the nature and extent of disturbances between these regions. We then estimated equilibrium probability distributions for reef states, and coral and macroalgal cover under constant environmental conditions. In both regions, the current distributions are close to equilibrium. In the Caribbean, coral cover is much lower and macroalgal cover is higher at equilibrium than in the GBR. We found no evidence for differences in transition probabilities between the first and second halves of our survey period, or between Caribbean reefs inside and outside marine protected areas. However, our power to detect such differences may have been low. We also examined the effects of altering transition probabilities on the community state equilibrium, along a continuum from unfavourable (e.g., increased sea surface temperature) to favourable (e.g., improved management) conditions. Both regions showed similar qualitative responses, but different patterns of uncertainty. In the Caribbean, uncertainty was greatest about effects of favourable changes, while in the GBR, we are most uncertain about effects of unfavourable changes. Our approach could be extended to provide risk analysis for management decisions.
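    The equilibrium state distribution referred to above is the stationary distribution of the fitted transition matrix, obtainable from its leading left eigenvector; the 3x3 matrix below is illustrative, not the Caribbean or GBR estimates.

      import numpy as np

      P = np.array([[0.80, 0.15, 0.05],     # annual transitions between reef states,
                    [0.10, 0.70, 0.20],     # rows/columns: {coral, mixed, algal}
                    [0.02, 0.18, 0.80]])

      vals, vecs = np.linalg.eig(P.T)       # left eigenvector for eigenvalue 1
      pi = np.real(vecs[:, np.argmax(np.real(vals))])
      pi /= pi.sum()
      print("equilibrium probabilities (coral, mixed, algal):", np.round(pi, 3))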

  6. Eliminating the Cuspidal Temperature Profile of a Non-equilibrium Chain

    NASA Astrophysics Data System (ADS)

    Cândido, Michael M.; M. Morgado, Welles A.; Duarte Queirós, Sílvio M.

    2017-06-01

In 1967, Z. Rieder, J. L. Lebowitz, and E. Lieb (RLL) introduced a model of heat conduction on a crystal that became a milestone problem of non-equilibrium statistical mechanics. Along with its inability to reproduce Fourier's law—which subsequent generalizations have been trying to amend—the RLL model is also characterized by awkward cusps at the ends of the non-equilibrium chain, an effect that has endured all these years without a satisfactory answer. In this paper, we first show that such a trait stems from the insufficiency of pinning interactions between the chain and the substrate. Assuming the possibility of pinning the chain, the analysis of the temperature profile in the space of parameters reveals that, for a proper combination of the border and bulk pinning values, the temperature profile may shift twice between the RLL cuspidal behavior and the expected monotonic local temperature evolution along the system, as a function of the pinning. At those inversions, the temperature profile along the chain is characterized by perfect plateaux: at the first threshold, the cumulants of the heat flux reach their maxima and the two-point velocity correlation function vanishes for all sites of the chain, so that the system behaves similarly to a "phonon box." At the second change of the temperature profile, the two-point correlation function still vanishes, but only in the bulk, which explains the emergence of the temperature plateau and prevents the cumulants of the heat flux from reaching their maximal values.

  7. Data fusion for CD metrology: heterogeneous hybridization of scatterometry, CDSEM, and AFM data

    NASA Astrophysics Data System (ADS)

    Hazart, J.; Chesneau, N.; Evin, G.; Largent, A.; Derville, A.; Thérèse, R.; Bos, S.; Bouyssou, R.; Dezauzier, C.; Foucher, J.

    2014-04-01

The manufacturing of next-generation semiconductor devices forces metrology tool providers to make an exceptional effort in order to meet the requirements for precision, accuracy and throughput stated in the ITRS. In recent years hybrid metrology (based on data fusion theories) has been investigated as a new methodology for advanced metrology [1][2][3]. This paper provides a new point of view on data fusion for metrology through some experiments and simulations. The techniques are presented concretely in terms of the equations to be solved. The first point of view is High Level Fusion, which post-processes simple numbers together with their associated uncertainties from each tool. In this paper, it is divided into two stages: one for calibration to reach accuracy, the second to reach precision thanks to Bayesian Fusion. From our perspective, the first stage is mandatory before applying the second stage, which is the one commonly presented [1]. However, a reference metrology system is necessary for this fusion. So, precision can be improved if and only if the tools to be fused are perfectly matched, at least for some parameters. We provide a methodology, similar to a multidimensional TMU, able to perform this matching exercise. It is demonstrated on a 28 nm node backend lithography case. The second point of view is Deep Level Fusion, which works instead with raw data and their combination. In the approach presented here, the analysis of each raw data set is based on a parametric model and connections between the parameters of each tool. In order to allow OCD/SEM Deep Level Fusion, a SEM Compact Model derived from [4] has been developed and compared to AFM. As far as we know, this is the first time such techniques have been coupled at the Deep Level. A numerical study on the case of a simple stack for lithography is performed. We show strict equivalence of Deep Level Fusion and High Level Fusion when tools are sensitive and models are perfect. When one of the tools can be considered a reference and the second is biased, High Level Fusion is far superior to standard Deep Level Fusion. Otherwise, only the second stage of High Level Fusion is possible (Bayesian Fusion), and it does not provide a substantial advantage. Finally, when OCD is equipped with methods for bias detection [5], Deep Level Fusion outclasses the two-stage High Level Fusion and will benefit the industry for the production of the most advanced nodes.
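    The second (Bayesian) stage of High Level Fusion amounts, in its simplest form, to precision-weighted averaging of matched measurements; the CD values and uncertainties below are made up.

      import numpy as np

      def fuse(means, sigmas):
          w = 1.0 / np.asarray(sigmas, float) ** 2           # precisions
          mu = np.sum(w * np.asarray(means, float)) / w.sum()
          return mu, np.sqrt(1.0 / w.sum())                  # fused value and uncertainty

      cd, u = fuse([32.1, 31.6, 32.4], [0.6, 0.4, 0.9])      # e.g. OCD, CD-SEM, AFM (nm)
      print(f"fused CD = {cd:.2f} +/- {u:.2f} nm")           # tighter than any single tool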

  8. The Impacts of Regulations and Financial Development on the Operations of Supply Chains with Greenhouse Gas Emissions

    PubMed Central

    Xiao, Zhuang; Tian, Yixiang; Yuan, Zheng

    2018-01-01

    To establish a micro foundation to understand the impacts of greenhouse gas (GHG) emission regulations and financial development levels on firms’ GHG emissions, we build a two-stage dynamic game model to incorporate GHG emission regulations (in terms of an emission tax) and financial development (represented by the corresponding financing cost) into a two-echelon supply chain. With the subgame perfect equilibrium, we identify the conditions to determine whether an emission regulatory policy and/or financial development can affect GHG emissions in the supply chain. We also reveal the impacts of the strictness of GHG emission regulation, the financial development level, and the unit GHG emission rate on the operations of the supply chain and the corresponding profitability implications. Managerial insights are also discussed. PMID:29470451
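    The subgame perfect equilibrium of such a two-stage game is found by backward induction; a linear-demand sketch (all symbols hypothetical, not the paper's model) shows how an emission tax propagates through the chain.

      def subgame_perfect(A=100.0, c=20.0, e=2.0, t=5.0):
          # demand q = A - p; the manufacturer pays production cost c plus tax t per unit emission e
          w_cost = c + t * e
          # Stage 2 (retailer): max (p - w)(A - p)  =>  p*(w) = (A + w) / 2
          # Stage 1 (manufacturer): max (w - w_cost)(A - p*(w))  =>  w* = (A + w_cost) / 2
          w = (A + w_cost) / 2.0
          p = (A + w) / 2.0
          q = A - p
          return w, p, q, e * q                 # wholesale price, retail price, quantity, emissions

      print(subgame_perfect(t=0.0))             # no regulation
      print(subgame_perfect(t=5.0))             # stricter tax: higher prices, lower output and emissions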

  9. Atmospheric Retrievals of HAT-P-16b and WASP-11b/HAT-P-10b

    NASA Astrophysics Data System (ADS)

    McIntyre, Kathleen; Harrington, Joseph; Challener, Ryan; Lenius, Maria; Hartman, Joel D.; Bakos, Gaspar A.; Blecic, Jasmina; Cubillos, Patricio E.; Cameron, Andrew

    2018-01-01

    We report Bayesian atmospheric retrievals performed on the exoplanets HAT-P-16b and WASP-11b/HAT-P-10b. HAT-P-16b is a hot (equilibrium temperature 1626 ± 40 K, assuming zero Bond albedo and efficient energy redistribution), 4.19 ± 0.09 Jupiter-mass exoplanet orbiting an F8 star every 2.775960 ± 0.000003 days (Buchhave et al. 2010). WASP-11b/HAT-P-10b is a cooler (1020 ± 17 K), 0.487 ± 0.018 Jupiter-mass exoplanet orbiting a K3 star every 3.7224747 ± 0.0000065 days (Bakos et al. 2009, co-discovered by West et al. 2008). We observed secondary eclipses of both planets using the 3.6 μm and 4.5 μm channels of the Spitzer Space Telescope's Infrared Array Camera (program ID 60003). We applied our Photometry for Orbits, Eclipses, and Transits (POET) code to produce normalized eclipse light curves, and our Bayesian Atmospheric Radiative Transfer (BART) code to constrain the temperature-pressure profiles and atmospheric molecular abundances of the two planets. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  10. The Gaia-ESO Survey: dynamical models of flattened, rotating globular clusters

    NASA Astrophysics Data System (ADS)

    Jeffreson, S. M. R.; Sanders, J. L.; Evans, N. W.; Williams, A. A.; Gilmore, G. F.; Bayo, A.; Bragaglia, A.; Casey, A. R.; Flaccomio, E.; Franciosini, E.; Hourihane, A.; Jackson, R. J.; Jeffries, R. D.; Jofré, P.; Koposov, S.; Lardo, C.; Lewis, J.; Magrini, L.; Morbidelli, L.; Pancino, E.; Randich, S.; Sacco, G. G.; Worley, C. C.; Zaggia, S.

    2017-08-01

    We present a family of self-consistent axisymmetric rotating globular cluster models which are fitted to spectroscopic data for NGC 362, NGC 1851, NGC 2808, NGC 4372, NGC 5927 and NGC 6752 to provide constraints on their physical and kinematic properties, including their rotation signals. They are constructed by flattening Modified Plummer profiles, which have the same asymptotic behaviour as classical Plummer models, but can provide better fits to young clusters due to a slower turnover in the density profile. The models are in dynamical equilibrium as they depend solely on the action variables. We employ a fully Bayesian scheme to investigate the uncertainty in our model parameters (including mass-to-light ratios and inclination angles) and evaluate the Bayesian evidence ratio for rotating to non-rotating models. We find convincing levels of rotation only in NGC 2808. In the other clusters, there is just a hint of rotation (in particular, NGC 4372 and NGC 5927), as the data quality does not allow us to draw strong conclusions. Where rotation is present, we find that it is confined to the central regions, within radii of R ≤ 2rh. As part of this work, we have developed a novel q-Gaussian basis expansion of the line-of-sight velocity distributions, from which general models can be constructed via interpolation on the basis coefficients.

  11. Emulsification kinetics during quasi-miscible flow in dead-end pores

    NASA Astrophysics Data System (ADS)

    Broens, M.; Unsal, E.

    2018-03-01

    Microemulsions have found applications as carriers for the transport of solutes through various porous media. They are commonly pre-prepared in bulk form and then injected into the medium. The preparation is done by actively mixing the surfactant, water and oil, and then allowing the mixture to stagnate until equilibrium is reached. The resulting microemulsion characteristics of the surfactant/oil/water system are studied at equilibrium conditions, and perfect mixing is assumed. But in applications like subsurface remediation and enhanced oil recovery, microemulsion formation may occur in the pore space. Surfactant solutions are injected into the ground to solubilize and/or mobilize the non-aqueous phase liquids (NAPLs) by in-situ emulsification. Flow dynamics and emulsification kinetics are coupled, which also contributes to in-situ mixing. In this study, we investigated the nature of such coupling for a quasi-miscible fluid system in a conductive channel with dead-end extensions. A microfluidic setup was used, in which an aqueous solution of an anionic internal olefin sulfonate 20-24 (IOS) surfactant was injected into an n-decane-saturated glass micromodel. The oil phase was coloured using a solvatochromatic dye, allowing for direct visualization of the aqueous and oil phases as well as their microemulsions under fluorescent light. The presence of both conductive and stagnant dead-end channels in a single pore system made it possible to isolate different transport mechanisms from each other and also to study the transitions from one to the other. In the conductive channel, the surfactant was carried with the flow, and emulsification was controlled by the localized flow dynamics. In the stagnant zones, mass transfer was driven by the chemical concentration gradient. Some of the equilibrium phase behaviour characteristics of the surfactant/oil/water system were recognisable during the quasi-miscible displacement. However, the equilibrium tests alone were not sufficient to predict the emulsification process under dynamic conditions.

  12. Bayesian Calibration of Thermodynamic Databases and the Role of Kinetics

    NASA Astrophysics Data System (ADS)

    Wolf, A. S.; Ghiorso, M. S.

    2017-12-01

    Self-consistent thermodynamic databases of geologically relevant materials (like Berman, 1988; Holland and Powell, 1998; Stixrude & Lithgow-Bertelloni, 2011) are crucial for simulating geological processes as well as interpreting rock samples from the field. These databases form the backbone of our understanding of how fluids and rocks interact at extreme planetary conditions. Considerable work is involved in their construction from experimental phase reaction data, as they must self-consistently describe the free energy surfaces (including relative offsets) of potentially hundreds of interacting phases. Standard database calibration methods typically utilize either linear programming or least squares regression. While both produce a viable model, they suffer from strong limitations on the training data (which must be filtered by hand), along with general ignorance of many of the sources of experimental uncertainty. We develop a new method for calibrating high P-T thermodynamic databases for use in geologic applications. The model is designed to handle pure solid endmember and free fluid phases and can be extended to include mixed solid solutions and melt phases. This new calibration effort utilizes Bayesian techniques to obtain optimal parameter values together with a full family of statistically acceptable models, summarized by the posterior. Unlike previous efforts, the Bayesian Logistic Uncertain Reaction (BLUR) model directly accounts for both measurement uncertainties and disequilibrium effects, by employing a kinetic reaction model whose parameters are empirically determined from the experiments themselves. Thus, along with the equilibrium free energy surfaces, we also provide rough estimates of the activation energies, entropies, and volumes for each reaction. As a first application, we demonstrate this new method on the three-phase aluminosilicate system, illustrating how it can produce superior estimates of the phase boundaries by incorporating constraints from all available data, while automatically handling variable data quality due to a combination of measurement errors and kinetic effects.

  13. Emergence of scale-free characteristics in socio-ecological systems with bounded rationality

    PubMed Central

    Kasthurirathna, Dharshana; Piraveenan, Mahendra

    2015-01-01

    Socio-ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback–Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small-world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio-ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems. PMID:26065713
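
    A minimal sketch of the two ingredients of this model, a logit Quantal Response Equilibrium obtained by damped fixed-point iteration and the Kullback–Leibler divergence between the Nash and QRE strategies, is given below. The 2x2 payoffs form an illustrative asymmetric matching-pennies game, not one of the games used in the paper.

        import numpy as np

        def logit_qre(A, B, lam, iters=5000, damp=0.1):
            # Damped fixed-point iteration for the logit QRE of a bimatrix game;
            # lam is the rationality parameter (lam -> infinity approaches Nash).
            p = np.full(A.shape[0], 1.0 / A.shape[0])
            q = np.full(A.shape[1], 1.0 / A.shape[1])
            for _ in range(iters):
                p_new = np.exp(lam * (A @ q)); p_new /= p_new.sum()
                q_new = np.exp(lam * (B.T @ p)); q_new /= q_new.sum()
                p = (1 - damp) * p + damp * p_new
                q = (1 - damp) * q + damp * q_new
            return p, q

        def kl(p, q):
            return float(np.sum(p * np.log(p / q)))

        A = np.array([[2.0, -1.0], [-1.0, 1.0]])   # asymmetric matching pennies
        B = -A                                     # zero-sum opponent
        nash_p = np.array([0.4, 0.6])              # player 1's Nash mixture
        p, _ = logit_qre(A, B, lam=2.0)
        print("QRE strategy:", p, "KL(Nash || QRE):", kl(nash_p, p))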

  14. Optimization of an oligonucleotide microchip for microbial identification studies: a non-equilibrium dissociation approach

    NASA Technical Reports Server (NTRS)

    Liu, W. T.; Mirzabekov, A. D.; Stahl, D. A.

    2001-01-01

    The utility of a high-density oligonucleotide microarray (microchip) for identifying strains of five closely related bacilli (Bacillus anthracis, Bacillus cereus, Bacillus mycoides, Bacillus medusa and Bacillus subtilis) was demonstrated using an approach that compares the non-equilibrium dissociation rates ('melting curves') of all probe-target duplexes simultaneously. For this study, a hierarchical set of 30 oligonucleotide probes targeting the 16S ribosomal RNA of these bacilli at multiple levels of specificity (approximate taxonomic ranks of domain, kingdom, order, genus and species) was designed and immobilized in a high-density matrix of gel pads on a glass slide. Reproducible melting curves for probes with different levels of specificity were obtained using an optimized salt concentration. Clear discrimination between perfect match (PM) and mismatch (MM) duplexes was achieved. By normalizing the signals to an internal standard (a universal probe), a more than twofold discrimination (> 2.4x) was achieved between PM and 1-MM duplexes at the dissociation temperature at which 50% of the probe-target duplexes remained intact. This provided excellent differentiation among representatives of different Bacillus species, both individually and in mixtures of two or three. The overall pattern of hybridization derived from this hierarchical probe set also provided a clear 'chip fingerprint' for each of these closely related Bacillus species.

  15. Emergence of scale-free characteristics in socio-ecological systems with bounded rationality.

    PubMed

    Kasthurirathna, Dharshana; Piraveenan, Mahendra

    2015-06-11

    Socio-ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback-Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small-world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio-ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems.

  16. Bulk development and stringent selection of microsatellite markers in the western flower thrips Frankliniella occidentalis

    PubMed Central

    Cao, Li-Jun; Li, Ze-Min; Wang, Ze-Hua; Zhu, Liang; Gong, Ya-Jun; Chen, Min; Wei, Shu-Jun

    2016-01-01

    Recent improvements in next-generation sequencing technologies have enabled investigation of microsatellites on a genome-wide scale. Faced with a huge number of candidates, the use of appropriate marker selection criteria is crucial. Here, we used the western flower thrips Frankliniella occidentalis for an empirical microsatellite survey and validation; 132,251 candidate microsatellites were identified, 92,102 of which were perfect. Dinucleotides were the most abundant category, while (AG)n was the most abundant motif. Sixty primer pairs were designed and validated in two natural populations, of which 30 loci were polymorphic, stable, and repeatable, but not all in Hardy–Weinberg equilibrium (HWE) and linkage equilibrium. Four marker panels were constructed to understand the effect of marker selection on population genetic analyses: (i) only accept loci with single nucleotide insertions (SNI); (ii) only accept the most polymorphic loci (MP); (iii) only accept loci that did not deviate from HWE, did not show SNIs, and had unambiguous peaks (SS); and (iv) all developed markers (ALL). Although the MP panel yielded the microsatellites of highest genetic diversity, followed by the SNI panel, the SS panel performed best in individual assignment. Our study proposes stringent criteria for selection of microsatellites from a large number of genomic candidates for population genetic studies. PMID:27197749

  17. Bulk development and stringent selection of microsatellite markers in the western flower thrips Frankliniella occidentalis.

    PubMed

    Cao, Li-Jun; Li, Ze-Min; Wang, Ze-Hua; Zhu, Liang; Gong, Ya-Jun; Chen, Min; Wei, Shu-Jun

    2016-05-20

    Recent improvements in next-generation sequencing technologies have enabled investigation of microsatellites on a genome-wide scale. Faced with a huge number of candidates, the use of appropriate marker selection criteria is crucial. Here, we used the western flower thrips Frankliniella occidentalis for an empirical microsatellite survey and validation; 132,251 candidate microsatellites were identified, 92,102 of which were perfect. Dinucleotides were the most abundant category, while (AG)n was the most abundant motif. Sixty primer pairs were designed and validated in two natural populations, of which 30 loci were polymorphic, stable, and repeatable, but not all in Hardy-Weinberg equilibrium (HWE) and linkage equilibrium. Four marker panels were constructed to understand the effect of marker selection on population genetic analyses: (i) only accept loci with single nucleotide insertions (SNI); (ii) only accept the most polymorphic loci (MP); (iii) only accept loci that did not deviate from HWE, did not show SNIs, and had unambiguous peaks (SS); and (iv) all developed markers (ALL). Although the MP panel yielded the microsatellites of highest genetic diversity, followed by the SNI panel, the SS panel performed best in individual assignment. Our study proposes stringent criteria for selection of microsatellites from a large number of genomic candidates for population genetic studies.

  18. Ionic electrodeposition of II-VI and III-V compounds. III. Computer simulation of quasi-rest potentials for M_1X_1 compounds analogous to CdTe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelken, R.D.

    1987-04-01

    The quasi-rest potential (QRP) has been proposed as a key quantity in characterizing compound semiconductor (e.g. CdTe) electrodeposition. This article expands the modeling/simulation representative of Cd_xTe in chemical equilibrium to calculate two "QRPs": E_M1, the mixed potential occurring immediately after current interruption and before any relaxation in double layer ion concentration and significant ion exchange/surface stoichiometry change occur, and E_M2, another mixed potential occurring after the double layer ion concentrations have relaxed to their bulk values but still before any significant surface composition change occurs. Significant predictions include: existence of a dramatic negative transition in QRP, with negative-going deposition potential, centered on the potential of perfect stoichiometry (PPS); inequality, in general, between the PPS and E_M1 unless the deposit remains in equilibrium with the electrolyte (no ion exchange at open circuit); negligible sensitivity of QRP-E curves to the activity coefficient parameter, implying the importance of the PPS in characterizing compound deposition; and disappearance of the transition structure for sufficiently positive Gibbs free energies.

  19. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    PubMed

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced the trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies.
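
    The phonetic-similarity coding is based on consonant classes, but its essence, turning wordlists into a distance matrix that the distance-based methods (StarlingNJ, NJ, UPGMA) can consume, can be sketched with plain Levenshtein distances on invented three-item wordlists; the lects and word forms below are hypothetical.

        import numpy as np

        def levenshtein(a, b):
            # Classic dynamic-programming edit distance between two strings.
            d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            d[:, 0] = np.arange(len(a) + 1)
            d[0, :] = np.arange(len(b) + 1)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                                  d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
            return d[len(a), len(b)]

        wordlists = {"lect A": ["sa", "qen", "mur"],    # hypothetical data
                     "lect B": ["se", "qan", "mur"],
                     "lect C": ["shi", "kan", "bur"]}
        names = list(wordlists)
        for x in names:  # summed edit distances form the input distance matrix
            print(x, [sum(levenshtein(u, v)
                          for u, v in zip(wordlists[x], wordlists[y]))
                      for y in names])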

  20. Towards a Formal Genealogical Classification of the Lezgian Languages (North Caucasus): Testing Various Phylogenetic Methods on Lexical Data

    PubMed Central

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced the trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies. PMID:25719456

  1. Evaluation of the Performance of Five Diagnostic Tests for Fasciola hepatica Infection in Naturally Infected Cattle Using a Bayesian No Gold Standard Approach.

    PubMed

    Mazeri, Stella; Sargison, Neil; Kelly, Robert F; Bronsvoort, Barend M deC; Handel, Ian

    2016-01-01

    The clinical and economic importance of fasciolosis has been recognised for centuries, yet diagnostic tests available for cattle are far from perfect. Test evaluation has mainly been carried out using gold standard approaches or under experimental settings, the limitations of which are well known. In this study, a Bayesian no gold standard approach was used to estimate the diagnostic sensitivity and specificity of five tests for fasciolosis in cattle. These included detailed liver necropsy including gall bladder egg count, faecal egg counting, a commercially available copro-antigen ELISA, an in-house serum excretory/secretory antibody ELISA and routine abattoir liver inspection. In total 619 cattle slaughtered at one of Scotland's biggest abattoirs were sampled, during three sampling periods spanning summer 2013, winter 2014 and autumn 2014. Test sensitivities and specificities were estimated using an extension of the Hui Walter no gold standard model, where estimates were allowed to vary between seasons if tests were a priori believed to perform differently for any reason. The results of this analysis provide novel information on the performance of these tests in a naturally infected cattle population and at different times of the year where different levels of acute or chronic infection are expected. Accurate estimates of sensitivity and specificity will allow for routine abattoir liver inspection to be used as a tool for monitoring the epidemiology of F. hepatica as well as evaluating herd health planning. Furthermore, the results provide evidence to suggest that the copro-antigen ELISA does not cross-react with Calicophoron daubneyi rumen fluke parasites, while the serum antibody ELISA does.
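
    The flavor of no-gold-standard estimation can be conveyed with a deliberately reduced toy: one test in one population, Beta priors on prevalence, sensitivity and specificity, and the posterior evaluated on a grid (the paper fits a multi-test, multi-season Hui Walter extension by MCMC). The counts and prior choices below are invented for illustration.

        import numpy as np
        from scipy import stats

        y, n = 210, 619                      # hypothetical positives / sampled
        grid = np.linspace(0.01, 0.99, 99)
        p, se, sp = np.meshgrid(grid, grid, grid, indexing="ij")

        ap = se * p + (1 - sp) * (1 - p)     # apparent (test-positive) prevalence
        log_post = (stats.binom.logpmf(y, n, ap)
                    + stats.beta.logpdf(se, 20, 5)    # prior: Se around 0.8
                    + stats.beta.logpdf(sp, 20, 2)    # prior: Sp around 0.9
                    + stats.beta.logpdf(p, 1, 1))     # flat prevalence prior
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        prev_marginal = post.sum(axis=(1, 2))         # marginalize over Se, Sp
        print("posterior mean prevalence:", float(grid @ prev_marginal))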

  2. Retrieval of exoplanet emission spectra with HyDRA

    NASA Astrophysics Data System (ADS)

    Gandhi, Siddharth; Madhusudhan, Nikku

    2018-02-01

    Thermal emission spectra of exoplanets provide constraints on the chemical compositions, pressure-temperature (P-T) profiles, and energy transport in exoplanetary atmospheres. Accurate inferences of these properties rely on the robustness of the atmospheric retrieval methods employed. While extant retrieval codes have provided significant constraints on molecular abundances and temperature profiles in several exoplanetary atmospheres, the constraints on their deviations from thermal and chemical equilibria have yet to be fully explored. Our present work is a step in this direction. We report HyDRA, a disequilibrium retrieval framework for thermal emission spectra of exoplanetary atmospheres. The retrieval code uses the standard architecture of a parametric atmospheric model coupled with Bayesian statistical inference using the Nested Sampling algorithm. For a given dataset, the retrieved compositions and P-T profiles are used in tandem with the GENESIS self-consistent atmospheric model to constrain layer-by-layer deviations from chemical and radiative-convective equilibrium in the observable atmosphere. We demonstrate HyDRA on the Hot Jupiter WASP-43b with a high-precision emission spectrum. We retrieve an H2O mixing ratio of log(H2O) = -3.54^{+0.82}_{-0.52}, consistent with previous studies. We detect H2O and a combined CO/CO2 at 8-sigma significance. We find the dayside P-T profile to be consistent with radiative-convective equilibrium within the 1-sigma limits and with low day-night redistribution, consistent with previous studies. The derived compositions are also consistent with thermochemical equilibrium for the corresponding distribution of P-T profiles. In the era of high precision and high resolution emission spectroscopy, HyDRA provides a path to retrieve disequilibrium phenomena in exoplanetary atmospheres.

  3. Aeroheating Predictions for X-34 Using an Inviscid-Boundary Layer Method

    NASA Technical Reports Server (NTRS)

    Riley, Christopher J.; Kleb, William L.; Alter, Steven J.

    1998-01-01

    Radiative equilibrium surface temperatures and surface heating rates from a combined inviscid-boundary layer method are presented for the X-34 Reusable Launch Vehicle for several points along the hypersonic descent portion of its trajectory. Inviscid, perfect-gas solutions are generated with the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and the Data-Parallel Lower-Upper Relaxation (DPLUR) code. Surface temperatures and heating rates are then computed using the Langley Approximate Three-Dimensional Convective Heating (LATCH) engineering code employing both laminar and turbulent flow models. The combined inviscid-boundary layer method provides accurate predictions of surface temperatures over most of the vehicle and requires much less computational effort than a Navier-Stokes code. This enables the generation of a more thorough aerothermal database which is necessary to design the thermal protection system and specify the vehicle's flight limits.
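
    The radiative-equilibrium surface temperatures reported here follow from a simple steady-state energy balance: the convective heating rate q_w is re-radiated by the surface, q_w = eps * sigma * T_w^4. A one-function sketch, with an assumed emissivity and heating rate rather than values from the paper:

        SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def radiative_equilibrium_temperature(q_w, emissivity=0.8):
            # Wall temperature (K) whose re-radiation balances q_w (W/m^2).
            return (q_w / (emissivity * SIGMA)) ** 0.25

        print(radiative_equilibrium_temperature(2.0e5))  # ~1449 K at 20 W/cm^2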

  4. ALARIC: An algorithm for constructing arbitrarily complex initial density distributions with low particle noise for SPH/SPMHD applications

    NASA Astrophysics Data System (ADS)

    Vela Vela, Luis; Sanchez, Raul; Geiger, Joachim

    2018-03-01

    A method is presented to obtain initial conditions for Smoothed Particle Hydrodynamic (SPH) scenarios where arbitrarily complex density distributions and low particle noise are needed. Our method, named ALARIC, tampers with the evolution of the internal variables to obtain a fast and efficient profile evolution towards the desired goal. The result has very low levels of particle noise and constitutes a perfect candidate to study the equilibrium and stability properties of SPH/SPMHD systems. The method uses the iso-thermal SPH equations to calculate hydrodynamical forces under the presence of an external fictitious potential and evolves them in time with a 2nd-order symplectic integrator. The proposed method generates tailored initial conditions that perform better in many cases than those based on purely crystalline lattices, since it prevents the appearance of anisotropies.
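
    For reference, the 2nd-order symplectic integrator mentioned above is, in its common kick-drift-kick (leapfrog) form, only a few lines; the harmonic-oscillator check below is a generic illustration, not ALARIC's actual fictitious-potential setup.

        import numpy as np

        def leapfrog_step(x, v, accel, dt):
            # One kick-drift-kick step of the 2nd-order symplectic scheme.
            v_half = v + 0.5 * dt * accel(x)            # half kick
            x_new = x + dt * v_half                     # drift
            v_new = v_half + 0.5 * dt * accel(x_new)    # half kick
            return x_new, v_new

        # Toy check on a unit-frequency harmonic potential: energy stays bounded.
        x, v = np.array([1.0]), np.array([0.0])
        for _ in range(1000):
            x, v = leapfrog_step(x, v, lambda x: -x, 0.05)
        print("energy drift:", 0.5 * (v[0] ** 2 + x[0] ** 2) - 0.5)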

  5. Separation in Logistic Regression: Causes, Consequences, and Control.

    PubMed

    Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg

    2018-04-01

    Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
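
    A compact demonstration of both the problem and the penalized-likelihood cure, here with a ridge (L2) penalty, one of the penalties the article discusses; the six data points are constructed to be perfectly separated at x = 1.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        X = np.array([[0.1], [0.5], [0.9], [1.1], [1.6], [2.0]])
        y = np.array([0, 0, 0, 1, 1, 1])   # outcome flips exactly at x = 1

        # Unpenalized maximum likelihood would push the slope toward infinity;
        # the L2 penalty keeps it finite and interpretable.
        penalized = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
        print("penalized slope:", penalized.coef_[0][0])
        # Increasing C (weakening the penalty) lets the slope grow without
        # bound, which is how separation announces itself in practice.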

  6. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method for detecting blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was carried out in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it renders vessels in dark color on a white background, creating good contrast between vessels and background. Its complexity is very low, and spurious image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification method that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used come from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to an external sample outside the DRIVE database exhibiting retinopathy, and a perfect result was obtained.
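
    The classification step lends itself to a short sketch: a per-pixel Gaussian classifier in which each class (vessel or background) is summarized by the mean and variance of its training intensities, as the abstract describes. The intensity values below are made up for illustration.

        import numpy as np

        def fit(intensities):
            # Summarize a class by its intensity mean and (regularized) variance.
            return np.mean(intensities), np.var(intensities) + 1e-9

        def log_likelihood(x, mu, var):
            return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

        vessel = fit(np.array([35.0, 40.0, 38.0, 42.0]))      # dark vessel pixels
        backgr = fit(np.array([180.0, 200.0, 190.0, 210.0]))  # bright background

        pixel = 50.0
        is_vessel = log_likelihood(pixel, *vessel) > log_likelihood(pixel, *backgr)
        print("vessel" if is_vessel else "background")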

  7. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of water quality parameters have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
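
    A minimal reproduction of this workflow with statsmodels, on a synthetic monthly series standing in for a water-quality parameter (the paper's data and fitted ARIMA order are not reproduced here):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(0)   # toy seasonal series, e.g. monthly pH
        y = pd.Series(7.5 + 0.3 * np.sin(np.arange(120) * 2 * np.pi / 12)
                      + rng.normal(0, 0.1, 120))

        res = ARIMA(y, order=(1, 0, 1)).fit()
        print("BIC:", res.bic)                         # model-selection criterion
        print(acorr_ljungbox(res.resid, lags=[12]))    # residual whiteness check
        forecast = res.get_forecast(steps=12)
        print(forecast.conf_int(alpha=0.05).head())    # 95% confidence limits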

  8. Disentangling the effects of climate, density dependence, and harvest on an iconic large herbivore's population dynamics.

    PubMed

    Koons, David N; Colchero, Fernando; Hersey, Kent; Gimenez, Olivier

    2015-06-01

    Understanding the relative effects of climate, harvest, and density dependence on population dynamics is critical for guiding sound population management, especially for ungulates in arid and semiarid environments experiencing climate change. To address these issues for bison in southern Utah, USA, we applied a Bayesian state-space model to a 72-yr time series of abundance counts. While accounting for known harvest (as well as live removal) from the population, we found that the bison population in southern Utah exhibited a strong potential to grow from low density (β0 = 0.26; Bayesian credible interval based on 95% of the highest posterior density [BCI] = 0.19-0.33), and weak but statistically significant density dependence (β1 = -0.02, BCI = -0.04 to -0.004). Early spring temperatures also had strong positive effects on population growth (Pfat1 = 0.09, BCI = 0.04-0.14), much more so than precipitation and other temperature-related variables (model weight > three times more than that for other climate variables). Although we hypothesized that harvest is the primary driving force of bison population dynamics in southern Utah, our elasticity analysis indicated that changes in early spring temperature could have a greater relative effect on equilibrium abundance than either harvest or the strength of density dependence. Our findings highlight the utility of incorporating elasticity analyses into state-space population models, and the need to include climatic processes in wildlife management policies and planning.
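
    The deterministic skeleton of such a growth model can be simulated in a few lines. The sketch below uses a Ricker-type form with the abstract's point estimates b0 = 0.26 and b1 = -0.02; the functional form of density dependence, the units of abundance, and the covariate and harvest series are unspecified here and chosen purely for illustration.

        import numpy as np

        # log N[t+1] = log N[t] + b0 + b1*N[t] + b2*springT[t]
        b0, b1, b2 = 0.26, -0.02, 0.09
        rng = np.random.default_rng(1)

        N = 2.0                        # low initial density (arbitrary units)
        for t in range(72):
            springT = rng.normal()     # standardized early-spring temperature
            N = N * np.exp(b0 + b1 * N + b2 * springT)

        print("abundance after 72 yr:", round(N, 1),
              "| deterministic equilibrium:", -b0 / b1)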

  9. Bayesian data analysis for newcomers.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
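
    In the spirit of the simple, directly interpretable examples the article describes, here is the classic beta-binomial estimation of a proportion with toy numbers; the posterior supports statements, such as the probability that the parameter exceeds 0.5, that frequentist outputs do not directly provide.

        import numpy as np
        from scipy import stats

        heads, flips = 7, 10
        prior_a, prior_b = 2, 2        # weakly informative Beta(2, 2) prior
        post = stats.beta(prior_a + heads, prior_b + flips - heads)

        print("posterior mean:", post.mean())
        print("95% credible interval:", post.ppf([0.025, 0.975]))
        print("P(theta > 0.5):", 1 - post.cdf(0.5))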

  10. Genetic characterization of Colombian Brahman cattle using microsatellite markers.

    PubMed

    Gómez, Y M; Fernandez, M; Rivera, D; Gómez, G; Bernal, J E

    2013-07-01

    Genetic structure and diversity of 3789 animals of the Brahman breed from 23 Colombian regions were assessed. Considering the Brahman Zebu cattle as a single population, the multilocus test based on HW equilibrium shows significant differences (P < 0.001). Genetic characterization of the cattle population allowed the genetic variability to be examined, giving H(o) = 0.6621. The Brahman population in Colombia shows a small subdivision within populations (F(it) = 0.045) and an almost non-existent geographic subdivision, i.e., low differentiation (F(st) = 0.003), and the calculated F(is) (0.042) indicates no detriment to the variability in the population: despite close mating, some force sustains the variability without inbreeding actually affecting the cattle population. The outcomes of multivariate analyses, Bayesian inferences and interindividual genetic distances suggested that there is no genetic sub-structure in the population, because of the high rate of animal migration among regions.

  11. Novel forms of colloidal self-organization in temporally and spatially varying external fields: from low-density network-forming fluids to spincoated crystals

    NASA Astrophysics Data System (ADS)

    Yethiraj, Anand

    2010-03-01

    External fields affect self-organization in Brownian colloidal suspensions in many different ways [1]. High-frequency time-varying a.c. electric fields can induce effectively quasi-static dipolar inter-particle interactions. While dipolar interactions can provide access to multiple open equilibrium crystal structures [2] whose origin is now reasonably well understood, they can also give rise to competing interactions on short and long length scales that produce unexpected low-density ordered phases [3]. Farther from equilibrium, competing external fields are active in colloid spincoating. Drying colloidal suspensions on a spinning substrate produces a "perfect polycrystal" - tiny polycrystalline domains that exhibit long-range inter-domain orientational order [4] with resultant spectacular optical effects that are decoupled from single-crystallinity. High-speed movies of drying crystals yield insights into mechanisms of structure formation. Phenomena arising from multiple spatially- and temporally-varying external fields can give rise to further control of order and disorder, with potential application as patterned (photonic and magnetic) materials. [1] A. Yethiraj, Soft Matter 3, 1099 (2007). [2] A. Yethiraj, A. van Blaaderen, Nature 421, 513 (2003). [3] A.K. Agarwal, A. Yethiraj, Phys. Rev. Lett. 102, 198301 (2009). [4] C. Arcos, K. Kumar, W. González-Viñas, R. Sirera, K. Poduska, A. Yethiraj, Phys. Rev. E 77, 050402(R) (2008).

  12. Effect of additives on isothermal crystallization kinetics and physical characteristics of coconut oil.

    PubMed

    Chaleepa, Kesarin; Szepes, Anikó; Ulrich, Joachim

    2010-05-01

    The effect of lauric acid and low-HLB sucrose esters (L-195, S170) on the isothermal crystallization of coconut oil was investigated by differential scanning calorimetry. The fundamental crystallization parameters, such as induction time of nucleation and crystallization rate, were obtained by using the Gompertz equation. The Gibbs free energy of nucleation was calculated via the Fisher-Turnbull equation based on the equilibrium melting temperature. All additives investigated in this work proved to have an inhibition effect on the nucleation and crystallization kinetics of coconut oil. Our results revealed that the inhibition effect is related to the dissimilarity of the molecular characteristics between coconut oil and the additives. The equilibrium melting temperature (T(m) degrees) of the coconut oil-additive mixtures estimated by the Hoffman-Weeks method was decreased by the addition of lauric acid and increased by using sucrose esters as additives. Micrographs showing simultaneous crystallization of coconut oil and lauric acid indicated that strong molecular interaction led to an increase in lamellar thickness, resulting in the T(m) degrees depression of coconut oil. The addition of L-195 modified the crystal morphology of coconut oil into large, dense, non-porous crystals without altering the polymorphic occurrence of coconut oil. The enhancement in lamellar thickness and crystal perfection supported the T(m) degrees elevation of coconut oil.

  13. Bayesian demography 250 years after Bayes

    PubMed Central

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889

  14. A graphene Zener-Klein transistor cooled by a hyperbolic substrate

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Berthou, Simon; Lu, Xiaobo; Wilmart, Quentin; Denis, Anne; Rosticher, Michael; Taniguchi, Takashi; Watanabe, Kenji; Fève, Gwendal; Berroir, Jean-Marc; Zhang, Guangyu; Voisin, Christophe; Baudin, Emmanuel; Plaçais, Bernard

    2018-01-01

    The engineering of cooling mechanisms is a bottleneck in nanoelectronics. Thermal exchanges in diffusive graphene are mostly driven by defect-assisted acoustic phonon scattering, but the case of high-mobility graphene on hexagonal boron nitride (hBN) is radically different, with a prominent contribution of remote phonons from the substrate. Bilayer graphene on a hBN transistor with a local gate is driven in a regime where almost perfect current saturation is achieved by compensation of the decrease in the carrier density and Zener-Klein tunnelling (ZKT) at high bias. Using noise thermometry, we show that the ZKT triggers a new cooling pathway due to the emission of hyperbolic phonon polaritons in hBN by out-of-equilibrium electron-hole pairs beyond the super-Planckian regime. The combination of ZKT transport and hyperbolic phonon polariton cooling renders graphene on BN transistors a valuable nanotechnology for power devices and RF electronics.

  15. Entropic Lattice Boltzmann Simulations of Turbulence

    NASA Astrophysics Data System (ADS)

    Keating, Brian; Vahala, George; Vahala, Linda; Soe, Min; Yepez, Jeffrey

    2006-10-01

    Because of its simplicity, nearly perfect parallelization and vectorization on supercomputer platforms, lattice Boltzmann (LB) methods hold great promise for simulations of nonlinear physics. Indeed, our MHD-LB code has the best sustained performance/PE of any code on the Earth Simulator. By projecting into the higher dimensional kinetic phase space, the solution trajectory is simpler and much easier to compute than standard CFD approach. However, simple LB -- with its simple advection and local BGK collisional relaxation -- does not impose positive definiteness of the distribution functions in the time evolution. This leads to numerical instabilities for very low transport coefficients. In Entropic LB (ELB) one determines a discrete H-theorem and the equilibrium distribution functions subject to the collisional invariants. The ELB algorithm is unconditionally stable to arbitrary small transport coefficients. Various choices of velocity discretization are examined: 15, 19 and 27-bit ELB models. The connection between Tsallis and Boltzmann entropies are clarified.

  16. Daughters mimic sterile neutrinos (almost!) perfectly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasenkamp, Jasper, E-mail: Jasper.Hasenkamp@nyu.edu

    Only recently have cosmological observations become sensitive to hot dark matter (HDM) admixtures with sub-eV mass, m_hdm^eff < eV, that are not fully thermalised, ΔN_eff < 1. We argue that their almost automatic interpretation as a sterile neutrino species is not preferred, on either theoretical or practical parsimony grounds, over HDM formed by the decay products (daughters) of an out-of-equilibrium particle decay. While daughters mimic sterile neutrinos in N_eff and m_hdm^eff, there are opportunities to assess this possibility in likelihood analyses. Connecting cosmological parameters and moments of momentum distribution functions, we show that—also in the case of mass-degenerate daughters with indistinguishable main physical effects—the mimicry breaks down when the next moment, the skewness, is considered. Predicted differences of order one in the root-mean-squares of absolute momenta are too small for current sensitivities.

  17. Enhanced thermoelectric performance of defected silicene nanoribbons

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Guo, Z. X.; Zhang, Y.; Ding, J. W.; Zheng, X. J.

    2016-02-01

    Based on the non-equilibrium Green's function method, we investigate the thermoelectric performance of both zigzag (ZSiNRs) and armchair (ASiNRs) silicene nanoribbons with central or edge defects. For perfect silicene nanoribbons (SiNRs), it is shown that as the width increases, the maximum of the ZT values (ZTM) decreases monotonously while the phononic thermal conductance increases linearly. For various types of edges and defects, with increasing defect numbers in the longitudinal direction, ZTM increases monotonously while the phononic thermal conductance decreases. Compared with ZSiNRs, defected ASiNRs possess higher thermoelectric performance due to a higher Seebeck coefficient and lower thermal conductance. In particular, an approximately 2.5-fold enhancement of ZT values is obtained in ASiNRs with edge defects. Our theoretical simulations indicate that by controlling the type and number of defects, the ZT values of SiNRs can be enhanced greatly, which suggests very appealing thermoelectric applications.
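
    For orientation, the figure of merit being optimized is ZT = S^2 G T / (kappa_el + kappa_ph), so suppressing the phononic thermal conductance with defects raises ZT directly. The magnitudes below are merely illustrative of nanoribbon-scale values, not the paper's computed results.

        S = 2.0e-4           # Seebeck coefficient, V/K (assumed)
        G = 3.0e-5           # electrical conductance, S (assumed)
        kappa_el = 2.0e-10   # electronic thermal conductance, W/K (assumed)
        kappa_ph = 1.0e-10   # phononic thermal conductance, W/K, cut by defects
        T = 300.0            # temperature, K

        ZT = S ** 2 * G * T / (kappa_el + kappa_ph)
        print(f"ZT = {ZT:.2f}")   # lowering kappa_ph visibly raises ZT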

  18. Ups and downs of economics and econophysics — Facebook forecast

    NASA Astrophysics Data System (ADS)

    Gajic, Nenad; Budinski-Petkovic, Ljuba

    2013-01-01

    What is econophysics and its relationship with economics? What is the state of economics after the global economic crisis, and is there a future for the paradigm of market equilibrium, with imaginary perfect competition and rational agents? Can the next paradigm of economics adopt important assumptions derived from econophysics models: that markets are chaotic systems, striving to extremes as bubbles and crashes show, with psychologically motivated, statistically predictable individual behaviors? Is the future of econophysics, as predicted here, to disappear and become a part of economics? A good test of the current state of econophysics and its methods is the valuation of Facebook immediately after the initial public offering - this forecast indicates that Facebook is highly overvalued, and its IPO valuation of 104 billion dollars is mostly the new financial bubble based on the expectations of unlimited growth, although it’s easy to prove that Facebook is close to the upper limit of its users.

  19. Thermodynamics in variable speed of light theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Racker, Juan (Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N); Sisterna, Pablo

    2009-10-15

    The perfect fluid in the context of a covariant variable speed of light theory proposed by J. Magueijo is studied. On the one hand, the modified first law of thermodynamics, together with a recipe to obtain equations of state, is obtained. On the other hand, the Newtonian limit is performed to obtain the nonrelativistic hydrostatic equilibrium equation for the theory. The results obtained are used to determine the time variation of the radius of Mercury induced by the variability of the speed of light (c), and the scalar contribution to the luminosity of white dwarfs. Using a bound for the change of that radius and combining it with an upper limit for the variation of the fine structure constant, a bound on the time variation of c is set. An independent bound is obtained from luminosity estimates for Stein 2051B.

  20. Hospital's activity-based financing system and manager-physician [corrected] interaction.

    PubMed

    Crainich, David; Leleu, Hervé; Mauleon, Ana

    2011-10-01

    This paper examines the consequences of the introduction of an activity-based reimbursement system on the behavior of physicians and hospital's managers. We consider a private for-profit sector where both hospitals and physicians are initially paid on a fee-for-service basis. We show that the benefit of the introduction of an activity-based system depends on the type of interaction between managers and physicians (simultaneous or sequential decision-making games). It is shown that, under the activity-based system, a sequential interaction with physician leader could be beneficial for both agents in the private sector. We further model an endogenous timing game à la Hamilton and Slutsky (Games Econ Behav 2: 29-46, 1990) in which the type of interaction is determined endogenously. We show that, under the activity-based system, the sequential interaction with physician leader is the unique subgame perfect equilibrium.

  1. Using an Informative Missing Data Model to Predict the Ability to Assess Recovery of Balance Control after Spaceflight

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Wood, Scott J.; Jain, Varsha

    2008-01-01

    Astronauts show degraded balance control immediately after spaceflight. To assess this change, astronauts' ability to maintain a fixed stance under several challenging stimuli on a movable platform is quantified by "equilibrium" scores (EQs) on a scale of 0 to 100, where 100 represents perfect control (sway angle of 0) and 0 represents data loss where no sway angle is observed because the subject has to be restrained from falling. By comparing post- to pre-flight EQs for actual astronauts vs. controls, we built a classifier for deciding when an astronaut has recovered. Future diagnostic performance depends both on the sampling distribution of the classifier as well as the distribution of its input data. Taking this into consideration, we constructed a predictive ROC by simulation after modeling P(EQ = 0) in terms of a latent EQ-like beta-distributed random variable with random effects.
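
    The predictive-ROC-by-simulation idea can be sketched with plain Gaussians standing in for the paper's latent beta model with its point mass at EQ = 0; every distributional choice below is an invented placeholder.

        import numpy as np

        rng = np.random.default_rng(42)
        recovered = rng.normal(0.0, 8.0, 5000)    # post-pre EQ change, recovered
        impaired = rng.normal(-15.0, 8.0, 5000)   # post-pre EQ change, impaired

        thresholds = np.linspace(-40.0, 20.0, 13)
        for c in thresholds[::4]:   # a few operating points on the ROC curve
            print(f"cutoff {c:6.1f}: TPR {(impaired < c).mean():.2f}, "
                  f"FPR {(recovered < c).mean():.2f}")

        # AUC as the probability an impaired case scores below a recovered one:
        auc = (impaired[:, None] < recovered[None, :]).mean()
        print(f"simulated AUC = {auc:.3f}")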

  2. Elastic Green’s Function in Anisotropic Bimaterials Considering Interfacial Elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juan, Pierre-Alexandre; Dingreville, Remi

    Here, the two-dimensional elastic Green’s function is calculated for a general anisotropic elastic bimaterial containing a line dislocation and a concentrated force while accounting for the interfacial structure by means of a generalized interfacial elasticity paradigm. The introduction of the interface elasticity model gives rise to boundary conditions that are effectively equivalent to those of a weakly bonded interface. The equations of elastic equilibrium are solved by complex variable techniques and the method of analytical continuation. The solution is decomposed into the sum of the Green’s function corresponding to the perfectly bonded interface and a perturbation term corresponding to the complex coupling nature between the interface structure and a line dislocation/concentrated force. Such a construct can be implemented into the boundary integral equations and the boundary element method for analysis of nano-layered structures and epitaxial systems where the interface structure plays an important role.

  3. Elastic Green’s Function in Anisotropic Bimaterials Considering Interfacial Elasticity

    DOE PAGES

    Juan, Pierre-Alexandre; Dingreville, Remi

    2017-09-13

    Here, the two-dimensional elastic Green’s function is calculated for a general anisotropic elastic bimaterial containing a line dislocation and a concentrated force while accounting for the interfacial structure by means of a generalized interfacial elasticity paradigm. The introduction of the interface elasticity model gives rise to boundary conditions that are effectively equivalent to those of a weakly bonded interface. The equations of elastic equilibrium are solved by complex variable techniques and the method of analytical continuation. The solution is decomposed into the sum of the Green’s function corresponding to the perfectly bonded interface and a perturbation term corresponding to the complex coupling nature between the interface structure and a line dislocation/concentrated force. Such a construct can be implemented into the boundary integral equations and the boundary element method for analysis of nano-layered structures and epitaxial systems where the interface structure plays an important role.

  4. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  5. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
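
    The 'stiff' integration problem that LSENS is built around can be illustrated with the classic Robertson three-species mechanism and an implicit (BDF) solver; this is a generic stand-in for demonstration, not LSENS itself or its chemistry.

        import numpy as np
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            # Rate constants spanning ~9 orders of magnitude make this stiff.
            y1, y2, y3 = y
            return [-0.04 * y1 + 1.0e4 * y2 * y3,
                    0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                    3.0e7 * y2 ** 2]

        sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                        method="BDF", rtol=1e-8, atol=1e-10)
        print("final mole fractions:", sol.y[:, -1])  # components sum to ~1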

  6. Adsorption characteristics of methylene blue onto agricultural waste lotus leaf in batch and column modes.

    PubMed

    Han, Xiuli; Wang, Wei; Ma, Xiaojian

    2011-01-01

    The adsorption potential of lotus leaf to remove methylene blue (MB) from aqueous solution was investigated in batch and fixed-bed column experiments. Langmuir, Freundlich, Temkin and Koble-Corrigan isotherm models were employed to describe the adsorption behavior. The results of the analysis indicated that the equilibrium data were perfectly represented by the Temkin isotherm, and the Langmuir saturation adsorption capacity of lotus leaf was found to be 239.6 mg g(-1) at 303 K. In the fixed-bed column experiments, the effects of flow rate, influent concentration and bed height on the breakthrough characteristics of adsorption were discussed. The Thomas and bed-depth/service-time (BDST) models were applied to the column experimental data to determine the characteristic parameters of column adsorption. The two models were found to be suitable to describe the dynamic behavior of MB adsorbed onto the lotus leaf powder column.
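
    As an aside on the isotherm fitting, the Langmuir form q_e = q_max * K * C_e / (1 + K * C_e) can be fitted by nonlinear least squares in a few lines; the data points below are synthetic, chosen only to resemble the reported saturation capacity.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(Ce, qmax, K):
            return qmax * K * Ce / (1.0 + K * Ce)

        Ce = np.array([5.0, 20.0, 60.0, 120.0, 250.0])     # mg/L at equilibrium
        qe = np.array([48.0, 120.0, 180.0, 205.0, 222.0])  # mg/g adsorbed (toy)

        (qmax, K), _ = curve_fit(langmuir, Ce, qe, p0=(240.0, 0.05))
        print(f"q_max = {qmax:.1f} mg/g, K = {K:.3f} L/mg")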

  7. Creation of Spin-Triplet Cooper Pairs in the Absence of Magnetic Ordering

    NASA Astrophysics Data System (ADS)

    Breunig, Daniel; Burset, Pablo; Trauzettel, Björn

    2018-01-01

    In superconducting spintronics, it is essential to generate spin-triplet Cooper pairs on demand. Up to now, proposals to do so concentrate on hybrid structures in which a superconductor (SC) is combined with a magnetically ordered material (or an external magnetic field). We, instead, identify a novel way to create and isolate spin-triplet Cooper pairs in the absence of any magnetic ordering. This achievement is only possible because we drive a system with strong spin-orbit interaction, the Dirac surface states of a strong topological insulator (TI), out of equilibrium. In particular, we consider a bipolar TI-SC-TI junction, where the electrochemical potentials in the outer leads differ in their overall sign. As a result, we find that nonlocal singlet pairing across the junction is completely suppressed for any excitation energy. Hence, this junction acts as a perfect spin-triplet filter across the SC, generating equal-spin Cooper pairs via crossed Andreev reflection.

  8. Creation of Spin-Triplet Cooper Pairs in the Absence of Magnetic Ordering.

    PubMed

    Breunig, Daniel; Burset, Pablo; Trauzettel, Björn

    2018-01-19

    In superconducting spintronics, it is essential to generate spin-triplet Cooper pairs on demand. Up to now, proposals to do so concentrate on hybrid structures in which a superconductor (SC) is combined with a magnetically ordered material (or an external magnetic field). We, instead, identify a novel way to create and isolate spin-triplet Cooper pairs in the absence of any magnetic ordering. This achievement is only possible because we drive a system with strong spin-orbit interaction, the Dirac surface states of a strong topological insulator (TI), out of equilibrium. In particular, we consider a bipolar TI-SC-TI junction, where the electrochemical potentials in the outer leads differ in their overall sign. As a result, we find that nonlocal singlet pairing across the junction is completely suppressed for any excitation energy. Hence, this junction acts as a perfect spin-triplet filter across the SC, generating equal-spin Cooper pairs via crossed Andreev reflection.

  9. Adiabatic Expansion of Electron Gas in a Magnetic Nozzle.

    PubMed

    Takahashi, Kazunori; Charles, Christine; Boswell, Rod; Ando, Akira

    2018-01-26

    A specially constructed experiment shows the near-perfect adiabatic expansion of an ideal electron gas, resulting in a polytropic index greater than 1.4, approaching the adiabatic value of 5/3, when electric fields are removed from the system, while a polytropic index close to unity is observed when the electrons are trapped by the electric fields. The measurements were made on collisionless electrons in an argon plasma expanding in a magnetic nozzle. The collision lengths of all electron collision processes are greater than the scale length of the expansion, meaning the system cannot be in thermodynamic equilibrium, yet thermodynamic concepts can be used, with caution, in explaining the results. In particular, a Lorentz force, created by inhomogeneities in the radial plasma density, does work on the expanding magnetic field, reducing the internal energy of the electron gas, which behaves as an adiabatically expanding ideal gas.

  10. Adiabatic Expansion of Electron Gas in a Magnetic Nozzle

    NASA Astrophysics Data System (ADS)

    Takahashi, Kazunori; Charles, Christine; Boswell, Rod; Ando, Akira

    2018-01-01

    A specially constructed experiment shows the near-perfect adiabatic expansion of an ideal electron gas, resulting in a polytropic index greater than 1.4, approaching the adiabatic value of 5/3, when electric fields are removed from the system, while a polytropic index close to unity is observed when the electrons are trapped by the electric fields. The measurements were made on collisionless electrons in an argon plasma expanding in a magnetic nozzle. The collision lengths of all electron collision processes are greater than the scale length of the expansion, meaning the system cannot be in thermodynamic equilibrium, yet thermodynamic concepts can be used, with caution, in explaining the results. In particular, a Lorentz force, created by inhomogeneities in the radial plasma density, does work on the expanding magnetic field, reducing the internal energy of the electron gas, which behaves as an adiabatically expanding ideal gas.
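
    The polytropic index in measurements like these is the slope of temperature against density in log-log space, since a polytropic law gives Te proportional to n^(gamma-1). A minimal sketch with hypothetical nozzle data:

```python
import numpy as np

# Hypothetical electron density (m^-3) and temperature (eV) pairs
# sampled along an expanding magnetic nozzle.
n = np.array([1.0e17, 6.0e16, 3.0e16, 1.5e16, 8.0e15])
Te = np.array([5.0, 3.6, 2.3, 1.45, 0.96])

# Polytropic law Te ~ n^(gamma - 1): gamma - 1 is the log-log slope.
slope, _ = np.polyfit(np.log(n), np.log(Te), 1)
gamma = 1.0 + slope
print(f"polytropic index ~ {gamma:.2f}")  # 5/3 would indicate adiabatic
```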

  11. Steady-State Density Functional Theory for Finite Bias Conductances.

    PubMed

    Stefanucci, G; Kurth, S

    2015-12-09

    In the framework of density functional theory, a formalism to describe electronic transport in the steady state is proposed which uses the density on the junction and the steady current as basic variables. We prove that, in a finite window around zero bias, there is a one-to-one map between the basic variables and both the local potential on, and the bias across, the junction. The resulting Kohn-Sham system features two exchange-correlation (xc) potentials: a local xc potential and an xc contribution to the bias. For weakly coupled junctions the xc potentials exhibit steps in the density-current plane which are shown to be crucial to describe the Coulomb blockade diamonds. At small currents these steps emerge as the equilibrium xc discontinuity bifurcates. The formalism is applied to a model benzene junction, finding perfect agreement with the orthodox theory of Coulomb blockade.

  12. Pigouvian taxation of energy for flow and stock externalities and strategic, noncompetitive energy pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirl, F.

    1994-01-01

    The literature on energy and carbon taxes is by and large concerned with the derivation of (globally) efficient strategies. In contrast, this paper considers the dynamic interactions between cartelized energy suppliers and a consumers' government that collectively taxes energy carriers for Pigouvian motives. Two different kinds of external costs are associated with energy consumption: flow (e.g., acid rain) and stock externalities (e.g., global warming). The dynamic interactions between a consumers' government and a producers' cartel are modeled as a differential game with a subgame perfect Nash equilibrium in linear and nonlinear Markov strategies. The major implications are that the nonlinear solutions are Pareto-inferior to the linear strategies, and energy suppliers may preempt energy taxation and thereby raise the price up front; however, this effect diminishes over time because the producers' price declines, while taxes increase. 22 refs., 5 figs., 1 tab.

  13. A stochastic equilibrium model for the North American natural gas market

    NASA Astrophysics Data System (ADS)

    Zhuang, Jifang

    This dissertation is an endeavor in the field of energy modeling for the North American natural gas market, using a mixed complementarity formulation combined with stochastic programming. The genesis of the stochastic equilibrium model presented in this dissertation is the deterministic market equilibrium model developed in [Gabriel, Kiet and Zhuang, 2005]. Based on some improvements that we made to this model, including proving new existence and uniqueness results, we present a multistage stochastic equilibrium model with uncertain demand for the deregulated North American natural gas market, using the recourse method of stochastic programming. The market participants considered by the model are pipeline operators, producers, storage operators, peak gas operators, marketers and consumers. Pipeline operators are described with regulated tariffs but also involve "congestion pricing" as a mechanism to allocate scarce pipeline capacity. Marketers are modeled as Nash-Cournot players in sales to the residential and commercial sectors but price-takers in all other aspects. Consumers are represented by demand functions in the marketers' problem. Producers, storage operators and peak gas operators are price-takers, consistent with perfect competition. Also, two types of natural gas markets are included: the long-term and spot markets. Market participants make both high-level planning decisions (first-stage decisions) in the long-term market and daily operational decisions (recourse decisions) in the spot market, subject to their engineering, resource and political constraints as well as market constraints on both the demand and the supply side, so as to simultaneously maximize their expected profits given others' decisions. The model is shown to be an instance of a mixed complementarity problem (MiCP) under minor conditions. The MiCP formulation is derived from applying the Karush-Kuhn-Tucker optimality conditions to the optimization problems faced by the market participants. Some theoretical results regarding the market prices in both markets are shown. We also illustrate the model on a representative sample network of two production nodes, two consumption nodes with discretely distributed end-user demand, and three seasons, using four cases.
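
    The Nash-Cournot behavior attributed to marketers can be illustrated in miniature: with linear inverse demand and constant marginal costs, iterating best responses converges to the Cournot equilibrium. All parameters below are hypothetical, and the toy is far simpler than the dissertation's MiCP formulation:

```python
# Two marketers compete a la Nash-Cournot facing inverse demand
# P(Q) = a - b*Q, with constant marginal costs c1, c2 (hypothetical).
a, b = 100.0, 1.0
c = [10.0, 20.0]

def best_response(q_other, ci):
    # Maximize (a - b*(qi + q_other) - ci)*qi  =>  qi = (a - ci - b*q_other)/(2b)
    return max(0.0, (a - ci - b * q_other) / (2.0 * b))

q = [10.0, 10.0]
for _ in range(100):                  # best-response iteration converges here
    q = [best_response(q[1], c[0]), best_response(q[0], c[1])]

price = a - b * sum(q)
print(f"q1 = {q[0]:.2f}, q2 = {q[1]:.2f}, price = {price:.2f}")
# Analytic Cournot equilibrium: qi = (a - 2*ci + cj)/(3b),
# i.e. q1 = 33.33, q2 = 23.33, P = 43.33 for these parameters.
```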

  14. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…

  15. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    PubMed Central

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396
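
    The "simplified example" ingredient of introductions like this is usually conjugate updating. A minimal beta-binomial sketch (all numbers hypothetical):

```python
import numpy as np
from scipy import stats

# Prior Beta(a, b) on a success probability, updated after observing
# k successes in n trials; the posterior is again a Beta distribution.
a, b = 2.0, 2.0          # weakly informative prior
k, n = 14, 20            # observed data

posterior = stats.beta(a + k, b + n - k)
print(f"posterior mean  : {posterior.mean():.3f}")
print(f"95% credible int: {posterior.ppf([0.025, 0.975]).round(3)}")
```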

  16. Metal/silicate partitioning of Pt and the origin of the "late veneer"

    NASA Astrophysics Data System (ADS)

    Ertel, W.; Walter, M. J.; Drake, M. J.; Sylvester, P. J.

    2002-12-01

    Highly siderophile elements (HSEs) are perfect tools for investigating core-forming processes in planetary bodies due to their Fe-loving (siderophile) geochemical behavior. Tremendous scientific effort has been invested in this field during the past 10 years - mostly in 1 atm experiments. However, little is known about their high-pressure geochemistry and partitioning behavior between core- and mantle-forming phases. This knowledge is essential to distinguish between equilibrium (Magma Ocean) and non-equilibrium (heterogeneous accretion, late veneer) models for the accretion history of the early Earth. We therefore chose to investigate the partitioning behavior of Pt up to pressures of 140 kbar (14 GPa) and temperatures of 1950°C. The melt composition used - identical to the melt systems used in 1 atm experiments - is the eutectic composition of Anorthite-Diopside (AnDi), a pseudo-basalt. A series of runs was performed which were internally buffered by the piston cylinder apparatus, followed by duplicate experiments buffered in the AnDi-C-CO2 system. These experiments constitute reversals, since they approach equilibrium from an initially higher and lower Pt solubility (8 ppm in the non-buffered runs, and essentially Pt-free in the buffered runs). Experimental charges were encapsulated in Pt capsules, which served as the source of Pt. Experiments up to 20 kbar were performed in a Quickpress piston cylinder apparatus, while experiments at higher pressures were performed in a Walker-type (Tucson, AZ) and a Kawai-type (Misasa, Japan) multi-anvil apparatus. Time series experiments were performed in piston-cylinder runs to determine minimum run durations for the achievement of equilibrium, and to guarantee high-quality partitioning data. Six hours was found to be sufficient to obtain equilibrium; in practice, all experiments exceeded 12 hours to assure equilibrium. In a second set of runs, the temperature dependence of the partitioning behavior of Pt was investigated between the melting point of the 1 atm AnDi system and the melting point of the Pt capsule material. Over 150 piston cylinder and 12 multi-anvil experiments have been performed. Pt solubility is only slightly dependent on temperature, decreasing between 1800 and 1400°C by less than an order of magnitude. In consequence, the partitioning behavior of Pt is mostly determined by its oxygen fugacity dependence, which has only been determined in 1 atm experiments. At 10 kbar, metal/silicate partition coefficients (D's) decrease by about 3 orders of magnitude. The reason for this is not understood, but might be attributed to a first-order phase transition as found for, e.g., SiO2 or H2O. Above 10 kbar, any increase in pressure does not lead to any further significant decrease in partition coefficients; solubilities stay roughly constant up to 140 kbar. Abundances of moderately siderophile elements were possibly established by metal/silicate equilibrium in a magma ocean. These results for Pt suggest that the abundances of HSEs were most probably established by the accretion of a chondritic veneer following core formation, as metal/silicate partition coefficients are too high to be consistent with metal/silicate equilibrium in a magma ocean.

  17. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, the air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model with the observed data. The framework is applied to simulated data as well as data obtained from experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show good concordance with the true values, indicating that the two-zone model assumptions agree with reality to a large extent and that the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters, together with knowledge of the uncertainty and variability in these quantities, can be used to provide better estimates not only of model outputs but also of model parameters. PMID:19403840
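
    For reference, the steady-state solution of the standard two-zone model, which is presumably what the estimated interzonal airflows are being compared against, fits in a few lines. Values below are hypothetical; the paper's Bayesian layer places priors on these quantities rather than fixing them.

```python
# Standard two-zone model at steady state: a source of strength G inside
# a near-field zone that exchanges air at rate beta with the far field,
# which is ventilated at rate Q (all values hypothetical).
G = 50.0      # generation rate, mg/min
Q = 2.0       # room ventilation rate, m^3/min
beta = 0.5    # interzonal airflow, m^3/min

C_far = G / Q                # far-field concentration, mg/m^3
C_near = G / Q + G / beta    # near-field concentration, mg/m^3
print(f"C_near = {C_near:.1f} mg/m^3, C_far = {C_far:.1f} mg/m^3")
```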

  18. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    ERIC Educational Resources Information Center

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A. G.

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are…

  19. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
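
    Regression calibration, the comparator here, substitutes an estimate of E[X | W] for the error-prone measurement W. A minimal simulation sketch; note that in practice the calibration step is fitted from replicate or validation data, whereas this toy uses the simulated truth to estimate the attenuation factor:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(0.0, 1.0, n)          # true covariate (unobserved in practice)
w = x + rng.normal(0.0, 0.8, n)      # error-prone measurement
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

# Naive regression of y on w attenuates the slope toward zero.
naive = np.polyfit(w, y, 1)[0]

# Regression calibration: replace w by the linear projection E[x | w].
# (Here the attenuation factor uses the simulated truth for illustration;
# real analyses estimate it from replicates or a validation substudy.)
lam = np.cov(w, x)[0, 1] / np.var(w)
x_hat = w.mean() + lam * (w - w.mean())
calibrated = np.polyfit(x_hat, y, 1)[0]
print(f"naive slope ~ {naive:.2f}, calibrated slope ~ {calibrated:.2f} (true 2.0)")
```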

  20. Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.

    PubMed

    Kawamori, Ai; Matsushima, Toshiya

    2010-05-01

    For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not given by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2, but overvalued the variable feeder in group 3 at a population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on the fixed alternative. Therefore, the basic assumption of a uniquely determined value function might be questioned.

  1. Cap-and-Trade Modeling and Analysis: Congested Electricity Market Equilibrium

    NASA Astrophysics Data System (ADS)

    Limpaitoon, Tanachai

    This dissertation presents an equilibrium framework for analyzing the impact of cap-and-trade regulation on a transmission-constrained electricity market. The cap-and-trade regulation of greenhouse gas emissions has gained momentum in the past decade. The impact of the regulation and its efficacy in the electric power industry depend on the interactions of demand elasticity, the transmission network, market structure, and the strategic behavior of firms. I develop an equilibrium model of an oligopoly electricity market in conjunction with a market for tradable emissions permits to study the implications of such interactions. My goal is to identify inefficiencies that may arise from policy design elements and to avoid any unintended adverse consequences for the electric power sector. I demonstrate this modeling framework with three case studies examining the impact of carbon cap-and-trade regulation. In the first case study, I examine equilibrium results under various scenarios of resource ownership and emission targets using a 24-bus IEEE electric transmission system. The second and third case studies apply the equilibrium model to a realistic electricity market, the Western Electricity Coordinating Council (WECC) 225-bus system, with a detailed representation of the California market. In the first and second case studies, I examine oligopoly in electricity with perfect competition in the permit market. I find that under a stringent emission cap and a high degree of concentration of non-polluting firms, the electricity market is subject to potential abuses of market power. Also, market power can occur in the procurement of non-polluting energy through the permit market when non-polluting resources are geographically concentrated in a transmission-constrained market. In the third case study, I relax the competitive-market-structure assumption for the permit market by allowing oligopolistic competition through a conjectural variation approach. A short-term equilibrium analysis of the joint markets in the presence of market power reveals that strategic permit trading can play a vital role in determining economic outcomes in the electricity market. In particular, I find that a firm with more efficient technologies can employ strategic withholding of permits, which allows it to increase its output share in the electricity market at the expense of other, less efficient firms. In addition, strategic permit trading can influence patterns of transmission congestion. These results illustrate that market structure and transmission congestion can have a significant impact on the market performance and environmental outcome of the regulation, and that the interactions of such factors can lead to unintended consequences. The proposed approach is proven useful as a tool for market monitoring in the short run from the perspective of a system operator, whose responsibility has become indirectly intertwined with emission trading regulation.

  2. The current state of Bayesian methods in medical product development: survey results and recommendations from the DIA Bayesian Scientific Working Group.

    PubMed

    Natanegara, Fanni; Neuenschwander, Beat; Seaman, John W; Kinnersley, Nelson; Heilmann, Cory R; Ohlssen, David; Rochester, George

    2014-01-01

    Bayesian applications in medical product development have recently gained popularity. Despite many advances in Bayesian methodology and computations, increase in application across the various areas of medical product development has been modest. The DIA Bayesian Scientific Working Group (BSWG), which includes representatives from industry, regulatory agencies, and academia, has adopted the vision to ensure Bayesian methods are well understood, accepted more broadly, and appropriately utilized to improve decision making and enhance patient outcomes. As Bayesian applications in medical product development are wide ranging, several sub-teams were formed to focus on various topics such as patient safety, non-inferiority, prior specification, comparative effectiveness, joint modeling, program-wide decision making, analytical tools, and education. The focus of this paper is on the recent effort of the BSWG Education sub-team to administer a Bayesian survey to statisticians across 17 organizations involved in medical product development. We summarize results of this survey, from which we provide recommendations on how to accelerate progress in Bayesian applications throughout medical product development. The survey results support findings from the literature and provide additional insight on regulatory acceptance of Bayesian methods and information on the need for a Bayesian infrastructure within an organization. The survey findings support the claim that only modest progress in areas of education and implementation has been made recently, despite substantial progress in Bayesian statistical research and software availability.

  3. On the Adequacy of Bayesian Evaluations of Categorization Models: Reply to Vanpaemel and Lee (2012)

    ERIC Educational Resources Information Center

    Wills, Andy J.; Pothos, Emmanuel M.

    2012-01-01

    Vanpaemel and Lee (2012) argued, and we agree, that the comparison of formal models can be facilitated by Bayesian methods. However, Bayesian methods neither precede nor supplant our proposals (Wills & Pothos, 2012), as Bayesian methods can be applied both to our proposals and to their polar opposites. Furthermore, the use of Bayesian methods to…

  4. Moving beyond qualitative evaluations of Bayesian models of cognition.

    PubMed

    Hemmer, Pernille; Tauber, Sean; Steyvers, Mark

    2015-06-01

    Bayesian models of cognition provide a powerful way to understand the behavior and goals of individuals from a computational point of view. Much of the focus in the Bayesian cognitive modeling approach has been on qualitative model evaluations, where predictions from the models are compared to data that is often averaged over individuals. In many cognitive tasks, however, there are pervasive individual differences. We introduce an approach to directly infer individual differences related to subjective mental representations within the framework of Bayesian models of cognition. In this approach, Bayesian data analysis methods are used to estimate cognitive parameters and motivate the inference process within a Bayesian cognitive model. We illustrate this integrative Bayesian approach on a model of memory. We apply the model to behavioral data from a memory experiment involving the recall of heights of people. A cross-validation analysis shows that the Bayesian memory model with inferred subjective priors predicts withheld data better than a Bayesian model where the priors are based on environmental statistics. In addition, the model with inferred priors at the individual subject level led to the best overall generalization performance, suggesting that individual differences are important to consider in Bayesian models of cognition.

  5. General multi-group macroscopic modeling for thermo-chemical non-equilibrium gas mixtures.

    PubMed

    Liu, Yen; Panesi, Marco; Sahai, Amal; Vinokur, Marcel

    2015-04-07

    This paper opens a new door to macroscopic modeling for thermal and chemical non-equilibrium. In a game-changing approach, we discard conventional theories and practices stemming from the separation of internal energy modes and the Landau-Teller relaxation equation. Instead, we solve the fundamental microscopic equations in their moment forms but seek only optimum representations for the microscopic state distribution function that provides converged and time accurate solutions for certain macroscopic quantities at all times. The modeling makes no ad hoc assumptions or simplifications at the microscopic level and includes all possible collisional and radiative processes; it therefore retains all non-equilibrium fluid physics. We formulate the thermal and chemical non-equilibrium macroscopic equations and rate coefficients in a coupled and unified fashion for gases undergoing completely general transitions. All collisional partners can have internal structures and can change their internal energy states after transitions. The model is based on the reconstruction of the state distribution function. The internal energy space is subdivided into multiple groups in order to better describe non-equilibrium state distributions. The logarithm of the distribution function in each group is expressed as a power series in internal energy based on the maximum entropy principle. The method of weighted residuals is applied to the microscopic equations to obtain macroscopic moment equations and rate coefficients succinctly to any order. The model's accuracy depends only on the assumed expression of the state distribution function and the number of groups used and can be self-checked for accuracy and convergence. We show that the macroscopic internal energy transfer, similar to mass and momentum transfers, occurs through nonlinear collisional processes and is not a simple relaxation process described by, e.g., the Landau-Teller equation. Unlike the classical vibrational energy relaxation model, which can only be applied to molecules, the new model is applicable to atoms, molecules, ions, and their mixtures. Numerical examples and model validations are carried out with two gas mixtures using the maximum entropy linear model: one mixture consists of nitrogen molecules undergoing internal excitation and dissociation and the other consists of nitrogen atoms undergoing internal excitation and ionization. Results show that the original hundreds to thousands of microscopic equations can be reduced to two macroscopic equations with almost perfect agreement for the total number density and total internal energy using only one or two groups. We also obtain good prediction of the microscopic state populations using 5-10 groups in the macroscopic equations.
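
    In the notation suggested by the abstract (the symbols below are illustrative, not the paper's), the reconstruction assigns each internal-energy group its own low-order polynomial for the log-distribution:

```latex
\ln f_{g}(\varepsilon) \;=\; \sum_{k=0}^{K} a_{g,k}\,\varepsilon^{k},
\qquad \varepsilon \in \text{group } g .
```

    Truncating at K = 1 gives the "maximum entropy linear model" used in the validation cases, with the group coefficients determined from the method-of-weighted-residuals moment equations.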

  6. Bayesian Validation of the Indirect Immunofluorescence Assay and Its Superiority to the Enzyme-Linked Immunosorbent Assay and the Complement Fixation Test for Detecting Antibodies against Coxiella burnetii in Goat Serum.

    PubMed

    Muleme, Michael; Stenos, John; Vincent, Gemma; Campbell, Angus; Graves, Stephen; Warner, Simone; Devlin, Joanne M; Nguyen, Chelsea; Stevenson, Mark A; Wilks, Colin R; Firestone, Simon M

    2016-06-01

    Although many studies have reported the indirect immunofluorescence assay (IFA) to be more sensitive in detection of antibodies to Coxiella burnetii than the complement fixation test (CFT), the diagnostic sensitivity (DSe) and diagnostic specificity (DSp) of the assay have not been previously established for use in ruminants. This study aimed to validate the IFA by describing the optimization, selection of cutoff titers, repeatability, and reliability as well as the DSe and DSp of the assay. Bayesian latent class analysis was used to estimate diagnostic specifications in comparison with the CFT and the enzyme-linked immunosorbent assay (ELISA). The optimal cutoff dilution for screening for IgG and IgM antibodies in goat serum using the IFA was estimated to be 1:160. The IFA had good repeatability (>96.9% for IgG, >78.0% for IgM), and there was almost perfect agreement (Cohen's kappa > 0.80 for IgG) between the readings reported by two technicians for samples tested for IgG antibodies. The IFA had a higher DSe (94.8%; 95% confidence interval [CI], 80.3, 99.6) for the detection of IgG antibodies against C. burnetii than the ELISA (70.1%; 95% CI, 52.7, 91.0) and the CFT (29.8%; 95% CI, 17.0, 44.8). All three tests were highly specific for goat IgG antibodies. The IFA also had a higher DSe (88.8%; 95% CI, 58.2, 99.5) for detection of IgM antibodies than the ELISA (71.7%; 95% CI, 46.3, 92.8). These results underscore the better suitability of the IFA than of the CFT and ELISA for detection of IgG and IgM antibodies in goat serum and possibly in serum from other ruminants.
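
    Bayesian analyses of this kind treat prevalence, sensitivity and specificity as jointly unknown. A much-reduced single-test sketch using random-walk Metropolis; the paper's multi-test, no-gold-standard latent class model is more involved, and all counts and priors below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical data: k apparent positives among n animals tested with one
# imperfect assay; Beta priors encode external knowledge of Se and Sp.
k, n = 120, 400
prior = {"prev": (1, 1), "se": (95, 5), "sp": (90, 10)}

def log_post(theta):
    prev, se, sp = theta
    if not all(0.0 < v < 1.0 for v in theta):
        return -np.inf
    p_apparent = prev * se + (1.0 - prev) * (1.0 - sp)
    lp = stats.binom.logpmf(k, n, p_apparent)
    lp += stats.beta.logpdf(prev, *prior["prev"])
    lp += stats.beta.logpdf(se, *prior["se"])
    lp += stats.beta.logpdf(sp, *prior["sp"])
    return lp

# Random-walk Metropolis over (prevalence, sensitivity, specificity).
theta = np.array([0.3, 0.9, 0.9])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta[0])

burned = np.array(samples[5000:])
print(f"posterior prevalence: {burned.mean():.3f} "
      f"({np.percentile(burned, 2.5):.3f}, {np.percentile(burned, 97.5):.3f})")
```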

  7. Using automatically extracted information from mammography reports for decision-support

    PubMed Central

    Bozkurt, Selen; Gimenez, Francisco; Burnside, Elizabeth S.; Gulkesen, Kemal H.; Rubin, Daniel L.

    2016-01-01

    Objective To evaluate a system we developed that connects natural language processing (NLP) for information extraction from narrative text mammography reports with a Bayesian network for decision-support about breast cancer diagnosis. The ultimate goal of this system is to provide decision support as part of the workflow of producing the radiology report. Materials and methods We built a system that uses an NLP information extraction system (which extracts BI-RADS descriptors and clinical information from mammography reports) to provide the necessary inputs to a Bayesian network (BN) decision support system (DSS) that estimates lesion malignancy from BI-RADS descriptors. We used this integrated system to predict diagnosis of breast cancer from radiology text reports and evaluated it with a reference standard of 300 mammography reports. We collected two different outputs from the DSS: (1) the probability of malignancy and (2) the BI-RADS final assessment category. Since NLP may produce imperfect inputs to the DSS, we compared the difference between using perfect (“reference standard”) structured inputs to the DSS (“RS-DSS”) vs NLP-derived inputs (“NLP-DSS”) on the output of the DSS using the concordance correlation coefficient. We measured the classification accuracy of the BI-RADS final assessment category when using NLP-DSS, compared with the ground truth category established by the radiologist. Results The NLP-DSS and RS-DSS had closely matched probabilities, with a mean paired difference of 0.004 ± 0.025. The concordance correlation of these paired measures was 0.95. The accuracy of the NLP-DSS in predicting the correct BI-RADS final assessment category was 97.58%. Conclusion The accuracy of the information extracted from mammography reports using the NLP system was sufficient to provide accurate DSS results. We believe our system could ultimately reduce the variation in practice in mammography related to assessment of malignant lesions and improve management decisions. PMID:27388877
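
    The agreement statistic used here, Lin's concordance correlation coefficient, is short to compute. A sketch with hypothetical paired probabilities standing in for the RS-DSS and NLP-DSS outputs:

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired malignancy probabilities from the two pipelines
p_rs  = [0.02, 0.10, 0.35, 0.60, 0.85, 0.97]
p_nlp = [0.03, 0.09, 0.37, 0.58, 0.88, 0.95]
print(f"CCC = {concordance_correlation(p_rs, p_nlp):.3f}")
```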

  8. Bayesian Validation of the Indirect Immunofluorescence Assay and Its Superiority to the Enzyme-Linked Immunosorbent Assay and the Complement Fixation Test for Detecting Antibodies against Coxiella burnetii in Goat Serum

    PubMed Central

    Stenos, John; Vincent, Gemma; Campbell, Angus; Graves, Stephen; Warner, Simone; Devlin, Joanne M.; Nguyen, Chelsea; Stevenson, Mark A.; Wilks, Colin R.; Firestone, Simon M.

    2016-01-01

    Although many studies have reported the indirect immunofluorescence assay (IFA) to be more sensitive in detection of antibodies to Coxiella burnetii than the complement fixation test (CFT), the diagnostic sensitivity (DSe) and diagnostic specificity (DSp) of the assay have not been previously established for use in ruminants. This study aimed to validate the IFA by describing the optimization, selection of cutoff titers, repeatability, and reliability as well as the DSe and DSp of the assay. Bayesian latent class analysis was used to estimate diagnostic specifications in comparison with the CFT and the enzyme-linked immunosorbent assay (ELISA). The optimal cutoff dilution for screening for IgG and IgM antibodies in goat serum using the IFA was estimated to be 1:160. The IFA had good repeatability (>96.9% for IgG, >78.0% for IgM), and there was almost perfect agreement (Cohen's kappa > 0.80 for IgG) between the readings reported by two technicians for samples tested for IgG antibodies. The IFA had a higher DSe (94.8%; 95% confidence interval [CI], 80.3, 99.6) for the detection of IgG antibodies against C. burnetii than the ELISA (70.1%; 95% CI, 52.7, 91.0) and the CFT (29.8%; 95% CI, 17.0, 44.8). All three tests were highly specific for goat IgG antibodies. The IFA also had a higher DSe (88.8%; 95% CI, 58.2, 99.5) for detection of IgM antibodies than the ELISA (71.7%; 95% CI, 46.3, 92.8). These results underscore the better suitability of the IFA than of the CFT and ELISA for detection of IgG and IgM antibodies in goat serum and possibly in serum from other ruminants. PMID:27122484

  9. Electronic and transport properties of zigzag carbon nanotubes with the presence of periodical antidot and boron/nitride doping defects

    NASA Astrophysics Data System (ADS)

    Zoghi, Milad; Yazdanpanah Goharrizi, Arash; Mirjalili, Seyed Mohammad; Kabir, M. Z.

    2018-06-01

    The electronic and transport properties of carbon nanotubes (CNTs) are affected by the presence of physical or chemical defects in their structures. In this paper, we present novel platforms of defected zigzag CNTs (Z-CNTs) in which two topologies of antidot and boron/nitride (BN) doping defects are periodically imposed throughout the length of perfect tubes. Using the tight-binding model and the non-equilibrium Green’s function method, it is found that the quantum confinement of Z-CNTs is modified by the presence of such defects. This new quantum confinement results in the appearance of mini bands and mini gaps in the transmission spectra, as well as a modified band structure and band gap size. The modified band gap can be either larger or smaller than the intrinsic band gap of a perfect tube, depending on the category of Z-CNT. In-depth analysis shows that the size of the modified band gap is a function of several factors: the tube radius (D_r), the distance between adjacent defects (d_d), the defect topology, and the kind of defect (antidot or BN doping). Furthermore, taking advantage of the tunable band gap of Z-CNTs in the presence of periodical defects, new platforms of defect-based Z-CNT resonant tunneling diodes (RTDs) are proposed for the first time. Our calculations demonstrate the appearance of resonances in the transmission spectra and negative differential resistance in the I-V characteristics of such RTD platforms.
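
    The tight-binding-plus-NEGF machinery can be illustrated on the simplest possible device: a perfect 1D chain between two semi-infinite leads, where the transmission should be 1 inside the band and 0 outside. Parameters are hypothetical and the geometry is a toy stand-in for a Z-CNT:

```python
import numpy as np

t = 1.0                  # hopping energy (eV), hypothetical
N = 6                    # number of device sites
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))   # perfect-chain Hamiltonian

def lead_self_energy(E, t):
    """Retarded surface self-energy of a semi-infinite 1D chain."""
    s = np.sqrt(E * E - 4.0 * t * t + 0j)      # principal branch
    g1, g2 = (E - s) / (2 * t * t), (E + s) / (2 * t * t)
    g = g1 if g1.imag < 0 else min((g1, g2), key=abs)  # damped/decaying root
    return t * t * g

for E in (-1.5, 0.0, 1.5, 2.5):
    sigma = lead_self_energy(E, t)
    Sig_L = np.zeros((N, N), complex); Sig_L[0, 0] = sigma
    Sig_R = np.zeros((N, N), complex); Sig_R[-1, -1] = sigma
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - Sig_L - Sig_R)
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    T = np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real
    print(f"E = {E:+.1f} eV  ->  T(E) = {T:.3f}")   # ~1 in band, ~0 outside
```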

  10. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  11. Taking Costs and Diagnostic Test Accuracy into Account When Designing Prevalence Studies: An Application to Childhood Tuberculosis Prevalence.

    PubMed

    Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence

    2017-11-01

    When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
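
    Of the three criteria, the average coverage criterion is the easiest to sketch: pick the smallest n whose posterior intervals of fixed length, averaged over the prior predictive distribution of the data, reach the target coverage. The sketch below assumes a single perfect test and a conjugate Beta prior (the paper's imperfect-test setting adds latent-class layers; all numbers are hypothetical, and scipy.stats.betabinom requires SciPy >= 1.4):

```python
import numpy as np
from scipy import stats

# Prior Beta(a, b) on prevalence; interval of fixed length centered at the
# posterior mean (a simple stand-in for an HPD interval).
a, b, length, target = 2.0, 8.0, 0.10, 0.95

def average_coverage(n):
    ks = np.arange(n + 1)
    pred = stats.betabinom.pmf(ks, n, a, b)     # prior predictive of k
    post = stats.beta(a + ks, b + n - ks)
    center = post.mean()
    cov = post.cdf(center + length / 2) - post.cdf(center - length / 2)
    return float(np.sum(pred * cov))

n = 10
while average_coverage(n) < target:             # ACC: smallest adequate n
    n += 10
print(f"ACC sample size ~ {n} (average coverage {average_coverage(n):.3f})")
```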

  12. Accounting for imperfect detection of groups and individuals when estimating abundance.

    PubMed

    Clement, Matthew J; Converse, Sarah J; Royle, J Andrew

    2017-09-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
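
    The N-mixture ingredient can be seen in isolation: two independent under-counts of one detected group, binomial given the true size N, with a Poisson prior on N. A minimal sketch with hypothetical counts and detection probability:

```python
import numpy as np
from scipy import stats

def group_size_posterior(c1, c2, p, lam, n_max=60):
    """Posterior over the true size N of one group given two independent
    under-counts c1, c2 (individuals missed with probability 1 - p) and
    a Poisson(lam) prior on N."""
    Ns = np.arange(max(c1, c2), n_max + 1)      # N cannot be below a count
    logw = (stats.poisson.logpmf(Ns, lam)
            + stats.binom.logpmf(c1, Ns, p)
            + stats.binom.logpmf(c2, Ns, p))
    w = np.exp(logw - logw.max())
    return Ns, w / w.sum()

Ns, post = group_size_posterior(c1=8, c2=10, p=0.8, lam=12.0)
print(f"posterior mean group size ~ {np.sum(Ns * post):.1f}")
```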

  13. Accounting for imperfect detection of groups and individuals when estimating abundance

    USGS Publications Warehouse

    Clement, Matthew J.; Converse, Sarah J.; Royle, J. Andrew

    2017-01-01

    If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.

  14. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    PubMed

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
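
    A magnitude change point model in the BMCPM spirit can be reduced to a single shift in the mean of a Gaussian series, with the segment means integrated out under conjugate priors. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy signal whose mean shifts once at t = 60 (synthetic data).
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
T = len(y)

def log_marginal(seg, v=10.0):
    """Marginal likelihood of a segment: y_i | mu ~ N(mu, 1), mu ~ N(0, v)."""
    m, s1, s2 = len(seg), seg.sum(), (seg ** 2).sum()
    return (-0.5 * m * np.log(2 * np.pi) - 0.5 * np.log(1 + m * v)
            - 0.5 * (s2 - v * s1 ** 2 / (1 + m * v)))

taus = np.arange(5, T - 5)                   # candidate change points
logp = np.array([log_marginal(y[:t]) + log_marginal(y[t:]) for t in taus])
post = np.exp(logp - logp.max())
post /= post.sum()                           # posterior over the change point
print(f"posterior mode of change point: t = {taus[np.argmax(post)]} (true 60)")
```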

  15. Influence of alkane and perfluorocarbon vapors on adsorbed surface layers and spread insoluble monolayers of surfactants, proteins and lipids.

    PubMed

    Fainerman, V B; Aksenenko, E V; Miller, R

    2017-06-01

    The influence of hexane vapor in the air atmosphere on the surface tension of water and of solutions of C10EO8, CnTAB and proteins is presented. For dry air, a fast and strong decrease of the surface tension of water was observed. In humid air, the process is slower and the surface tension higher. There are differences between the results obtained by the maximum bubble pressure, pendant drop and emerging bubble methods, which are discussed in terms of depletion and initial surface load. The surface tensions of aqueous solutions of β-casein (BCS), β-lactoglobulin (BLG) and human serum albumin (HSA) at the interfaces with air and with air saturated by hexane vapor were measured. The results indicate that the equilibrium surface tension in the hexane vapor atmosphere is considerably lower (by 13-20 mN/m) than the values at the interface with pure air. A reorientation model is proposed, assuming several states of adsorbed molecules with different molar area values. The newly developed theoretical model is used to describe the effect of alkane vapor in the gas phase on the surface tension. This model assumes that the first layer is composed of surfactant (or protein) molecules mixed with alkane, and the second layer is formed by alkane molecules only. Processing of the experimental data for the equilibrium surface tension of the C10EO8 and BCS solutions results in a perfect agreement between the observed and calculated values. The co-adsorption mechanism of dipalmitoyl phosphatidyl choline (DPPC) and the fluorocarbon molecules leads to remarkable differences in the surface pressure term of cohesion Π_coh. This in turn leads to a very efficient fluidization of the monolayer. It was found that the adsorption equilibrium constant for dioctanoyl phosphatidyl choline is increased in the presence of perfluorohexane, and that the intermolecular interaction of the components is strong.

  16. On gamesmen and fair men: explaining fairness in non-cooperative bargaining games.

    PubMed

    Suleiman, Ramzi

    2018-02-01

    Experiments on bargaining games have repeatedly shown that subjects fail to use backward induction, and that they only rarely make demands in accordance with the subgame perfect equilibrium. In a recent paper, we proposed an alternative model, termed 'economic harmony' in which we modified the individual's utility by defining it as a function of the ratio between the actual and aspired pay-offs. We also abandoned the notion of equilibrium, in favour of a new notion of 'harmony', defined as the intersection of strategies, at which all players are equally satisfied. We showed that the proposed model yields excellent predictions of offers in the ultimatum game, and requests in the sequential common pool resource dilemma game. Strikingly, the predicted demand in the ultimatum game is equal to the famous Golden Ratio (approx. 0.62 of the entire pie). The same prediction was recently derived independently by Schuster (Schuster 2017. Sci. Rep. 7 , 5642). In this paper, we extend the solution to bargaining games with alternating offers. We show that the derived solution predicts the opening demands reported in several experiments, on games with equal and unequal discount factors and game horizons. Our solution also predicts several unexplained findings, including the puzzling 'disadvantageous counter-offers', and the insensitivity of opening demands to variations in the players' discount factors, and game horizon. Strikingly, we find that the predicted opening demand in the alternating offers game is also equal to the Golden Ratio.
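
    For readers unfamiliar with the constant: the demand x of roughly 0.62 referred to here is the positive root of a simple quadratic, so the responder's share 1 - x equals x^2, the square of the proposer's demand. The game-theoretic argument for why harmony selects this point is in the paper itself; the arithmetic is just:

```latex
x^{2} + x - 1 = 0 \quad\Longrightarrow\quad
x = \frac{\sqrt{5}-1}{2} \approx 0.618,
\qquad 1 - x = x^{2} \approx 0.382 .
```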

  17. On gamesmen and fair men: explaining fairness in non-cooperative bargaining games

    PubMed Central

    2018-01-01

    Experiments on bargaining games have repeatedly shown that subjects fail to use backward induction, and that they only rarely make demands in accordance with the subgame perfect equilibrium. In a recent paper, we proposed an alternative model, termed ‘economic harmony’ in which we modified the individual's utility by defining it as a function of the ratio between the actual and aspired pay-offs. We also abandoned the notion of equilibrium, in favour of a new notion of ‘harmony’, defined as the intersection of strategies, at which all players are equally satisfied. We showed that the proposed model yields excellent predictions of offers in the ultimatum game, and requests in the sequential common pool resource dilemma game. Strikingly, the predicted demand in the ultimatum game is equal to the famous Golden Ratio (approx. 0.62 of the entire pie). The same prediction was recently derived independently by Schuster (Schuster 2017. Sci. Rep. 7, 5642). In this paper, we extend the solution to bargaining games with alternating offers. We show that the derived solution predicts the opening demands reported in several experiments, on games with equal and unequal discount factors and game horizons. Our solution also predicts several unexplained findings, including the puzzling ‘disadvantageous counter-offers’, and the insensitivity of opening demands to variations in the players' discount factors, and game horizon. Strikingly, we find that the predicted opening demand in the alternating offers game is also equal to the Golden Ratio. PMID:29515877

  18. Local conditions separating expansion from collapse in spherically symmetric models with anisotropic pressures

    NASA Astrophysics Data System (ADS)

    Mimoso, José P.; Le Delliou, Morgan; Mena, Filipe C.

    2013-08-01

    We investigate spherically symmetric spacetimes with an anisotropic fluid and discuss the existence and stability of a separating shell dividing expanding and collapsing regions. We resort to a 3+1 splitting and obtain gauge invariant conditions relating intrinsic spacetime quantities to properties of the matter source. We find that the separating shell is defined by a generalization of the Tolman-Oppenheimer-Volkoff equilibrium condition. The latter establishes a balance between the pressure gradients, both isotropic and anisotropic, and the strength of the fields induced by the Misner-Sharp mass inside the separating shell and by the pressure fluxes. This defines a local equilibrium condition, but conveys also a nonlocal character given the definition of the Misner-Sharp mass. By the same token, it is also a generalized thermodynamical equation of state as usually interpreted for the perfect fluid case, which now has the novel feature of involving both the isotropic and the anisotropic stresses. We have cast the governing equations in terms of local, gauge invariant quantities that are revealing of the role played by the anisotropic pressures and inhomogeneous electric part of the Weyl tensor. We analyze a particular solution with dust and radiation that provides an illustration of our conditions. In addition, our gauge invariant formalism not only encompasses the cracking process from Herrera and co-workers but also reveals transparently the interplay and importance of the shear and of the anisotropic stresses.
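
    For reference, the classical Tolman-Oppenheimer-Volkoff condition that the separating-shell condition generalizes reads, for a static perfect fluid in units G = c = 1,

```latex
\frac{dp}{dr} \;=\; -\,\frac{(\rho + p)\left(m(r) + 4\pi r^{3} p\right)}{r\left(r - 2m(r)\right)},
\qquad m(r) = \int_{0}^{r} 4\pi r'^{2}\,\rho(r')\,dr' ,
```

    with m(r) the mass within radius r; the paper's generalization adds the anisotropic-stress and pressure-flux terms described in the abstract.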

  19. Description of intraoral pressures on sub-palatal space in young adult patients with normal occlusion.

    PubMed

    Fuentes, Ramón; Engelke, Wilfried; Flores, Tania; Navarro, Pablo; Borie, Eduardo; Curiqueo, Aldo; Salamanca, Carlos

    2015-01-01

    Under normal conditions, the oral cavity presents a perfect system of equilibrium between the teeth, soft tissues and tongue. The equilibrium of the soft tissues forms a closed capsular matrix, generating pressure differences with respect to the atmospheric environment; this difference is known as intraoral pressure. Negative intraoral pressure is fundamental to the stabilization of the soft palate and tongue, reducing the neuromuscular activity needed to keep the respiratory tract permeable. Thus, the aim of this study was to describe the variations of intraoral pressure in the sub-palatal space (SPS) under different physiological conditions and biofunctional phases. A case series was conducted with 20 individuals aged between 18 and 25. The intraoral pressures were measured through a system of cannulae connected to a digital pressure meter in the SPS during seven biofunctional phases. Descriptive statistics were used based on the mean and standard deviation. The data showed pressure variations under physiological conditions, reaching 65 mbar as the intraoral peak in forced inspiration. In the swallowing phase, peaks reached -91.9 mbar. No deviations from atmospheric pressure were recorded with the mouth open and semi-open. The data obtained during the swallowing and forced inspiration phases indicated forced lingual activity. In the swallowing phase, the adequate position of the tongue creates negative intraoral pressure, which represents a fundamental mechanism for the physical stabilization of the soft palate. This information could contribute to subsequent research into the treatment of primary roncopathies.

  20. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach to the uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
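
    The AR(1)-plus-Normal likelihood shared by the three models is compact enough to state in code. A minimal sketch, with the exact stationary term for the first residual (the residual series below is synthetic):

```python
import numpy as np

def ar1_normal_loglik(residuals, phi, sigma):
    """Log-likelihood of model residuals under an AR(1)-plus-Normal
    error model: e_t = phi * e_{t-1} + N(0, sigma^2) innovations."""
    e = np.asarray(residuals, float)
    # exact stationary distribution for the first residual
    ll = -0.5 * (np.log(2 * np.pi * sigma**2 / (1 - phi**2))
                 + e[0]**2 * (1 - phi**2) / sigma**2)
    innov = e[1:] - phi * e[:-1]          # one-step-ahead innovations
    ll += np.sum(-0.5 * (np.log(2 * np.pi * sigma**2) + innov**2 / sigma**2))
    return ll

# Hypothetical residual series from a daily hydrological model run
rng = np.random.default_rng(0)
res = rng.normal(0.0, 0.5, 365)
print(f"log-likelihood: {ar1_normal_loglik(res, phi=0.6, sigma=0.5):.1f}")
```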

  1. Piloted Simulator Evaluation of Maneuvering Envelope Information for Flight Crew Awareness

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan; Acosta, Diana; Kaneshige, John; Shish, Kimberlee; Martin, Lynne

    2015-01-01

    The implementation and evaluation of an efficient method for estimating safe aircraft maneuvering envelopes are discussed. A Bayesian approach is used to produce a deterministic algorithm for estimating aerodynamic system parameters from existing noisy sensor measurements, which are then used to estimate the trim envelope through efficient high-fidelity model-based computations of attainable equilibrium sets. The safe maneuverability limitations are extended beyond the trim envelope through a robust reachability analysis derived from an optimal control formulation. The trim and maneuvering envelope limits are then conveyed to pilots through three axes on the primary flight display. To evaluate the new display features, commercial airline crews flew multiple challenging approach and landing scenarios in the full motion Advanced Concepts Flight Simulator at NASA Ames Research Center, as part of a larger research initiative to investigate the impact on the energy state awareness of the crew. Results show that the additional display features have the potential to significantly improve situational awareness of the flight crew.

  2. Heightened motor and sensory (mirror-touch) referral induced by nerve block or topical anesthetic.

    PubMed

    Case, Laura K; Gosavi, Radhika; Ramachandran, Vilayanur S

    2013-08-01

    Mirror neurons allow us to covertly simulate the sensation and movement of others. If mirror neurons are sensory and motor neurons, why do we not actually feel this simulation, like "mirror-touch synesthetes" do? Might afferent sensation normally inhibit mirror representations from reaching consciousness? We and others have reported heightened sensory referral to phantom limbs and temporarily anesthetized arms. These patients, however, had experienced illness or injury of the deafferented limb. In the current study we observe heightened sensory and motor referral to the face after unilateral nerve block for routine dental procedures. We also obtain double-blind, quantitative evidence of heightened sensory referral in healthy participants completing a mirror-touch confusion task after topical anesthetic cream is applied. We suggest that sensory and motor feedback exist in dynamic equilibrium with mirror representations; as feedback is reduced, the brain draws more upon visual information to determine, perhaps in a Bayesian manner, what to feel. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Bayesian data analysis in population ecology: motivations, methods, and benefits

    USGS Publications Warehouse

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  4. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined from the predictive probability of a successful trial outcome. Given a beta prior and a stage-I sample size, we derive Bayesian Type I and Type II error rates based on the marginal distribution of the stage-I responses. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
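
    As a concrete illustration of the predictive-probability calculation described above, the sketch below computes the probability of eventual trial success after stage I under a beta prior, using the standard beta-binomial predictive distribution. The sample sizes, response threshold, and prior parameters are hypothetical, not the paper's design values.

```python
from scipy.stats import betabinom

def predictive_probability(x1, n1, n2, r, a=1.0, b=1.0):
    """Probability that total responses reach r by the end of stage II,
    given x1 responses among n1 stage-I patients and a Beta(a, b) prior."""
    post_a, post_b = a + x1, b + n1 - x1      # posterior after stage I
    stage2 = betabinom(n2, post_a, post_b)    # beta-binomial predictive for stage II
    needed = max(r - x1, 0)
    return float(sum(stage2.pmf(k) for k in range(needed, n2 + 1)))

# Hypothetical design: 15 + 25 patients, success if >= 12 total responses.
print(predictive_probability(x1=5, n1=15, n2=25, r=12))
```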

  5. Using SPM 12’s Second-Level Bayesian Inference Procedure for fMRI Analysis: Practical Guidelines for End Users

    PubMed Central

    Han, Hyemin; Park, Joonsuk

    2018-01-01

    Recent debates about the conventional significance threshold used in the fields of neuroscience and psychology, namely P < 0.05, have spurred researchers to consider alternative ways to analyze fMRI data. A group of methodologists and statisticians have considered Bayesian inference as a candidate methodology. However, few previous studies have attempted to provide end users of fMRI analysis tools, such as SPM 12, with practical guidelines on how to conduct Bayesian inference. In the present study, we aim to demonstrate how to utilize Bayesian inference, Bayesian second-level inference in particular, as implemented in SPM 12, by analyzing fMRI data available to the public via NeuroVault. In addition, to help end users understand how Bayesian inference actually works in SPM 12, we examine outcomes from Bayesian second-level inference implemented in SPM 12 by comparing them with those from classical second-level inference. Finally, we provide practical guidelines on how to set the parameters for Bayesian inference and how to interpret the results, such as Bayes factors, from the inference. We also discuss the practical and philosophical benefits of Bayesian inference and directions for future research. PMID:29456498
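
    A minimal, non-SPM illustration of the Bayes-factor logic discussed here can be given with the BIC approximation (BF10 ≈ exp((BIC0 - BIC1)/2), Wagenmakers' approximation); the synthetic data and the two nested regression models below are assumptions for demonstration only, not what SPM 12 computes internally.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)   # synthetic "activation" data

def bic(residuals, k):
    """Gaussian BIC up to an additive constant shared by both models."""
    n = len(residuals)
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

# M0: intercept only.  M1: intercept + slope.
res0 = y - y.mean()
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
res1 = y - X @ beta

bf10 = np.exp((bic(res0, 1) - bic(res1, 2)) / 2)  # evidence for M1 over M0
print(f"BF10 ~ {bf10:.2f}")
```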

  6. An introduction to Bayesian statistics in health psychology.

    PubMed

    Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske

    2017-09-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
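
    One of the article's recommendations, checking how the posterior shifts under different priors, can be sketched with a conjugate normal model for a mean change score. The blood-pressure numbers and the three priors below are hypothetical illustrations, not the article's data.

```python
import numpy as np

def posterior_normal(xbar, n, sigma, mu0, tau0):
    """Conjugate update for a normal mean with known sampling SD sigma:
    prior mu ~ N(mu0, tau0^2); returns posterior mean and SD."""
    prec = 1.0 / tau0**2 + n / sigma**2
    mean = (mu0 / tau0**2 + n * xbar / sigma**2) / prec
    return mean, np.sqrt(1.0 / prec)

# Hypothetical: mean systolic change of 8 mmHg (SD 12) in 30 participants.
for mu0, tau0, label in [(0, 100, "diffuse"), (0, 5, "skeptical"), (10, 3, "informed")]:
    m, s = posterior_normal(xbar=8.0, n=30, sigma=12.0, mu0=mu0, tau0=tau0)
    print(f"{label:9s} prior -> posterior mean {m:5.2f}, SD {s:4.2f}")
```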

  7. Prior approval: the growth of Bayesian methods in psychology.

    PubMed

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  8. Just Perfect, Part 2

    ERIC Educational Resources Information Center

    Scott, Paul

    2007-01-01

    In "Just Perfect: Part 1," the author defined a perfect number N to be one for which the sum of the divisors d (1 less than or equal to d less than N) is N. He gave the first few perfect numbers, starting with those known by the early Greeks. In this article, the author provides an extended list of perfect numbers, with some comments about their…

  9. A local approach for focussed Bayesian fusion

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen

    2009-04-01

    Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion, which is separated from fixed modeling assumptions. Using the small world formalism, we argue why this procedure conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with a high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information which we perform to build local models. In this paper, we prove the validity of this procedure using information-theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner, in which several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. Software agents are well suited for the practical realization of distributed local Bayesian fusion. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life, and we show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.

  10. A Bayesian Nonparametric Approach to Test Equating

    ERIC Educational Resources Information Center

    Karabatsos, George; Walker, Stephen G.

    2009-01-01

    A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…

  11. Bayesian Model Averaging for Propensity Score Analysis

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  12. Generating perfect fluid spheres in general relativity

    NASA Astrophysics Data System (ADS)

    Boonserm, Petarpa; Visser, Matt; Weinfurtner, Silke

    2005-06-01

    Ever since Karl Schwarzschild’s 1916 discovery of the spacetime geometry describing the interior of a particular idealized general relativistic star—a static spherically symmetric blob of fluid with position-independent density—the general relativity community has continued to devote considerable time and energy to understanding the general-relativistic static perfect fluid sphere. Over the last 90 years a tangle of specific perfect fluid spheres has been discovered, with most of these specific examples seemingly independent from each other. To bring some order to this collection, in this article we develop several new transformation theorems that map perfect fluid spheres into perfect fluid spheres. These transformation theorems sometimes lead to unexpected connections between previously known perfect fluid spheres, sometimes lead to new previously unknown perfect fluid spheres, and in general can be used to develop a systematic way of classifying the set of all perfect fluid spheres.

  13. Semisupervised learning using Bayesian interpretation: application to LS-SVM.

    PubMed

    Adankon, Mathias M; Cheriet, Mohamed; Biem, Alain

    2011-04-01

    Bayesian reasoning provides an ideal basis for representing and manipulating uncertain knowledge, with the result that many interesting algorithms in machine learning are based on Bayesian inference. In this paper, we use the Bayesian approach with one and two levels of inference to model the semisupervised learning problem, and apply it to the successful kernel classifier support vector machine (SVM) and its variant least-squares SVM (LS-SVM). Taking advantage of the Bayesian interpretation of the LS-SVM, we develop a semisupervised learning algorithm for Bayesian LS-SVM using our approach based on two levels of inference. Experimental results on both artificial and real pattern recognition problems show the utility of our method.
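
    For readers unfamiliar with the LS-SVM that the method builds on: in its standard supervised regression form, training reduces to a single linear KKT system. The sketch below implements that baseline only (not the paper's semisupervised Bayesian extension); the RBF kernel width and regularization constant are arbitrary choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma_k=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

def lssvm_fit(X, y, gam=10.0):
    """Solve the LS-SVM regression KKT system:
       [[0, 1^T], [1, K + I/gam]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X) + np.eye(n) / gam
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]           # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new):
    return rbf_kernel(X_new, X_train) @ alpha + b

# Toy 1-D regression check.
rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 60)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, np.array([[0.0], [1.5]])))
```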

  14. Proto-jet configurations in RADs orbiting a Kerr SMBH: symmetries and limiting surfaces

    NASA Astrophysics Data System (ADS)

    Pugliese, D.; Stuchlík, Z.

    2018-05-01

    Ringed accretion disks (RADs) are agglomerations of perfect-fluid tori orbiting around a single central attractor that could arise during complex matter inflows in active galactic nuclei. We focus our analysis on axi-symmetric accretion tori orbiting in the equatorial plane of a supermassive Kerr black hole; equilibrium configurations, possible instabilities, and evolutionary sequences of RADs were discussed in our previous works. In the present work we discuss special instabilities related to open equipotential surfaces governing the material funnels emerging at various regions of the RADs, located between two or more individual toroidal configurations of the agglomerate. These open structures could be associated with proto-jets. Boundary limiting surfaces are highlighted, connecting the emergence of the jet-like instabilities with the black hole dimensionless spin. These instabilities are observationally significant for active galactic nuclei, being related to outflows of matter in jets emerging from more than one torus of RADs orbiting around supermassive black holes.

  15. Quantized edge modes in atomic-scale point contacts in graphene

    NASA Astrophysics Data System (ADS)

    Kinikar, Amogh; Phanindra Sai, T.; Bhattacharyya, Semonti; Agarwala, Adhip; Biswas, Tathagata; Sarker, Sanjoy K.; Krishnamurthy, H. R.; Jain, Manish; Shenoy, Vijay B.; Ghosh, Arindam

    2017-07-01

    The zigzag edges of single- or few-layer graphene are perfect one-dimensional conductors owing to a set of gapless states that are topologically protected against backscattering. Direct experimental evidence of these states has been limited so far to their local thermodynamic and magnetic properties, determined by the competing effects of edge topology and electron-electron interaction. However, experimental signatures of edge-bound electrical conduction have remained elusive, primarily due to the lack of graphitic nanostructures with low structural and/or chemical edge disorder. Here, we report the experimental detection of edge-mode electrical transport in suspended atomic-scale constrictions of single and multilayer graphene created during nanomechanical exfoliation of highly oriented pyrolytic graphite. The edge-mode transport leads to the observed quantization of conductance close to multiples of G0 = 2e²/h. At the same time, conductance plateaux at G0/2 and a split zero-bias anomaly in non-equilibrium transport suggest conduction via spin-polarized states in the presence of an electron-electron interaction.

  16. A quantum Otto engine with finite heat baths: energy, correlations, and degradation

    NASA Astrophysics Data System (ADS)

    Pozas-Kerstjens, Alejandro; Brown, Eric G.; Hovhannisyan, Karen V.

    2018-04-01

    We study a driven harmonic oscillator operating an Otto cycle by strongly interacting with two thermal baths of finite size. Using the tools of Gaussian quantum mechanics, we directly simulate the dynamics of the engine as a whole, without the need to make any approximations. This allows us to understand the non-equilibrium thermodynamics of the engine not only from the perspective of the working medium, but also as it is seen from the thermal baths’ standpoint. For sufficiently large baths, our engine is capable of running a number of perfect cycles, delivering finite power while operating very close to maximal efficiency. Thereafter, having traversed the baths, the perturbations created by the interaction abruptly deteriorate the engine’s performance. We additionally study the correlations generated in the system, and, in particular, we find a direct connection between the build up of bath–bath correlations and the degradation of the engine’s performance over the course of many cycles.

  17. Boundary Layer Transition Correlations and Aeroheating Predictions for Mars Smart Lander

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Liechty, Derek S.

    2002-01-01

    Laminar and turbulent perfect-gas air, Navier-Stokes computations have been performed for a proposed Mars Smart Lander entry vehicle at Mach 6 over a free stream Reynolds number range of 6.9 x 10(exp 6)/m to 2.4 x 10(exp 7)/m (2.1 x 10(exp 6)/ft to 7.3 x 10(exp 6)/ft) for angles-of-attack of 0-deg, 11-deg, 16-deg, and 20-deg, and comparisons were made to wind tunnel heating data obtained at the same conditions. Boundary layer edge properties were extracted from the solutions and used to correlate experimental data on the effects of heat-shield penetrations (bolt-holes where the entry vehicle would be attached to the propulsion module during transit to Mars) on boundary-layer transition. A non-equilibrium Martian-atmosphere computation was performed for the peak heating point on the entry trajectory in order to determine if the penetrations would produce boundary-layer transition by using this correlation.

  18. Boundary Layer Transition Correlations and Aeroheating Predictions for Mars Smart Lander

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Liechty, Derek S.

    2002-01-01

    Laminar and turbulent perfect-gas air, Navier-Stokes computations have been performed for a proposed Mars Smart Lander entry vehicle at Mach 6 over a free stream Reynolds number range of 6.9 x 10(exp 6)/m to 2.4 x 10(exp 7)/m (2.1 x 10(exp 6)/ft to 7.3 x 10(exp 6)/ft) for angles-of-attack of 0-deg, 11-deg, 16-deg, and 20-deg, and comparisons were made to wind tunnel heating data obtained at the same conditions. Boundary layer edge properties were extracted from the solutions and used to correlate experimental data on the effects of heat-shield penetrations (bolt-holes where the entry vehicle would be attached to the propulsion module during transit to Mars) on boundary-layer transition. A non-equilibrium Martian-atmosphere computation was performed for the peak heating point on the entry trajectory in order to determine if the penetrations would produce boundary-layer transition by using this correlation.

  19. Bipolar magnetic semiconductor in silicene nanoribbons

    NASA Astrophysics Data System (ADS)

    Farghadan, Rouhollah

    2017-08-01

    A theoretical study is presented on the generation of spin polarization in silicene nanoribbons using the single-band tight-binding approximation and the non-equilibrium Green's function formalism. We focused on the effect of electric and exchange magnetic fields on the spin-filter capabilities of zigzag-edge silicene nanoribbons in the presence of the intrinsic spin-orbit interaction. The results show that a robust bipolar magnetic semiconductor with controllable spin-flip and spin-conserved gaps can be obtained when exchange magnetic and electric field strengths are both larger than the intrinsic spin-orbit interaction. Therefore, zigzag silicene nanoribbons could act as bipolar and perfect spin filter devices with a large spin-polarized current and a reversible spin polarization in the vicinity of the Fermi energy. We also investigated the effect of edge roughness and found that the bipolar magnetic semiconductor features are robust against edge disorder in silicene nanoribbon junctions. These results may be useful in multifunctional spin devices based on silicene nanoribbons.

  20. Finite-size effects in simulations of electrolyte solutions under periodic boundary conditions

    NASA Astrophysics Data System (ADS)

    Thompson, Jeffrey; Sanchez, Isaac

    The equilibrium properties of charged systems with periodic boundary conditions may exhibit pronounced system-size dependence due to the long range of the Coulomb force. As shown by others, the leading-order finite-size correction to the Coulomb energy of a charged fluid confined to a periodic box of volume V may be derived from sum rules satisfied by the charge-charge correlations in the thermodynamic limit V -> ∞. In classical systems, the relevant sum rule is the Stillinger-Lovett second-moment (or perfect screening) condition. This constraint implies that for large V, periodicity induces a negative bias of -kB T/(2V) in the total Coulomb energy density of a homogeneous classical charged fluid of given density and temperature. We present a careful study of the impact of such finite-size effects on the calculation of solute chemical potentials from explicit-solvent molecular simulations of aqueous electrolyte solutions. National Science Foundation Graduate Research Fellowship Program, Grant No. DGE-1610403.

  1. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.

    Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.

  2. Does Risk Aversion Affect Transmission and Generation Planning? A Western North America Case Study

    DOE PAGES

    Munoz, Francisco; van der Weijde, Adriaan Hendrik; Hobbs, Benjamin F.; ...

    2017-04-07

    Here, we investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We also show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. Furthermore, this model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.

  3. Glassy aging with modified Kohlrausch-Williams-Watts form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen Gupta, Bhaskar; Das, Shankar P.

    2007-12-15

    In this paper, we address the question of whether aging in the nonequilibrium glassy state is controlled by the equilibrium α-relaxation process, which occurs at temperatures above T_g. Recently, Lunkenheimer et al. [Phys. Rev. Lett. 95, 055702 (2005)] proposed a model for the glassy aging data of dielectric relaxation using a modified Kohlrausch-Williams-Watts form exp[-(t_age/τ_age)^β_age]. The aging-time (t_age) dependence of the relaxation time τ_age is defined by these authors through a functional relation involving the corresponding frequency ν(t_age) = 1/(2π τ_age), but the stretching exponent β_age is the same as β_α, the α-relaxation stretching exponent. We present here an alternative functional form for τ_age(t_age) directly involving the relaxation time itself. The proposed model fits the data of Lunkenheimer et al. perfectly with a stretching exponent β_age different from β_α.

  4. Atomic density functional and diagram of structures in the phase field crystal model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankudinov, V. E., E-mail: vladimir@ankudinov.org; Galenko, P. K.; Kropotin, N. V.

    2016-02-15

    The phase field crystal model provides a continuum description of the atomic density over the diffusion time of reactions. We consider a homogeneous structure (liquid) and a perfect periodic crystal, which are constructed from the one-mode approximation of the phase field crystal model. A diagram of 2D structures is constructed from the analytic solutions of the model using atomic density functionals. The diagram predicts equilibrium atomic configurations for transitions from the metastable state and includes the domains of existence of homogeneous, triangular, and striped structures corresponding to a liquid, a body-centered cubic crystal, and a longitudinal cross section of cylindrical tubes. The method developed here is employed for constructing the diagram for the homogeneous liquid phase and the body-centered iron lattice. The expression for the free energy is derived analytically from density functional theory. The specific features of approximating the phase field crystal model are compared with the approximations and conclusions of the weak crystallization and 2D melting theories.

  5. Strain effect on the heat transport properties of bismuth telluride nanofilms with a hole

    NASA Astrophysics Data System (ADS)

    Fang, Te-Hua; Chang, Win-Jin; Wang, Kuan-Yu; Huang, Chao-Chun

    2018-06-01

    We investigated the mechanical behavior of bismuth telluride nanofilms with holes by using an equilibrium molecular dynamics (MD) approach. The holes had diameters of 20, 30, 40, and 50 Å. The thermal conductivity values of the nanofilms were calculated under different strains at different temperatures using a nonequilibrium MD simulation. The simulation revealed that the thermal conductivity of a bismuth telluride nanofilm with a hole decreases with an increase in hole diameter at different strains. For a film with a perfect structure at 300 K, a 48% reduction (from 0.33 to 0.17 W/m K) in the thermal conductivity was observed at a 7% tensile strain. In addition, the thermal conductivity increased by approximately 39% (from 0.33 to 0.46 W/m K) at a 7% compressive strain. A very low value (0.11 W/m K) of thermal conductivity is obtained for the nanofilm with a hole diameter of 50 Å at a 7% tensile strain at 300 K.

  6. A combustion model for studying the effects of ideal gas properties on jet noise

    NASA Astrophysics Data System (ADS)

    Jacobs, Jerin; Tinney, Charles

    2016-11-01

    A theoretical combustion model is developed to simulate the influence of ideal gas effects on various aeroacoustic parameters over a range of equivalence ratios. The motivation is to narrow the gap between laboratory and full-scale jet noise testing. The combustion model is used to model the combustion of propane and of kerosene in air. Gas properties from the combustion model are compared to laboratory data acquired at the National Center for Physical Acoustics at the University of Mississippi, as well as outputs from NASA's Chemical Equilibrium Analysis code. Different jet properties are then studied over a range of equivalence ratios and pressure ratios for propane combustion in air, kerosene combustion in air, and heated air. The findings reveal negligible differences between the three constituents where the density and sound speed ratios are concerned. However, the area ratio required for perfectly expanded flow is shown to be more sensitive to gas properties than to changes in the temperature ratio.
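
    An equilibrium calculation of the kind performed by NASA's Chemical Equilibrium Analysis code can be sketched with the open-source Cantera library, assuming its bundled GRI-Mech 3.0 mechanism (which includes C3H8) is adequate for propane-air mixtures; the conditions below are illustrative only.

```python
import cantera as ct

# Propane-air mixture at a chosen equivalence ratio, equilibrated at
# constant enthalpy and pressure (the adiabatic flame condition).
gas = ct.Solution("gri30.yaml")
gas.TP = 300.0, ct.one_atm
gas.set_equivalence_ratio(1.0, "C3H8", "O2:1.0, N2:3.76")
gas.equilibrate("HP")

# Properties relevant to jet-noise scaling: temperature, molecular
# weight, and specific-heat ratio of the equilibrium products.
print(f"T = {gas.T:.0f} K, MW = {gas.mean_molecular_weight:.2f} kg/kmol")
print(f"gamma = {gas.cp / gas.cv:.3f}")
```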

  7. Two-layer convective heating prediction procedures and sensitivities for blunt body reentry vehicles

    NASA Technical Reports Server (NTRS)

    Bouslog, Stanley A.; An, Michael Y.; Wang, K. C.; Tam, Luen T.; Caram, Jose M.

    1993-01-01

    This paper provides a description of procedures typically used to predict convective heating rates to hypersonic reentry vehicles using the two-layer method. These procedures were used to compute the pitch-plane heating distributions to the Apollo geometry for a wind tunnel test case and for three flight cases. Both simple engineering methods and coupled inviscid/boundary layer solutions were used to predict the heating rates. The sensitivity of the heating results to the choice of metrics, pressure distributions, boundary-layer edge conditions, and wall catalycity used in the heating analysis was evaluated. Streamline metrics, pressure distributions, and boundary-layer edge properties were defined from perfect gas (wind tunnel case) and chemical equilibrium and nonequilibrium (flight cases) inviscid flow-field solutions. The results of this study indicated that the use of CFD-derived metrics and pressures provided better predictions of heating when compared to wind tunnel test data. The study also showed that modeling entropy layer swallowing and ionization had little effect on the heating predictions.

  8. Decoherence in yeast cell populations and its implications for genome-wide expression noise.

    PubMed

    Briones, M R S; Bosco, F

    2009-01-20

    Gene expression "noise" is commonly defined as the stochastic variation of gene expression levels in different cells of the same population under identical growth conditions. Here, we tested whether this "noise" is amplified with time, as a consequence of decoherence in global gene expression profiles (genome-wide microarrays) of synchronized cells. The stochastic component of transcription causes fluctuations that tend to be amplified as time progresses, leading to a decay of correlations of expression profiles, in perfect analogy with elementary relaxation processes. Measuring decoherence, defined here as a decay in the auto-correlation function of yeast genome-wide expression profiles, we found a slowdown in the decay of correlations, opposite to what would be expected if, as in mixing systems, correlations decay exponentially as the equilibrium state is reached. Our results indicate that the populational variation in gene expression (noise) is a consequence of temporal decoherence, in which the slow decay of correlations is a signature of strong interdependence of the transcription dynamics of different genes.

  9. Experimental observation of edge transport in graphene nanostructures

    NASA Astrophysics Data System (ADS)

    Kinikar, Amogh; Sai, T. Phanindra; Bhattacharyya, Semonti; Agarwala, Adhip; Biswas, Tathagata; Sarker, Sanjoy K.; Krishnamurthy, H. R.; Jain, Manish; Shenoy, Vijay B.; Ghosh, Arindam

    The zigzag edges of graphene, whether single or few layers, host zero-energy gapless states and are perfect 1D ballistic conductors. Conclusive observations of electrical conduction through edge states have remained elusive. We report the observation of edge-bound transport in atomic-scale constrictions of single and multilayer suspended graphene created stochastically by nanomechanical exfoliation of graphite. We observe that the conductance is quantized in near multiples of e²/h. Non-equilibrium transport shows a split zero-bias anomaly, and the magneto-conductance is hysteretic, indicating that the electron transport is through spin-polarized edge states in the presence of electron-electron interaction. Atomic force microscope scans of the graphite surface after exfoliation reveal that the final constriction is usually single-layer graphene with a constricting angle of 30°. Tearing along crystallographic angles suggests the tears occur along zigzag and armchair configurations with high fidelity of the edge morphology. We acknowledge the financial support from the DST, Government of India. SS acknowledges support from the NSF (DMR-1508680).

  10. Generalized model of island biodiversity

    NASA Astrophysics Data System (ADS)

    Kessler, David A.; Shnerb, Nadav M.

    2015-04-01

    The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species recapitulate the mainland species. For higher values of the competition parameter, the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "disordered" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glasslike, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical.

  11. Generalized model of island biodiversity.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2015-04-01

    The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species recapitulate the mainland species. For higher values of the competition parameter, the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "disordered" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glasslike, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical.

  12. Cooperate without looking: why we care what people think and not just what they do.

    PubMed

    Hoffman, Moshe; Yoeli, Erez; Nowak, Martin A

    2015-02-10

    Evolutionary game theory typically focuses on actions but ignores motives. Here, we introduce a model that takes into account the motive behind the action. A crucial question is why do we trust people more who cooperate without calculating the costs? We propose a game theory model to explain this phenomenon. One player has the option to "look" at the costs of cooperation, and the other player chooses whether to continue the interaction. If it is occasionally very costly for player 1 to cooperate, but defection is harmful for player 2, then cooperation without looking is a subgame perfect equilibrium. This behavior also emerges in population-based processes of learning or evolution. Our theory illuminates a number of key phenomena of human interactions: authentic altruism, why people cooperate intuitively, one-shot cooperation, why friends do not keep track of favors, why we admire principled people, Kant's second formulation of the Categorical Imperative, taboos, and love.
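
    To illustrate what a subgame perfect equilibrium means operationally, the sketch below solves a toy extensive-form game by backward induction. The payoffs are invented for illustration and do not reproduce the paper's repeated-game analysis, though they are chosen so that cooperating without looking comes out as the equilibrium path.

```python
# Decision nodes are dicts; leaves are (payoff_P1, payoff_P2) tuples.
game = {
    "player": "P1",
    "moves": {
        "cooperate_without_looking": {
            "player": "P2",
            "moves": {"continue": (3, 3), "exit": (0, 0)},
        },
        "look_then_cooperate": {
            "player": "P2",
            "moves": {"continue": (2, 3), "exit": (0, 0)},
        },
        "defect": {
            "player": "P2",
            "moves": {"continue": (4, -2), "exit": (0, 0)},
        },
    },
}

def backward_induction(node):
    """Return (payoffs, on-path action choices) for a solved subgame."""
    if isinstance(node, tuple):      # leaf
        return node, {}
    idx = 0 if node["player"] == "P1" else 1
    best_payoff, best_plan = None, None
    for action, child in node["moves"].items():
        payoff, plan = backward_induction(child)
        if best_payoff is None or payoff[idx] > best_payoff[idx]:
            best_payoff = payoff
            best_plan = {**plan, node["player"]: action}
    return best_payoff, best_plan

payoff, plan = backward_induction(game)
print(payoff, plan)  # P2 exits after "defect", so P1 cooperates without looking
```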

  13. Quantized edge modes in atomic-scale point contacts in graphene.

    PubMed

    Kinikar, Amogh; Phanindra Sai, T; Bhattacharyya, Semonti; Agarwala, Adhip; Biswas, Tathagata; Sarker, Sanjoy K; Krishnamurthy, H R; Jain, Manish; Shenoy, Vijay B; Ghosh, Arindam

    2017-07-01

    The zigzag edges of single- or few-layer graphene are perfect one-dimensional conductors owing to a set of gapless states that are topologically protected against backscattering. Direct experimental evidence of these states has been limited so far to their local thermodynamic and magnetic properties, determined by the competing effects of edge topology and electron-electron interaction. However, experimental signatures of edge-bound electrical conduction have remained elusive, primarily due to the lack of graphitic nanostructures with low structural and/or chemical edge disorder. Here, we report the experimental detection of edge-mode electrical transport in suspended atomic-scale constrictions of single and multilayer graphene created during nanomechanical exfoliation of highly oriented pyrolytic graphite. The edge-mode transport leads to the observed quantization of conductance close to multiples of G0 = 2e²/h. At the same time, conductance plateaux at G0/2 and a split zero-bias anomaly in non-equilibrium transport suggest conduction via spin-polarized states in the presence of an electron-electron interaction.

  14. Game theoretic sensor management for target tracking

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh; Douville, Philip; Yang, Chun; Kadar, Ivan

    2010-04-01

    This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model for sensor management in multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent represents a sensor, and each sensor maximizes its utility using a game approach. The greediness of each sensor is limited by the fact that the sensor-to-target assignment efficiency decreases if too many sensor resources are assigned to the same target. This is similar to market mechanisms in the real world, such as agreements between buyers and sellers in an auction market. Sensors are willing to switch targets so that they can obtain their highest utility and apply their resources most efficiently. Our subgame-perfect-equilibrium-based negotiation strategies assign sensors to targets dynamically and in a distributed manner. Numerical simulations are performed to demonstrate our sensor-based negotiation approach for distributed sensor management.

  15. Cooperate without looking: Why we care what people think and not just what they do

    PubMed Central

    Hoffman, Moshe; Yoeli, Erez; Nowak, Martin A.

    2015-01-01

    Evolutionary game theory typically focuses on actions but ignores motives. Here, we introduce a model that takes into account the motive behind the action. A crucial question is why do we trust people more who cooperate without calculating the costs? We propose a game theory model to explain this phenomenon. One player has the option to “look” at the costs of cooperation, and the other player chooses whether to continue the interaction. If it is occasionally very costly for player 1 to cooperate, but defection is harmful for player 2, then cooperation without looking is a subgame perfect equilibrium. This behavior also emerges in population-based processes of learning or evolution. Our theory illuminates a number of key phenomena of human interactions: authentic altruism, why people cooperate intuitively, one-shot cooperation, why friends do not keep track of favors, why we admire principled people, Kant’s second formulation of the Categorical Imperative, taboos, and love. PMID:25624473

  16. Clinical high-resolution mapping of the proteoglycan-bound water fraction in articular cartilage of the human knee joint.

    PubMed

    Bouhrara, Mustapha; Reiter, David A; Sexton, Kyle W; Bergeron, Christopher M; Zukley, Linda M; Spencer, Richard G

    2017-11-01

    We applied our recently introduced Bayesian analytic method to achieve clinically-feasible in-vivo mapping of the proteoglycan water fraction (PgWF) of human knee cartilage with improved spatial resolution and stability as compared to existing methods. Multicomponent driven equilibrium single-pulse observation of T1 and T2 (mcDESPOT) datasets were acquired from the knees of two healthy young subjects and one older subject with previous knee injury. Each dataset was processed using Bayesian Monte Carlo (BMC) analysis incorporating a two-component tissue model. We assessed the performance and reproducibility of BMC and of the conventional analysis of stochastic region contraction (SRC) in the estimation of PgWF. Stability of the BMC analysis of PgWF was tested by comparing independent high-resolution (HR) datasets from each of the two young subjects. Unlike SRC, the BMC-derived maps from the two HR datasets were essentially identical. Furthermore, SRC maps showed substantial random variation in estimated PgWF, and mean values that differed from those obtained using BMC. In addition, PgWF maps derived from conventional low-resolution (LR) datasets exhibited partial volume and magnetic susceptibility effects. These artifacts were absent in HR PgWF images. Finally, our analysis showed regional variation in PgWF estimates, and substantially higher values in the younger subjects as compared to the older subject. BMC-mcDESPOT permits HR in-vivo mapping of PgWF in human knee cartilage in a clinically-feasible acquisition time. HR mapping reduces the impact of partial volume and magnetic susceptibility artifacts compared to LR mapping. Finally, BMC-mcDESPOT demonstrated excellent reproducibility in the determination of PgWF. Published by Elsevier Inc.

  17. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations. Part II: Sensitivity Tests and Results

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.

  18. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Ying -Qi; Segall, Paul; Bradley, Andrew

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  19. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    NASA Astrophysics Data System (ADS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-10-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  20. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE PAGES

    Wong, Ying -Qi; Segall, Paul; Bradley, Andrew; ...

    2017-10-04

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  1. Constraining the magmatic system at Mount St. Helens (2004–2008) using Bayesian inversion with physics-based models including gas escape and crystallization

    USGS Publications Warehouse

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle R.

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt%) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  2. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability using High-Resolution Cloud Observations

    NASA Astrophysics Data System (ADS)

    Norris, P. M.; da Silva, A. M., Jr.

    2016-12-01

    Norris and da Silva recently published a method to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation (CDA). The gridcolumn model includes assumed-PDF intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used are MODIS cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. The new approach not only significantly reduces mean and standard deviation biases with respect to the assimilated observables, but also improves the simulated rotational-Ramman scattering cloud optical centroid pressure against independent (non-assimilated) retrievals from the OMI instrument. One obvious difficulty for the method, and other CDA methods, is the lack of information content in passive cloud observables on cloud vertical structure, beyond cloud-top and thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard is helpful, better honoring inversion structures in the background state.

  3. Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective

    PubMed Central

    Qian, Xiaoning; Dougherty, Edward R.

    2017-01-01

    The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268

  4. An introduction to using Bayesian linear regression with clinical data.

    PubMed

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
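
    As a minimal illustration of the regression setup the article walks through (the article itself works in R, typically via MCMC; the conjugate closed-form sketch below, with a residual variance assumed known, is a simplification, and all variable names and numbers are hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        x = rng.normal(size=n)                        # e.g., trait anxiety (standardized)
        y = 0.5 * x + rng.normal(scale=1.0, size=n)   # e.g., ERN amplitude (simulated)

        X = np.column_stack([np.ones(n), x])          # design matrix with intercept
        sigma2 = 1.0                                  # residual variance, assumed known
        tau2 = 10.0                                   # prior: beta ~ N(0, tau2 * I)

        # Conjugate posterior: N(mu_n, V_n) with
        # V_n = (X'X/sigma2 + I/tau2)^-1 and mu_n = V_n X'y / sigma2
        V_n = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
        mu_n = V_n @ X.T @ y / sigma2

        # 95% credible interval for the slope from the Gaussian posterior
        sd = np.sqrt(V_n[1, 1])
        print("slope:", mu_n[1], "95% CI:", (mu_n[1] - 1.96 * sd, mu_n[1] + 1.96 * sd))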

  5. A SAS Interface for Bayesian Analysis with WinBUGS

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki

    2008-01-01

    Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…

  6. BMDS: A Collection of R Functions for Bayesian Multidimensional Scaling

    ERIC Educational Resources Information Center

    Okada, Kensuke; Shigemasu, Kazuo

    2009-01-01

    Bayesian multidimensional scaling (MDS) has attracted a great deal of attention because: (1) it provides a better fit than do classical MDS and ALSCAL; (2) it provides estimation errors of the distances; and (3) the Bayesian dimension selection criterion, MDSIC, provides a direct indication of optimal dimensionality. However, Bayesian MDS is not…

  7. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study

    ERIC Educational Resources Information Center

    Kaplan, David; Chen, Jianshen

    2012-01-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…
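
    A rough sketch of the two-step idea follows (hypothetical variable names and simulated data; plain frequentist fits stand in here for the prior-informed Bayesian estimation of the propensity score and outcome equations that the paper actually proposes):

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(2)
        n = 500
        x = rng.normal(size=(n, 3))                    # confounders
        p = 1 / (1 + np.exp(-(x @ [0.8, -0.5, 0.3])))  # true treatment probability
        z = rng.binomial(1, p)                         # treatment indicator
        y = 2.0 * z + x @ [1.0, 1.0, -1.0] + rng.normal(size=n)  # true effect = 2.0

        # Step 1: fit the propensity score equation (the paper places priors
        # on these coefficients instead of using plain maximum likelihood).
        e_hat = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

        # Step 2: outcome equation using the estimated propensity score
        # as a covariate.
        design = np.column_stack([z, e_hat])
        fit = LinearRegression().fit(design, y)
        print("estimated treatment effect:", fit.coef_[0])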

  8. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.

  9. Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research.

    PubMed

    Yalch, Matthew M

    2016-03-01

    Several contemporary researchers have noted the virtues of Bayesian methods of data analysis. Although debates continue about whether conventional or Bayesian statistics is the "better" approach for researchers in general, there are reasons why Bayesian methods may be well suited to the study of psychological trauma in particular. This article describes how Bayesian statistics offers practical solutions to the problems of data non-normality, small sample size, and missing data common in research on psychological trauma. After a discussion of these problems and the effects they have on trauma research, this article explains the basic philosophical and statistical foundations of Bayesian statistics and how it provides solutions to these problems using an applied example. Results of the literature review and the accompanying example indicate the utility of Bayesian statistics in addressing problems common in trauma research. Bayesian statistics provides a set of methodological tools and a broader philosophical framework that is useful for trauma researchers. Methodological resources are also provided so that interested readers can learn more. (c) 2016 APA, all rights reserved.

  10. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but allowed a decision to be made with fewer pulses at relatively high radiation levels. In addition, for cases where the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source data are otherwise averaged with the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
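
    The pulse-by-pulse updating that gives the time-interval method its speed advantage can be sketched with a conjugate Gamma prior on the count rate; with exponential inter-arrival times, the posterior after n pulses is Gamma(a0 + n, b0 + sum of intervals). The rates and decision rule below are toy assumptions, not the paper's:

        import numpy as np

        rng = np.random.default_rng(3)
        lam_bkg, lam_src = 1.0, 5.0        # background and source-present rates (cps)
        a0, b0 = 1.0, 1.0                  # Gamma(a0, b0) prior on the rate

        # Simulated pulse inter-arrival times at the elevated rate
        intervals = rng.exponential(1.0 / lam_src, size=100)

        # Sequential conjugate update after each pulse. A decision can be made
        # as soon as the posterior clearly excludes the background rate, rather
        # than waiting for a fixed count time to elapse.
        a, b = a0, b0
        for n, t in enumerate(intervals, start=1):
            a, b = a + 1, b + t
            post_mean = a / b
            # crude illustrative rule: flag when the posterior mean rate is
            # well above background after a handful of pulses
            if post_mean > 3 * lam_bkg and n >= 5:
                print(f"alarm after {n} pulses, posterior mean {post_mean:.2f} cps")
                break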

  11. Metamaterial Perfect Absorber Analyzed by a Meta-Cavity Model Consisting of Multilayer Metasurfaces (Postprint)

    DTIC Science & Technology

    2017-09-05

    A metamaterial perfect absorber behaves as a meta-cavity bounded between a resonant metasurface and a metallic thin-film reflector, with relevance to cavity quantum electrodynamics devices. Subject terms: metamaterial; meta-cavity; metallic thin-film reflector; Fabry-Perot cavity resonance.

  12. Casimir effect for perfect electromagnetic conductors (PEMCs): a sum rule for attractive/repulsive forces

    NASA Astrophysics Data System (ADS)

    Rode, Stefan; Bennett, Robert; Buhmann, Stefan Yoshi

    2018-04-01

    We discuss the Casimir effect for boundary conditions involving perfect electromagnetic conductors, which interpolate between perfect electric conductors and perfect magnetic conductors. Based on the corresponding reciprocal Green’s tensor we construct the Green’s tensor for two perfectly reflecting plates with magnetoelectric coupling (non-reciprocal media) within the framework of macroscopic quantum electrodynamics. We calculate the Casimir force between two arbitrary perfect electromagnetic conductor plates, resulting in a universal analytic expression that connects the attractive Casimir force with the repulsive Boyer force. We relate the results to a duality symmetry of electromagnetism.
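
    For reference, the two ideal limits that the PEMC expression interpolates between are the standard Casimir and Boyer results for plate separation a (textbook results quoted here, not the paper's general formula):

        % Two perfect electric conductors give the attractive Casimir pressure;
        % a perfect electric conductor facing a perfect magnetic conductor gives
        % Boyer's repulsive counterpart, 7/8 of the magnitude.
        F_{\mathrm{PEC\text{-}PEC}}/A \;=\; -\,\frac{\pi^{2}\hbar c}{240\,a^{4}},
        \qquad
        F_{\mathrm{PEC\text{-}PMC}}/A \;=\; +\,\frac{7}{8}\,\frac{\pi^{2}\hbar c}{240\,a^{4}}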

  13. Embedding the results of focussed Bayesian fusion into a global context

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael

    2014-05-01

    Bayesian statistics offers a well-founded and powerful fusion methodology, also for the fusion of heterogeneous information sources. However, except in special cases, the needed posterior distribution is not analytically derivable. As a consequence, Bayesian fusion may cause unacceptably high computational and storage costs in practice. Local Bayesian fusion approaches aim at reducing the complexity of the Bayesian fusion methodology significantly. This is done by concentrating the actual Bayesian fusion on the potentially most task-relevant parts of the domain of the Properties of Interest. Our research on these approaches is motivated by an analogy to criminal investigations, where criminalists likewise pursue clues only locally. This publication follows previous publications on a special local Bayesian fusion technique called focussed Bayesian fusion, in which the actual calculation of the posterior distribution is completely restricted to a suitably chosen local context. As a result, the global posterior distribution is not completely determined, and strategies for using the results of a focussed Bayesian analysis appropriately are needed. In this publication, we primarily contrast different ways of embedding the results of focussed Bayesian fusion explicitly into a global context. To obtain a unique global posterior distribution, we analyze the application of the Maximum Entropy Principle, which has been shown to be successfully applicable in metrology and in several other areas. To address the special need for making further decisions subsequent to the actual fusion task, we further analyze criteria for decision making under partial information.

  14. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been among the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly reduced. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of newly available sources of information and on improved methods for assessing predictive performance.

  15. The Psychology of Bayesian Reasoning

    DTIC Science & Technology

    2014-10-21

    The psychology of Bayesian reasoning. David R. Mandel, Socio-Cognitive Systems Section, Defence Research and Development Canada. Keywords: belief revision, subjective probability, human judgment, psychological methods. The paper draws attention to some important problems with the conventional approach to studying Bayesian reasoning in psychology that has been dominant since the 1970s.

  16. Bayesian Just-So Stories in Psychology and Neuroscience

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.

    2012-01-01

    According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak.…

  17. Teaching Bayesian Statistics in a Health Research Methodology Program

    ERIC Educational Resources Information Center

    Pullenayegum, Eleanor M.; Thabane, Lehana

    2009-01-01

    Despite the appeal of Bayesian methods in health research, they are not widely used. This is partly due to a lack of courses in Bayesian methods at an appropriate level for non-statisticians in health research. Teaching such a course can be challenging because most statisticians have been taught Bayesian methods using a mathematical approach, and…

  18. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are also obtained. Both symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
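
    The role of the two kinds of loss function can be illustrated directly from posterior draws: squared-error (symmetric) loss yields the posterior mean, while an asymmetric loss yields a different Bayes estimate. The LINEX form used below is a common choice in this literature but is an assumption here, and the Gamma draws are a placeholder for the paper's MCMC output:

        import numpy as np

        # Placeholder posterior draws for a parameter (standing in for MCMC
        # output from the exponentiated Weibull fit)
        rng = np.random.default_rng(4)
        theta = rng.gamma(shape=5.0, scale=0.4, size=100_000)

        # Squared-error (symmetric) loss -> Bayes estimate is the posterior mean
        est_sel = theta.mean()

        # LINEX (asymmetric) loss L(d) = exp(c d) - c d - 1, d = estimate - theta
        # -> Bayes estimate is -(1/c) * log E[exp(-c * theta)]
        def linex_estimate(draws, c):
            return -np.log(np.mean(np.exp(-c * draws))) / c

        print(est_sel, linex_estimate(theta, c=1.0), linex_estimate(theta, c=-1.0))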

  19. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    PubMed

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.

  20. BATSE gamma-ray burst line search. 2: Bayesian consistency methodology

    NASA Technical Reports Server (NTRS)

    Band, D. L.; Ford, L. A.; Matteson, J. L.; Briggs, M.; Paciesas, W.; Pendleton, G.; Preece, R.; Palmer, D.; Teegarden, B.; Schaefer, B.

    1994-01-01

    We describe a Bayesian methodology to evaluate the consistency between the reported Ginga and Burst and Transient Source Experiment (BATSE) detections of absorption features in gamma-ray burst spectra. Currently no features have been detected by BATSE, but this methodology will still be applicable if and when such features are discovered. The Bayesian methodology permits the comparison of hypotheses regarding the two detectors' observations and makes explicit the subjective aspects of our analysis (e.g., the quantification of our confidence in detector performance). We also present non-Bayesian consistency statistics. Based on preliminary calculations of line detectability, we find that both the Bayesian and non-Bayesian techniques show that the BATSE and Ginga observations are consistent given our understanding of these detectors.

  1. Application of Bayesian Approach in Cancer Clinical Trial

    PubMed Central

    Bhattacharjee, Atanu

    2014-01-01

    The application of the Bayesian approach in clinical trials has become more useful than the classical method, offering benefits from the design phase through to analysis. Straightforward statements about a drug's treatment effect can be obtained through the Bayesian approach. Complex computational problems are simple to handle with Bayesian techniques. The technique is feasible only in the presence of prior information about the data. Inference can be established through posterior estimates. However, some limitations are present in this method. The objective of this work was to explore the several merits and demerits of the Bayesian approach in cancer research. The review of the technique will be helpful for clinical researchers involved in oncology to explore the limitations and power of Bayesian techniques. PMID:29147387

  2. Probability distributions of molecular observables computed from Markov models. II. Uncertainties in observables and their time-evolution

    NASA Astrophysics Data System (ADS)

    Chodera, John D.; Noé, Frank

    2010-09-01

    Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
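
    A stripped-down version of the idea: sample transition matrices from independent Dirichlet posteriors over observed transition counts and propagate that uncertainty into an equilibrium observable. The paper's sampler additionally enforces microscopic reversibility and samples per-state observable parameters, which this sketch omits; all numbers are toy values:

        import numpy as np

        rng = np.random.default_rng(5)

        # Observed transition counts between 3 states (toy numbers)
        C = np.array([[90,  8,  2],
                      [10, 70, 20],
                      [ 3, 25, 72]])

        obs = np.array([0.0, 0.5, 1.0])   # observable value assigned to each state

        vals = []
        for _ in range(5000):
            # Posterior draw: each row of T ~ Dirichlet(counts + 1)
            T = np.vstack([rng.dirichlet(row + 1.0) for row in C])
            # Stationary distribution: left eigenvector of T for eigenvalue 1
            w, v = np.linalg.eig(T.T)
            pi = np.real(v[:, np.argmax(np.real(w))])
            pi = pi / pi.sum()
            vals.append(pi @ obs)         # equilibrium expectation of the observable

        vals = np.array(vals)
        print("observable:", vals.mean(), "+/-", vals.std())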

  3. When new human-modified habitats favour the expansion of an amphibian pioneer species: Evolutionary history of the natterjack toad (Bufo calamita) in a coal basin.

    PubMed

    Faucher, Leslie; Hénocq, Laura; Vanappelghem, Cédric; Rondel, Stéphanie; Quevillart, Robin; Gallina, Sophie; Godé, Cécile; Jaquiéry, Julie; Arnaud, Jean-François

    2017-09-01

    Human activities affect microevolutionary dynamics by inducing environmental changes. In particular, land cover conversion and loss of native habitats decrease genetic diversity and jeopardize the adaptive ability of populations. Nonetheless, new anthropogenic habitats can also promote the successful establishment of emblematic pioneer species. We investigated this issue by examining the population genetic features and evolutionary history of the natterjack toad (Bufo [Epidalea] calamita) in northern France, where populations can be found in native coastal habitats and coalfield habitats shaped by European industrial history, along with an additional set of European populations located outside this focal area. We predicted contrasting patterns of genetic structure, with newly settled coalfield populations departing from migration-drift equilibrium. As expected, coalfield populations showed a mosaic of genetically divergent populations with short-range patterns of gene flow, and native coastal populations indicated an equilibrium state with an isolation-by-distance pattern suggestive of postglacial range expansion. However, coalfield populations exhibited (i) high levels of genetic diversity, (ii) no evidence of local inbreeding or reduced effective population size and (iii) multiple maternal mitochondrial lineages, a genetic footprint depicting independent colonization events. Furthermore, approximate Bayesian computations suggested several evolutionary trajectories from ancient isolation in glacial refugia during the Pleistocene, with biogeographical signatures of recent expansion probably confounded by human-mediated mixing of different lineages. From an evolutionary and conservation perspective, this study highlights the ecological value of industrial areas, provided that ongoing regional gene flow is ensured within the existing lineage boundaries. © 2017 John Wiley & Sons Ltd.

  4. A portrait of a sucker using landscape genetics: how colonization and life history undermine the idealized dendritic metapopulation.

    PubMed

    Salisbury, Sarah J; McCracken, Gregory R; Keefe, Donald; Perry, Robert; Ruzzante, Daniel E

    2016-09-01

    Dendritic metapopulations have been attributed unique properties by in silico studies, including an elevated genetic diversity relative to a panmictic population of equal total size. These predictions have not been rigorously tested in nature, nor has there been full consideration of the interacting effects among contemporary landscape features, colonization history and life history traits of the target species. We tested for the effects of dendritic structure as well as the relative importance of life history, environmental barriers and historical colonization on the neutral genetic structure of a longnose sucker (Catostomus catostomus) metapopulation in the Kogaluk watershed of northern Labrador, Canada. Samples were collected from eight lakes, genotyped with 17 microsatellites, and aged using opercula. Lakes varied in differentiation, historical and contemporary connectivity, and life history traits. Isolation by distance was detected only by removing two highly genetically differentiated lakes, suggesting a lack of migration-drift equilibrium and the lingering influence of historical factors on genetic structure. Bayesian analyses supported colonization via the Kogaluk's headwaters. The historical concentration of genetic diversity in headwaters inferred by this result was supported by high historical and contemporary effective sizes of the headwater lake, T-Bone. Alternatively, reduced allelic richness in headwaters confirmed the dendritic structure's influence on gene flow, but this did not translate to an elevated metapopulation effective size. A lack of equilibrium and upstream migration may have dampened the effects of dendritic structure. We suggest that interacting historical and contemporary factors prevent the achievement of the idealized traits of a dendritic metapopulation in nature. © 2016 John Wiley & Sons Ltd.

  5. 76 FR 49751 - Perfect Fitness, Provisional Acceptance of a Settlement Agreement and Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-11

    CONSUMER PRODUCT SAFETY COMMISSION [CPSC Docket No. 11-C0009] Perfect Fitness, Provisional Acceptance of a Settlement Agreement and Order. Published below is a provisionally-accepted Settlement Agreement with Perfect Fitness, entered in accordance with 16 CFR 1118.20 between Perfect Fitness and staff (``Staff'') of the United States Consumer Product Safety Commission...

  6. Parallelization of Lower-Upper Symmetric Gauss-Seidel Method for Chemically Reacting Flow

    NASA Technical Reports Server (NTRS)

    Yoon, Seokkwan; Jost, Gabriele; Chang, Sherry

    2005-01-01

    Development of technologies for exploration of the solar system has revived an interest in computational simulation of chemically reacting flows, since planetary probe vehicles exhibit non-equilibrium phenomena during atmospheric entry at a planet or moon as well as during reentry to Earth. Stability in combustion is essential for new propulsion systems. Numerical solution of real-gas flows often increases computational work by an order of magnitude compared to perfect gas flow, partly because of the increased complexity of the equations to solve. Recently, as part of Project Columbia, NASA has integrated a cluster of interconnected SGI Altix systems to provide a ten-fold increase in current supercomputing capacity that includes an SGI Origin system. Both the new and existing machines are based on cache coherent non-uniform memory access architecture. The Lower-Upper Symmetric Gauss-Seidel (LU-SGS) relaxation method has been implemented into both perfect and real gas flow codes, including the Real-Gas Aerodynamic Simulator (RGAS). However, the vectorized RGAS code runs inefficiently on cache-based shared-memory machines such as the SGI systems. Parallelization of a Gauss-Seidel method is nontrivial due to its sequential nature. The LU-SGS method has been vectorized on an oblique plane in the INS3D-LU code, one of the base codes for the NAS Parallel Benchmarks. The oblique plane has been called a hyperplane by computer scientists. It is straightforward to parallelize a Gauss-Seidel method by partitioning the hyperplanes once they are formed. Another approach to parallelization is to schedule the processors as a software pipeline. Both hyperplane and pipeline methods have been implemented using OpenMP directives. The present paper reports the performance of the parallelized RGAS code on SGI Origin and Altix systems.
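
    The hyperplane idea is easy to show in a few lines. The Python sketch below (standing in for the Fortran/OpenMP implementation; grid size and boundary data are arbitrary) performs one Gauss-Seidel sweep for a 5-point Laplace stencil: every cell on a diagonal i + j = const depends only on earlier diagonals, so all cells of one diagonal can be updated in parallel:

        import numpy as np

        n = 6
        u = np.zeros((n, n))
        u[0, :] = u[:, 0] = 1.0                   # toy boundary data

        # One Gauss-Seidel sweep, ordered by hyperplanes i + j = k
        for k in range(2, 2 * (n - 1)):
            idx = [(i, k - i) for i in range(1, n - 1) if 1 <= k - i < n - 1]
            # Every (i, j) in idx is independent: its updated neighbors
            # (i-1, j) and (i, j-1) lie on already-swept hyperplanes, while
            # (i+1, j) and (i, j+1) still hold old values. An OpenMP parallel
            # loop over idx is therefore race-free.
            for i, j in idx:
                u[i, j] = 0.25 * (u[i - 1, j] + u[i, j - 1] + u[i + 1, j] + u[i, j + 1])

        print(u.round(3))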

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B.; The Peac Institute of Multiscale Sciences, Chengdu, Sichuan 610207; Wang, L.

    With large-scale molecular dynamics simulations, we investigate shock response of He nanobubbles in single crystal Cu. For sufficient bubble size or internal pressure, a prismatic dislocation loop may form around a bubble in unshocked Cu. The internal He pressure helps to stabilize the bubble against plastic deformation. However, the prismatic dislocation loops may partially heal but facilitate nucleation of new shear and prismatic dislocation loops. For strong shocks, the internal pressure also impedes internal jetting, while a bubble assists local melting; a high speed jet breaks a He bubble into pieces dispersed among Cu. Near-surface He bubbles may burst and form high velocity ejecta containing atoms and small fragments, while the ejecta velocities do not follow the three-dimensional Maxwell-Boltzmann distributions expected for thermal equilibrium. The biggest fragment size decreases with increasing shock strength. With a decrease in ligament thickness or an increase in He bubble size, the critical shock strength required for bubble bursting decreases, while the velocity range, space extension, and average velocity component along the shock direction increase. Small bubbles are more efficient in mass ejecting. Compared to voids and perfect single crystal Cu, He bubbles have pronounced effects on shock response including bubble/void collapse, Hugoniot elastic limit (HEL), deformation mechanisms, and surface jetting. HEL is the highest for perfect single crystal Cu with the same orientations, followed by He bubbles without pre-existing prismatic dislocation loops, and then voids. Complete void collapse and shear dislocations occur for embedded voids, as opposed to partial collapse, and shear and possibly prismatic dislocations for He bubbles. He bubbles lower the threshold shock strength for ejecta formation, and increase ejecta velocity and ejected mass.

  8. Bayesian just-so stories in psychology and neuroscience.

    PubMed

    Bowers, Jeffrey S; Davis, Colin J

    2012-05-01

    According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian "just so" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal. 2012 APA, all rights reserved.

  9. Bayesian models: A statistical primer for ecologists

    USGS Publications Warehouse

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book: presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  10. Are there really no evolutionarily stable strategies in the iterated prisoner's dilemma?

    PubMed

    Lorberbaum, Jeffrey P; Bohning, Daryl E; Shastri, Ananda; Sine, Lauren E

    2002-01-21

    The evolutionary form of the iterated prisoner's dilemma (IPD) is a repeated game where players strategically choose whether to cooperate with or exploit opponents and reproduce in proportion to game success. It has been widely used to study the evolution of cooperation among selfish agents. In the past 15 years, researchers proved over a series of papers that there is no evolutionarily stable strategy (ESS) in the IPD when players maintain long-term relationships. This makes it difficult to make predictions about what strategies can actually persist as prevalent in a population over time. Here, we show that this no ESS finding may be a mathematical technicality, relying on implausible players who are "too perfect" in that their probability of cooperating on any move is arbitrarily close to either 0 or 1. Specifically, in the no ESS proof, all strategies were allowed, meaning that after a strategy X experiences any history H, X cooperates with an unrestricted probability p(X, H) where 0 ≤ p(X, H) ≤ 1. Here, we restrict strategies to the set S in which X is a member of S [corrected] if after any H, X cooperates with a restricted probability p(X, H) where e ≤ p(X, H) ≤ 1-e and 0
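
    The e-restriction is straightforward to experiment with in simulation: clip every strategy's intended cooperation probability into [e, 1-e] and compare a resident's payoff against a rare mutant's, as the ESS conditions require. The sketch below is a toy illustration, not the paper's analysis; the payoff values and first-move convention are assumptions:

        import numpy as np

        T, R, P, S = 5, 3, 1, 0        # standard PD payoffs (assumed values)
        e = 0.01                       # cooperation prob is clipped into [e, 1-e]
        rng = np.random.default_rng(6)

        def intended(strategy, opp_last):
            if strategy == "TFT":      # intend to copy opponent's last move
                return 1.0 if opp_last else 0.0
            return 0.0                 # "ALLD": intend to always defect

        def payoff(c1, c2):
            return (R, R) if c1 and c2 else (S, T) if c1 else (T, S) if c2 else (P, P)

        def match(s1, s2, rounds=10000):
            last1 = last2 = True       # convention: start as if after mutual cooperation
            tot1 = tot2 = 0.0
            for _ in range(rounds):
                p1 = np.clip(intended(s1, last2), e, 1 - e)
                p2 = np.clip(intended(s2, last1), e, 1 - e)
                c1, c2 = rng.uniform() < p1, rng.uniform() < p2
                a, b = payoff(c1, c2)
                tot1, tot2, last1, last2 = tot1 + a, tot2 + b, c1, c2
            return tot1 / rounds, tot2 / rounds

        print("TFT  vs TFT:", match("TFT", "TFT"))
        print("ALLD vs TFT:", match("ALLD", "TFT"))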

  11. Social Interactions under Incomplete Information: Games, Equilibria, and Expectations

    NASA Astrophysics Data System (ADS)

    Yang, Chao

    My dissertation research investigates interactions of agents' behaviors through social networks when some information is not shared publicly, focusing on solutions to a series of challenging problems in empirical research, including heterogeneous expectations and multiple equilibria. The first chapter, "Social Interactions under Incomplete Information with Heterogeneous Expectations", extends the current literature in social interactions by devising econometric models and estimation tools with private information in not only the idiosyncratic shocks but also some exogenous covariates. For example, when analyzing peer effects in class performances, it was previously assumed that all control variables, including individual IQ and SAT scores, are known to the whole class, which is unrealistic. This chapter allows such exogenous variables to be private information and models agents' behaviors as outcomes of a Bayesian Nash Equilibrium in an incomplete information game. The distribution of equilibrium outcomes can be described by the equilibrium conditional expectations, which are unique when the parameters are within a reasonable range according to the contraction mapping theorem in function spaces. The equilibrium conditional expectations are heterogeneous in both exogenous characteristics and the private information, which makes estimation in this model more demanding than in previous ones. This problem is solved in a computationally efficient way by combining the quadrature method and the nested fixed point maximum likelihood estimation. In Monte Carlo experiments, if some exogenous characteristics are private information and the model is estimated under the mis-specified hypothesis that they are known to the public, estimates will be biased. Applying this model to municipal public spending in North Carolina, significant negative correlations between contiguous municipalities are found, showing free-riding effects. The second chapter, "A Tobit Model with Social Interactions under Incomplete Information", is an application of the first chapter to censored outcomes, corresponding to the situation when agents' behaviors are subjected to some binding restrictions. In an interesting empirical analysis of property tax rates set by North Carolina municipal governments, it is found that there is a significant positive correlation among nearby municipalities. Additionally, some private information about its own residents is used by a municipal government to predict others' tax rates, which enriches current empirical work about tax competition. The third chapter, "Social Interactions under Incomplete Information with Multiple Equilibria", extends the first chapter by investigating effective estimation methods when the condition for a unique equilibrium may not be satisfied. With multiple equilibria, the previous model is incomplete due to the unobservable equilibrium selection. Neither conventional likelihoods nor moment conditions can be used to estimate parameters without further specifications. Although there are some solutions to this issue in the current literature, they are based on strong assumptions, such as the assumption that agents with the same observable characteristics play the same strategy.
    This paper relaxes those assumptions and extends the all-solution method used to estimate discrete choice games to a setting with both discrete and continuous choices, bounded and unbounded outcomes, and a general form of incomplete information, where the existence of a pure strategy equilibrium has been an open question for a long time. Using differential topology and functional analysis, it is found that when all exogenous characteristics are public information, there are a finite number of equilibria. With privately known exogenous characteristics, the equilibria can be represented by a compact set in a Banach space and approximated by a finite set. As a result, a finite-state probability mass function can be used to specify a probability measure for equilibrium selection, which completes the model. From Monte Carlo experiments on two types of binary choice models, it is found that assuming equilibrium uniqueness can introduce estimation biases when the true value of the interaction intensity is large and there are multiple equilibria in the data generating process.
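
    The contraction-mapping computation of equilibrium conditional expectations can be sketched for the simplest case of a binary-choice game with publicly observed covariates. In the dissertation's setting the covariates are private, so the fixed point lives in a function space and is evaluated by quadrature; everything below is an illustrative simplification with made-up numbers:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        n = 100
        X = rng.normal(size=n)            # public covariates
        beta, delta = 0.5, 0.4            # small |delta| keeps the map a contraction
        # Row-stochastic network: everyone weights everyone else equally
        G = (np.ones((n, n)) - np.eye(n)) / (n - 1)

        # Equilibrium choice probabilities solve p = Phi(beta*X + delta*G p);
        # iterate the map until it converges to the (unique) fixed point.
        p = np.full(n, 0.5)
        for _ in range(200):
            p_new = norm.cdf(beta * X + delta * (G @ p))
            if np.max(np.abs(p_new - p)) < 1e-10:
                break
            p = p_new
        print("converged mean choice probability:", p.mean())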

  12. How the Bayesians Got Their Beliefs (and What Those Beliefs Actually Are): Comment on Bowers and Davis (2012)

    ERIC Educational Resources Information Center

    Griffiths, Thomas L.; Chater, Nick; Norris, Dennis; Pouget, Alexandre

    2012-01-01

    Bowers and Davis (2012) criticize Bayesian modelers for telling "just so" stories about cognition and neuroscience. Their criticisms are weakened by not giving an accurate characterization of the motivation behind Bayesian modeling or the ways in which Bayesian models are used and by not evaluating this theoretical framework against specific…

  13. Construction of monitoring model and algorithm design on passenger security during shipping based on improved Bayesian network.

    PubMed

    Wang, Jiali; Zhang, Qingnian; Ji, Wenfeng

    2014-01-01

    A large amount of data is needed to compute an objective Bayesian network, but such data are hard to obtain in practice. The calculation method of the Bayesian network was improved in this paper, yielding a fuzzy-precise Bayesian network. The fuzzy-precise Bayesian network was then used to reason over the Bayesian network model when data are limited. The security of passengers during shipping is affected by various factors and is hard to predict and control. An index system capturing the factors that impact passenger safety during shipping was established on the basis of multifield coupling theory. The fuzzy-precise Bayesian network was then applied to monitor the security of passengers in the shipping process. The model was applied to monitor passenger safety during shipping at a shipping company in Hainan, and its effectiveness was examined. This work provides guidance for guaranteeing the security of passengers during shipping.

  14. Bayesian model reduction and empirical Bayes for group (DCM) studies

    PubMed Central

    Friston, Karl J.; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E.; van Wijk, Bernadette C.M.; Ziegler, Gabriel; Zeidman, Peter

    2016-01-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. PMID:26569570

  15. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties that provide remarkable results. In addition, the Bayesian method shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. Lastly, the results showed that there is a negative effect between rubber prices and stock market prices for all selected countries.
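
    The component-selection step can be sketched as follows. Here sklearn's EM-fitted GaussianMixture is a convenient frequentist stand-in for the paper's Bayesian fit, and the data are simulated toy values rather than the rubber/stock series:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)
        # Toy stand-in for returns data: two latent regimes
        data = np.concatenate([rng.normal(-1.0, 0.5, 300),
                               rng.normal(1.5, 0.8, 200)]).reshape(-1, 1)

        # Choose the number of mixture components by BIC (smaller is better),
        # as the paper does before fitting the k-component model.
        for k in range(1, 5):
            gm = GaussianMixture(n_components=k, random_state=0).fit(data)
            print(k, "BIC =", round(gm.bic(data), 1))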

  17. Philosophy and the practice of Bayesian statistics

    PubMed Central

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2015-01-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. PMID:22364575

  19. Bayesian statistics in medicine: a 25 year review.

    PubMed

    Ashby, Deborah

    2006-11-15

    This review examines the state of Bayesian thinking as Statistics in Medicine was launched in 1982, reflecting particularly on its applicability and uses in medical research. It then looks at each subsequent five-year epoch, with a focus on papers appearing in Statistics in Medicine, putting these in the context of major developments in Bayesian thinking and computation with reference to important books, landmark meetings and seminal papers. It charts the growth of Bayesian statistics as it is applied to medicine and makes predictions for the future. From sparse beginnings, where Bayesian statistics was barely mentioned, Bayesian statistics has now permeated all the major areas of medical statistics, including clinical trials, epidemiology, meta-analyses and evidence synthesis, spatial modelling, longitudinal modelling, survival modelling, molecular genetics and decision-making in respect of new technologies.

  20. With or without you: predictive coding and Bayesian inference in the brain

    PubMed Central

    Aitchison, Laurence; Lengyel, Máté

    2018-01-01

    Two theoretical ideas have emerged recently with the ambition to provide a unifying functional explanation of neural population coding and dynamics: predictive coding and Bayesian inference. Here, we describe the two theories and their combination into a single framework: Bayesian predictive coding. We clarify how the two theories can be distinguished, despite sharing core computational concepts and addressing an overlapping set of empirical phenomena. We argue that predictive coding is an algorithmic / representational motif that can serve several different computational goals of which Bayesian inference is but one. Conversely, while Bayesian inference can utilize predictive coding, it can also be realized by a variety of other representations. We critically evaluate the experimental evidence supporting Bayesian predictive coding and discuss how to test it more directly. PMID:28942084

  1. Bayesian survival analysis in clinical trials: What methods are used in practice?

    PubMed

    Brard, Caroline; Le Teuff, Gwénaël; Le Deley, Marie-Cécile; Hampson, Lisa V

    2017-02-01

    Background Bayesian statistics are an appealing alternative to the traditional frequentist approach to designing, analysing, and reporting of clinical trials, especially in rare diseases. Time-to-event endpoints are widely used in many medical fields. There are additional complexities to designing Bayesian survival trials which arise from the need to specify a model for the survival distribution. The objective of this article was to critically review the use and reporting of Bayesian methods in survival trials. Methods A systematic review of clinical trials using Bayesian survival analyses was performed through PubMed and Web of Science databases. This was complemented by a full text search of the online repositories of pre-selected journals. Cost-effectiveness, dose-finding studies, meta-analyses, and methodological papers using clinical trials were excluded. Results In total, 28 articles met the inclusion criteria, 25 were original reports of clinical trials and 3 were re-analyses of a clinical trial. Most trials were in oncology (n = 25), were randomised controlled (n = 21) phase III trials (n = 13), and half considered a rare disease (n = 13). Bayesian approaches were used for monitoring in 14 trials and for the final analysis only in 14 trials. In the latter case, Bayesian survival analyses were used for the primary analysis in four cases, for the secondary analysis in seven cases, and for the trial re-analysis in three cases. Overall, 12 articles reported fitting Bayesian regression models (semi-parametric, n = 3; parametric, n = 9). Prior distributions were often incompletely reported: 20 articles did not define the prior distribution used for the parameter of interest. Over half of the trials used only non-informative priors for monitoring and the final analysis (n = 12) when it was specified. Indeed, no articles fitting Bayesian regression models placed informative priors on the parameter of interest. The prior for the treatment effect was based on historical data in only four trials. Decision rules were pre-defined in eight cases when trials used Bayesian monitoring, and in only one case when trials adopted a Bayesian approach to the final analysis. Conclusion Few trials implemented a Bayesian survival analysis and few incorporated external data into priors. There is scope to improve the quality of reporting of Bayesian methods in survival trials. Extension of the Consolidated Standards of Reporting Trials statement for reporting Bayesian clinical trials is recommended.
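
    For the simplest parametric case, a Bayesian survival analysis reduces to a conjugate update. The sketch below uses an exponential hazard with simulated, administratively censored data; the trials in the review mostly fit richer semi-parametric or parametric models, and an informative prior based on historical data could replace the weak a0, b0 chosen here:

        import numpy as np

        rng = np.random.default_rng(9)
        n = 80
        true_rate = 0.2
        t_event = rng.exponential(1 / true_rate, size=n)   # latent event times
        t_cens = rng.uniform(0, 10, size=n)                # censoring times
        time = np.minimum(t_event, t_cens)
        event = (t_event <= t_cens).astype(int)

        # Exponential likelihood with conjugate Gamma(a0, b0) prior on the hazard:
        # posterior is Gamma(a0 + number of events, b0 + total follow-up time).
        a0, b0 = 0.5, 1.0
        a_post = a0 + event.sum()
        b_post = b0 + time.sum()

        draws = rng.gamma(a_post, 1 / b_post, size=100_000)
        print("posterior median hazard:", np.median(draws))
        print("95% credible interval:", np.percentile(draws, [2.5, 97.5]))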

  2. Studies in the Theory of Quantum Games

    NASA Astrophysics Data System (ADS)

    Iqbal, Azhar

    2005-03-01

    The theory of quantum games is a new area of investigation that has gone through rapid development during the last few years. The initial motivation for playing games in the quantum world comes from the possibility of re-formulating quantum communication protocols and algorithms in terms of games between quantum and classical players. This possibility led to the view that quantum games have the potential to provide helpful insight into the working of quantum algorithms, and even to help in finding new ones. This thesis analyzes and compares some interesting games when played classically and quantum mechanically. A large part of the thesis concerns investigations into a refinement notion of the Nash equilibrium concept. The refinement, called an evolutionarily stable strategy (ESS), was originally introduced in the 1970s by mathematical biologists to model an evolving population using techniques borrowed from game theory. Analysis is developed around a situation where quantization changes ESSs without affecting the corresponding Nash equilibria. Effects of quantization on solution concepts other than Nash equilibrium are presented and discussed. For this purpose the notions of value of coalition, backwards-induction outcome, and subgame-perfect outcome are selected. Repeated games are known to have a different information structure than one-shot games. An investigation is presented into a possible way in which quantization changes the outcome of a repeated game. Lastly, two new suggestions are put forward to play quantum versions of classical matrix games. The first uses the association of De Broglie's waves with travelling material objects as a resource for playing a quantum game. The second suggestion concerns an EPR-type setting that directly exploits the correlations in Bell's inequalities to play a bi-matrix game.

  3. Emerging Insights into Directed Assembly: Taking Examples from Nature to Design Synthetic Processes

    NASA Astrophysics Data System (ADS)

    de Pablo, Juan J.

    There is considerable interest in controlling the assembly of polymeric materials in order to create highly ordered materials for applications. Such materials are often trapped in metastable, non-equilibrium states, and the processes through which they assemble become an important aspect of the materials design strategy. An example is provided by di-block copolymer directed self-assembly, where a decade of work has shown that, through careful choice of process variables, it is possible to create ordered structures whose degree of perfection meets the constraints of commercial semiconductor manufacturing. As impactful as that work has been, it has focused on relatively simple materials: neutral polymers consisting of two or at most three blocks. Furthermore, the samples that have been produced have been limited to relatively thin films, and the assembly has been carried out on ideal, two-dimensional substrates. The question that arises now is whether one can translate those achievements to polymeric materials having a richer sequence, to monomers that include charges, to three-dimensional substrates, or to active systems that are in a permanent non-equilibrium state. Building on discoveries from the biophysics literature, this presentation will review recent work from our group and others that explains how nature has evolved to direct the assembly of nucleic acids into intricate, fully three-dimensional macroscopic functional materials that are not only active, but also responsive to external cues. We will discuss how principles from polymer physics serve to explain those assemblies, and how one might design a new generation of synthetic systems that incorporate some of those principles.

  4. An effective medium approach to predict the apparent contact angle of drops on super-hydrophobic randomly rough surfaces.

    PubMed

    Bottiglione, F; Carbone, G

    2015-01-14

    The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of a large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, thus making it possible to treat the problem through a mean-field like approach relying on the large separation of scales. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is the Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on one single scale (sinusoidal surface) and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for the case of a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.
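
    For orientation, the two classical wetting relations behind these regimes are quoted below (standard results, with \phi_s the solid-contact fraction; the paper's mean-field calculation generalizes them to random self-affine roughness):

        % Wenzel (full wetting) and Cassie-Baxter (composite interface) relations
        \cos\theta^{*}_{\mathrm{Wenzel}} = r_{W}\,\cos\theta_{Y},
        \qquad
        \cos\theta^{*}_{\mathrm{Cassie}} = \phi_{s}\,(\cos\theta_{Y} + 1) - 1

    When r_W cos θ_Y < -1, i.e. r_W > -1/cos θ_Y for θ_Y > 90°, the Wenzel expression leaves the physical range and the apparent angle saturates near 180°, consistent with the near-perfect non-wetting limit reported above.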

  5. Equilibria of oligomeric proteins under high pressure - A theoretical description.

    PubMed

    Ingr, Marek; Kutálková, Eva; Hrnčiřík, Josef; Lange, Reinhard

    2016-12-21

    High-pressure methods have become a useful tool for studying protein structure and stability. Using them, various physico-chemical processes, including protein unfolding, aggregation, oligomer dissociation and decreases in enzyme activity, have been studied in many different proteins. Oligomeric protein dissociation is a process that can fully exploit the potential of high-pressure techniques, as high pressure shifts the equilibria to higher concentrations, making them more readily observable by spectroscopic methods. This can be especially useful when the oligomeric form is highly stable at atmospheric pressure. These applications may, however, be hindered by weaker experimental signals, by interference of the oligomerization equilibria with unfolding or aggregation of the subunits, and by a more complex theoretical description. In this study we develop mathematical models describing different kinds of oligomerization equilibria, both closed (an equilibrium of the monomer and the highest possible oligomer, without intermediates) and consecutive. Closed homooligomer equilibria are discussed for any oligomerization degree, while the more complex heterooligomer equilibria and the consecutive equilibria in both homo- and heterooligomers are treated only for dimers and trimers. In all cases, the fractions of all relevant forms are evaluated as functions of pressure and concentration. Significant points (inflection points and extremes) of the resulting transition curves, which can be determined experimentally, are evaluated as functions of pressure and/or concentration. These functions can then be used to evaluate the thermodynamic parameters of the system, i.e. the atmospheric-pressure equilibrium constants and the volume changes of the individual steps of the oligomer-dissociation process. Copyright © 2016 Elsevier Ltd. All rights reserved.
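
    A minimal numerical illustration of the simplest case treated above, a closed homodimer equilibrium D ⇌ 2M with a pressure-dependent dissociation constant K_d(p) = K_d(0)·exp(-pΔV/RT): all numerical values (K_d, ΔV, total concentration) are hypothetical, and the paper's general n-mer and heterooligomer models go well beyond this sketch.

      import numpy as np

      R, T = 8.314, 298.15                      # J/(mol K), K

      def monomer_fraction(p_MPa, c_tot=10e-6, Kd0=1e-9, dV=-80e-6):
          # dimer dissociation D <-> 2M; dV (m^3/mol) < 0 means pressure
          # favours dissociation, shifting the transition as described above
          p = p_MPa * 1e6                       # Pa
          Kd = Kd0 * np.exp(-p * dV / (R * T))  # mol/L at pressure p
          # mass balance in monomer units: c_tot = m + 2 m^2 / Kd
          m = 0.25 * Kd * (np.sqrt(1.0 + 8.0 * c_tot / Kd) - 1.0)
          return m / c_tot

      for p in (0.1, 100.0, 200.0, 400.0):      # MPa
          print(p, round(monomer_fraction(p), 3))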

  6. Feedback Controlled Colloidal Assembly at Fluid Interfaces

    NASA Astrophysics Data System (ADS)

    Bevan, Michael

    The autonomous and reversible assembly of colloidal nano- and micro- scale components into ordered configurations is often suggested as a scalable process capable of manufacturing meta-materials with exotic electromagnetic properties. As a result, there is strong interest in understanding how thermal motion, particle interactions, patterned surfaces, and external fields can be optimally coupled to robustly control the assembly of colloidal components into hierarchically structured functional meta-materials. We approach this problem by directly relating equilibrium and dynamic colloidal microstructures to kT-scale energy landscapes mediated by colloidal forces, physically and chemically patterned surfaces, multiphase fluid interfaces, and electromagnetic fields. 3D colloidal trajectories are measured in real-space and real-time with nanometer resolution using an integrated suite of evanescent wave, video, and confocal microscopy methods. Equilibrium structures are connected to energy landscapes via statistical mechanical models. The dynamic evolution of initially disordered colloidal fluid configurations into colloidal crystals in the presence of tunable interactions (electromagnetic field mediated interactions, particle-interface interactions) is modeled using a novel approach based on fitting the Fokker-Planck equation to experimental microscopy and computer simulated assembly trajectories. This approach is based on the use of reaction coordinates that capture important microstructural features of crystallization processes and quantify both statistical mechanical (free energy) and fluid mechanical (hydrodynamic) contributions. Ultimately, we demonstrate real-time control of assembly, disassembly, and repair of colloidal crystals using both open loop and closed loop control to produce perfectly ordered colloidal microstructures. This approach is demonstrated for close packed colloidal crystals of spherical particles at fluid-solid interfaces and is being extended to anisotropic particles and multiphase fluid interfaces.
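
    The Fokker-Planck fitting idea can be conveyed by estimating drift and diffusion along a reaction coordinate from trajectory increments (a Kramers-Moyal-style estimate). The double-well potential and all parameters below are invented for illustration; the actual work fits the Fokker-Planck equation to measured and simulated assembly trajectories.

      import numpy as np

      rng = np.random.default_rng(9)
      dt, n = 1e-3, 200_000
      x = np.empty(n); x[0] = 0.0
      for i in range(n - 1):                        # toy overdamped trajectory
          drift = -(4 * x[i]**3 - 4 * x[i])         # -dU/dx for U = x^4 - 2 x^2
          x[i + 1] = x[i] + drift * dt + np.sqrt(2 * 0.5 * dt) * rng.standard_normal()

      bins = np.linspace(-1.5, 1.5, 16)
      idx, dx = np.digitize(x[:-1], bins), np.diff(x)
      for k in range(1, len(bins)):
          sel = idx == k
          if sel.sum() > 500:                       # enough samples in this bin
              d1 = dx[sel].mean() / dt              # drift estimate D1(x)
              d2 = (dx[sel]**2).mean() / (2 * dt)   # diffusion estimate D2(x)
              print(round(bins[k], 1), round(d1, 2), round(d2, 3))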

  7. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference cannot take into account the imprecision inherently embedded in expert-provided information. To address this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge into the parameter estimation of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinguishably, and (3) it provides a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. An important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling, however, is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer; it is shown that for this synthetic test case the proposed approach decreases the number of required numerical simulations by an order of magnitude. The proposed approach is then applied to a real-world test case involving three-dimensional numerical modeling of SWI on Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
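
    One generic way to use an approximate posterior as a screening tool, in the spirit described above, is two-stage (delayed-acceptance) Metropolis-Hastings: a cheap surrogate filters proposals so the expensive model is run only for promising candidates, while the second stage keeps the exact posterior intact. This is a sketch with a toy one-dimensional target, not the authors' fuzzy Bayesian algorithm.

      import numpy as np

      rng = np.random.default_rng(1)

      def delayed_acceptance_mh(log_post, log_surr, x0, n_iter=5000, step=0.5):
          x, lp, ls = x0, log_post(x0), log_surr(x0)
          chain, n_full = [x], 0
          for _ in range(n_iter):
              y = x + step * rng.standard_normal()
              ls_y = log_surr(y)
              if np.log(rng.uniform()) < ls_y - ls:          # stage 1: surrogate screen
                  lp_y = log_post(y)                         # expensive model run
                  n_full += 1
                  if np.log(rng.uniform()) < (lp_y - lp) - (ls_y - ls):
                      x, lp, ls = y, lp_y, ls_y              # stage 2: exact correction
              chain.append(x)
          return np.array(chain), n_full

      log_post = lambda x: -0.5 * (x - 1.0)**2               # "expensive" posterior
      log_surr = lambda x: -0.5 * (x - 1.2)**2               # slightly biased surrogate
      chain, n_full = delayed_acceptance_mh(log_post, log_surr, 0.0)
      print(chain.mean(), n_full, "full evaluations for", chain.size, "draws")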

  8. Contextual Factors in the Use of the Present Perfect

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    1977-01-01

    In this study the inadequacies of rules governing the present perfect in isolated sentences are discussed and then two contextual factors thought to be connected with current relevance and the use of the present perfect are described. These factors are experimentally shown to influence use of the present perfect significantly. (CHK)

  9. A guide to Bayesian model selection for ecologists

    USGS Publications Warehouse

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  10. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value lay above it. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of the statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
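
    The coverage-and-bias diagnostics described here are easy to reproduce in simulation. The sketch below checks coverage and the direction of misses for a textbook Fisher-z interval for a correlation; it is a stand-in for the study's CFA estimators, with all settings invented.

      import numpy as np

      rng = np.random.default_rng(7)
      true_rho, n, n_rep = 0.5, 30, 2000
      cover = too_high = too_low = 0
      for _ in range(n_rep):
          x = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=n)
          r = np.corrcoef(x.T)[0, 1]
          z, se = np.arctanh(r), 1 / np.sqrt(n - 3)     # Fisher-z 95% interval
          lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
          if lo <= true_rho <= hi:
              cover += 1
          elif lo > true_rho:
              too_high += 1        # interval lies entirely above the true value
          else:
              too_low += 1
      print(cover / n_rep, too_high, too_low)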

  11. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.

    PubMed

    Jones, Matt; Love, Bradley C

    2011-08-01

    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.

  12. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased, which can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value lay above it. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of the statistical uncertainty that comes with the data (e.g., small samples). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  13. Prediction of Individual Serum Infliximab Concentrations in Inflammatory Bowel Disease by a Bayesian Dashboard System.

    PubMed

    Eser, Alexander; Primas, Christian; Reinisch, Sieglinde; Vogelsang, Harald; Novacek, Gottfried; Mould, Diane R; Reinisch, Walter

    2018-01-30

    Despite a robust exposure-response relationship of infliximab in inflammatory bowel disease (IBD), attempts to adjust dosing to individually predicted serum concentrations of infliximab (SICs) are lacking. Compared with labor-intensive conventional software for pharmacokinetic (PK) modeling (e.g., NONMEM), dashboards are easy-to-use programs incorporating complex Bayesian statistics to determine individual pharmacokinetics. We evaluated various infliximab detection assays and the number of samples needed to precisely forecast individual SICs using a Bayesian dashboard. We assessed long-term infliximab retention in patients being dosed concordantly versus discordantly with Bayesian dashboard recommendations. Three hundred eighty-two serum samples from 117 adult IBD patients on infliximab maintenance therapy were analyzed by 3 commercially available assays. Data from each assay were modeled using NONMEM and a Bayesian dashboard. PK parameter precision and residual variability were assessed. Forecast concentrations from both systems were compared with observed concentrations. Infliximab retention was assessed by prediction for dose intensification via the Bayesian dashboard versus real-life practice. Forecast precision of SICs varied between detection assays. At least 3 SICs from a reliable assay are needed for an accurate forecast. The Bayesian dashboard performed similarly to NONMEM in predicting SICs. Patients dosed concordantly with Bayesian dashboard recommendations had a significantly longer median drug survival than those dosed discordantly (51.5 versus 4.6 months, P < .0001). The Bayesian dashboard helps to assess the diagnostic performance of infliximab detection assays. Three SICs, rather than a single one, provide sufficient information for individualized dose adjustment when incorporated into the Bayesian dashboard. Treatment adjusted to forecasted SICs is associated with longer drug retention of infliximab. © 2018, The American College of Clinical Pharmacology.
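
    The kind of individual forecasting described above can be sketched with a one-compartment grid posterior: a population prior on clearance is updated with a handful of measured concentrations, and the individualized parameter then forecasts the next level. Every number, the model, and the error structure here are hypothetical simplifications of what a commercial dashboard implements.

      import numpy as np

      dose, V = 400.0, 20.0                      # mg, L (volume treated as known)
      t_obs = np.array([24.0, 48.0, 72.0])       # h, sampling times
      c_obs = np.array([12.0, 7.5, 4.2])         # mg/L, the three measured levels

      cl = np.linspace(0.05, 1.0, 400)           # candidate clearances, L/h
      # lognormal population prior centred on 0.25 L/h (30% CV)
      log_prior = -0.5 * ((np.log(cl) - np.log(0.25)) / 0.3)**2
      # one-compartment IV bolus prediction, lognormal residual error (20% CV)
      pred = dose / V * np.exp(-np.outer(cl, t_obs) / V)
      log_lik = -0.5 * (((np.log(c_obs) - np.log(pred)) / 0.2)**2).sum(axis=1)
      post = np.exp(log_prior + log_lik); post /= post.sum()

      cl_map = cl[np.argmax(post)]
      print("MAP clearance:", round(cl_map, 3), "L/h")
      print("forecast C(96 h):", round(dose / V * np.exp(-cl_map * 96.0 / V), 2))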

  14. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  15. Universal Darwinism As a Process of Bayesian Inference.

    PubMed

    Campbell, John O

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
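
    The central equivalence is compact enough to show directly: the discrete replicator equation p_i' = p_i w_i / Σ_j p_j w_j is Bayes' rule with type frequencies as the prior and relative fitness as the likelihood. The frequencies and fitnesses below are arbitrary.

      import numpy as np

      p = np.array([0.6, 0.3, 0.1])    # current type frequencies ("prior")
      w = np.array([1.0, 1.5, 2.0])    # fitnesses ("likelihood" of persisting)
      for gen in range(5):
          p = p * w / (p @ w)          # one generation = one Bayesian update
          print(gen, p.round(3))       # mass accumulates on the fittest type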

  16. Universal Darwinism As a Process of Bayesian Inference

    PubMed Central

    Campbell, John O.

    2016-01-01

    Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438

  17. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    NASA Astrophysics Data System (ADS)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches deals well with the impact of high flows in hydrological modeling. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by our new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates as measured by the Nash-Sutcliffe efficiency, and provides the best uncertainty estimates for low, medium and entire flows compared with the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
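
    All three likelihood functions named above share the same core: residuals between observed and simulated streamflow are modeled as a first-order autoregressive Gaussian process. A sketch of the time-constant variant (akin to Model 1), which could be plugged into a Metropolis-Hastings sampler, follows; obs and sim are placeholders for data and a hydrological model run.

      import numpy as np

      def ar1_loglik(obs, sim, phi, sigma):
          # exact Gaussian AR(1) log-likelihood of residuals e_t = obs_t - sim_t:
          # e_1 ~ N(0, sigma^2/(1-phi^2)), e_t | e_{t-1} ~ N(phi e_{t-1}, sigma^2)
          e = obs - sim
          ll = -0.5 * (np.log(2 * np.pi * sigma**2 / (1 - phi**2))
                       + e[0]**2 * (1 - phi**2) / sigma**2)
          eta = e[1:] - phi * e[:-1]                       # innovations
          ll += -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + eta**2 / sigma**2)
          return ll

      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 2.0, size=365)                  # fake daily discharge
      sim = obs + rng.normal(0.0, 0.8, size=365)           # fake model output
      print(ar1_loglik(obs, sim, phi=0.4, sigma=0.8))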

  18. Enhancements to the Bayesian Infrasound Source Location Method

    DTIC Science & Technology

    2012-09-01

    We report on R&D that is enabling enhancements to the Bayesian Infrasound Source Location (BISL) method for infrasound event location.

  19. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
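
    The maximum-entropy side of the ensemble refinement described above can be sketched for a single ensemble-averaged observable: reference weights are perturbed as little as possible (in relative-entropy terms) until the reweighted average matches experiment. The observable, its target value, and the noise-free restraint are invented; the paper's posterior additionally balances this against experimental error through a chi-squared term.

      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(0)
      f = rng.normal(1.0, 0.5, size=1000)       # observable over sampled configurations
      w0 = np.full(f.size, 1.0 / f.size)        # reference ("prior") ensemble weights
      f_exp = 1.3                               # experimental ensemble average

      def gap(lam):
          w = w0 * np.exp(lam * f); w /= w.sum()
          return w @ f - f_exp

      lam = brentq(gap, -50.0, 50.0)            # Lagrange multiplier for the restraint
      w = w0 * np.exp(lam * f); w /= w.sum()    # maximum-entropy reweighted ensemble
      print(lam, w @ f)                         # reweighted average hits f_exp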

  20. Daniel Goodman’s empirical approach to Bayesian statistics

    USGS Publications Warehouse

    Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina

    2016-01-01

    Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
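
    Goodman's empirical-prior construction can be sketched directly: a histogram of estimates from "similar" past cases serves as the prior, and case-specific data enter through the likelihood. The survival rates and counts below are invented, and the spiky histogram prior illustrates exactly the practical difficulty he acknowledged.

      import numpy as np

      past_rates = np.array([0.78, 0.84, 0.80, 0.88, 0.75, 0.83, 0.81, 0.86])
      grid = np.linspace(0.5, 1.0, 201)
      hist, edges = np.histogram(past_rates, bins=10, range=(0.5, 1.0), density=True)
      prior = hist[np.clip(np.digitize(grid, edges) - 1, 0, hist.size - 1)]

      k, n = 46, 52                                  # current case: 46 of 52 survived
      lik = grid**k * (1.0 - grid)**(n - k)          # binomial likelihood
      post = prior * lik
      post /= np.trapz(post, grid)                   # normalize over the grid
      print(grid[np.argmax(post)])                   # posterior mode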

  1. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  2. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of being a much closer approximation to the reference posterior in all cases. Hence this limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce the reference prior extremely well for any background prior. It can thus be useful in applications requiring the reference prior to be evaluated a very large number of times.
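
    For orientation, the background-free analogue is standard: with the Jeffreys/reference prior p(s) ∝ s^(-1/2) for a Poisson mean, observing n events gives a Gamma(n + 1/2, 1) posterior. The snippet below computes it for a hypothetical count; the paper's signal-plus-background limiting form likewise yields a Gamma density, with parameters tied to the observed counts and the background knowledge.

      import numpy as np
      from scipy.stats import gamma

      n = 7                                   # hypothetical observed counts
      post = gamma(n + 0.5)                   # Gamma(shape = n + 1/2, unit rate)
      lo, hi = post.ppf([0.16, 0.84])         # central 68% credible interval
      print(post.mean(), (round(lo, 2), round(hi, 2)))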

  3. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

    This study presents a novel Bayesian scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfactory. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm³ cm⁻³. This RMSE value reduces to less than 0.02 cm³ cm⁻³ for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as the simplified modeling of the bulk soil electrical conductivity profile. Important issues to be addressed in future work include the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature dependence of the coaxial cable properties, and the definition of an appropriate statistical model of the residual errors.
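
    The regularizing role of the GMRF prior can be sketched with a first-order (random-walk) precision matrix on a 70-cell profile: neighbouring moisture values are tied together, so smooth profiles receive higher prior density. The precision hyperparameter and the first-order choice are ours for illustration.

      import numpy as np

      n = 70                                  # 1-cm cells along the probe
      D = np.diff(np.eye(n), axis=0)          # (n-1) x n first-difference operator
      tau = 100.0                             # smoothing precision (hyperparameter)
      Q = tau * D.T @ D                       # GMRF precision (inverse covariance)

      def log_prior(theta):
          return -0.5 * theta @ Q @ theta     # intrinsic GMRF log-density + const

      rng = np.random.default_rng(0)
      smooth = np.linspace(0.10, 0.30, n)                 # gently varying profile
      rough = smooth + 0.05 * rng.standard_normal(n)      # same trend plus noise
      print(log_prior(smooth), log_prior(rough))          # smooth is favoured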

  4. The Chandra Source Catalog 2.0: Early Cross-matches

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    Cross-matching the Chandra Source Catalog (CSC) with other catalogs presents considerable challenges, since the Point Spread Function (PSF) of the Chandra X-ray Observatory varies significantly over the field of view. For the second release of the CSC (CSC2) we have been developing a cross-match tool based on the Bayesian algorithms of Budavari, Heinis, and Szalay (ApJ 679, 301 and 705, 739), making use of the error ellipses for the derived positions of the sources. However, calculating match probabilities only on the basis of error ellipses breaks down when the PSFs are significantly different. Not only can bona fide matches easily be missed, but the scene is also muddied by ambiguous multiple matches. These are issues that are not commonly addressed in cross-match tools. We have applied a satisfactory modification to the algorithm that, although not perfect, ameliorates the problems for the vast majority of such cases. We will present some early cross-matches of the CSC2 catalog with obvious candidate catalogs and report on the determination of the absolute astrometric error of the CSC2 based on such cross-matches. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
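
    The underlying two-catalogue computation is compact. For circular Gaussian positional errors in the small-angle limit, the Budavari-Szalay Bayes factor for "same source" versus "chance alignment" is B = 2/(σ1² + σ2²) · exp(-ψ²/(2(σ1² + σ2²))), with angles in radians; the elliptical-error version used for CSC2 generalizes the quadratic form in the exponent. The separation and errors below are made up.

      import numpy as np

      def bayes_factor(psi_arcsec, sig1_arcsec, sig2_arcsec):
          # two-catalogue Bayes factor, circular Gaussian errors, flat-sky limit
          to_rad = np.pi / (180.0 * 3600.0)
          psi = psi_arcsec * to_rad
          s2 = (sig1_arcsec**2 + sig2_arcsec**2) * to_rad**2
          return (2.0 / s2) * np.exp(-psi**2 / (2.0 * s2))

      # 0.5 arcsec separation with 0.3 and 0.4 arcsec errors: strong evidence
      print(f"{bayes_factor(0.5, 0.3, 0.4):.3e}")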

  5. Temporally consistent probabilistic detection of new multiple sclerosis lesions in brain MRI.

    PubMed

    Elliott, Colm; Arnold, Douglas L; Collins, D Louis; Arbel, Tal

    2013-08-01

    Detection of new Multiple Sclerosis (MS) lesions on magnetic resonance imaging (MRI) is important as a marker of disease activity and as a potential surrogate for relapses. We propose an approach where sequential scans are jointly segmented, to provide a temporally consistent tissue segmentation while remaining sensitive to newly appearing lesions. The method uses a two-stage classification process: 1) a Bayesian classifier provides a probabilistic brain tissue classification at each voxel of reference and follow-up scans, and 2) a random-forest based lesion-level classification provides a final identification of new lesions. Generative models are learned based on 364 scans from 95 subjects from a multi-center clinical trial. The method is evaluated on sequential brain MRI of 160 subjects from a separate multi-center clinical trial, and is compared to 1) semi-automatically generated ground truth segmentations and 2) fully manual identification of new lesions generated independently by nine expert raters on a subset of 60 subjects. For new lesions greater than 0.15 cc in size, the classifier has near perfect performance (99% sensitivity, 2% false detection rate), as compared to ground truth. The proposed method was also shown to exceed the performance of any one of the nine expert manual identifications.

  6. The Perfect Aspect as a State of Being.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    English as second language (ESL) learners often avoid using the present perfect or use it improperly. In contrast with native speakers of English sampled from newspaper editorials, of whom 75 percent used the present perfect, only 22 percent of ESL college students used the present perfect correctly. This avoidance is due in part to lack of…

  7. Bayesian model reduction and empirical Bayes for group (DCM) studies.

    PubMed

    Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter

    2016-03-01

    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Estimating virus occurrence using Bayesian modeling in multiple drinking water systems of the United States

    USGS Publications Warehouse

    Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P

    2017-01-01

    incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.

  9. The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.

    PubMed

    Norris, Dennis

    2006-04-01

    This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
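
    The flavour of the model can be conveyed with a toy sequential version: noisy perceptual samples arrive one at a time and the posterior over a miniature lexicon, seeded with word-frequency priors, is updated until one candidate dominates. The three-word lexicon, its crude numeric letter code, and all parameters are invented; higher-frequency words reach the decision threshold in fewer samples, mirroring the frequency effect discussed above.

      import numpy as np

      rng = np.random.default_rng(11)
      lexicon = {"cat": np.array([3.0, 1.0, 20.0]),
                 "cot": np.array([3.0, 15.0, 20.0]),
                 "dog": np.array([4.0, 15.0, 7.0])}     # toy perceptual codes
      freq = {"cat": 0.6, "cot": 0.1, "dog": 0.3}       # word-frequency priors
      true_word, sigma = "cat", 8.0

      log_post = {w: np.log(f) for w, f in freq.items()}
      for step in range(1, 50):
          sample = lexicon[true_word] + sigma * rng.standard_normal(3)
          for w in lexicon:                             # Gaussian log-likelihoods
              log_post[w] += -0.5 * np.sum((sample - lexicon[w])**2) / sigma**2
          z = np.logaddexp.reduce(list(log_post.values()))
          probs = {w: float(np.exp(lp - z)) for w, lp in log_post.items()}
          if max(probs.values()) > 0.95:                # decision threshold
              print(step, probs)
              break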

  10. Power in Bayesian Mediation Analysis for Small Sample Research

    PubMed Central

    Miočević, Milica; MacKinnon, David P.; Levy, Roy

    2018-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296
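
    The credibility intervals compared above are easy to form once posterior draws for the two paths exist: multiply draws of a and b and read off percentiles. The posterior means and SDs below are hypothetical; the point of the construction is that the product's skewed posterior is handled without any normal-theory approximation.

      import numpy as np

      rng = np.random.default_rng(42)
      a = rng.normal(0.39, 0.10, 50_000)     # posterior draws for the X -> M path
      b = rng.normal(0.35, 0.12, 50_000)     # posterior draws for the M -> Y path
      ab = a * b                             # posterior of the mediated effect
      lo, hi = np.percentile(ab, [2.5, 97.5])
      print(round(ab.mean(), 3), (round(lo, 3), round(hi, 3)))   # interval excludes 0 here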

  11. Power in Bayesian Mediation Analysis for Small Sample Research.

    PubMed

    Miočević, Milica; MacKinnon, David P; Levy, Roy

    2017-01-01

    It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N ≤ 200. Bayesian methods with diffuse priors had power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N ≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results.

  12. A Primer on Bayesian Analysis for Experimental Psychopathologists

    PubMed Central

    Krypotos, Angelos-Miltiadis; Blanken, Tessa F.; Arnaudova, Inna; Matzke, Dora; Beckers, Tom

    2016-01-01

    The principal goals of experimental psychopathology (EPP) research are to offer insights into the pathogenic mechanisms of mental disorders and to provide a stable ground for the development of clinical interventions. The main message of the present article is that those goals are better served by the adoption of Bayesian statistics than by the continued use of null-hypothesis significance testing (NHST). In the first part of the article we list the main disadvantages of NHST and explain why those disadvantages limit the conclusions that can be drawn from EPP research. Next, we highlight the advantages of Bayesian statistics. To illustrate, we then pit NHST and Bayesian analysis against each other using an experimental data set from our lab. Finally, we discuss some challenges when adopting Bayesian statistics. We hope that the present article will encourage experimental psychopathologists to embrace Bayesian statistics, which could strengthen the conclusions drawn from EPP research. PMID:28748068

  13. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    NASA Astrophysics Data System (ADS)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

    Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. Changes in magnitude and the space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox-transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, using 27 years of monthly precipitation data obtained from 51 stations in Pakistan. The results of the transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides more accurate estimates than the non-transformed hierarchical Bayesian method.
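
    The Box-Cox step that precedes the hierarchical interpolation is routine and worth seeing concretely: estimate the transform on skewed rainfall, work in the transformed (approximately Gaussian) space, and back-transform predictions. The rainfall values below are invented and the "prediction" is just a placeholder for the kriging step.

      import numpy as np
      from scipy import stats

      rain = np.array([3.0, 12.0, 45.0, 7.0, 88.0, 23.0, 5.0, 140.0, 31.0, 16.0])
      z, lam = stats.boxcox(rain)             # transformed data and fitted lambda
      print(round(lam, 3), z.round(2))

      z_pred = z.mean()                       # stand-in for a spatio-temporal prediction
      back = np.exp(z_pred) if abs(lam) < 1e-12 else (lam * z_pred + 1.0)**(1.0 / lam)
      print(round(back, 2))                   # prediction on the original scale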

  14. A default Bayesian hypothesis test for mediation.

    PubMed

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  15. Is probabilistic bias analysis approximately Bayesian?

    PubMed Central

    MacLehose, Richard F.; Gustafson, Paul

    2011-01-01

    Case-control studies are particularly susceptible to differential exposure misclassification when exposure status is determined following incident case status. Probabilistic bias analysis methods have been developed as ways to adjust standard effect estimates based on the sensitivity and specificity of exposure misclassification. The iterative sampling method advocated in probabilistic bias analysis bears a distinct resemblance to a Bayesian adjustment; however, it is not identical. Furthermore, without a formal theoretical framework (Bayesian or frequentist), the results of a probabilistic bias analysis remain somewhat difficult to interpret. We describe, both theoretically and empirically, the extent to which probabilistic bias analysis can be viewed as approximately Bayesian. While the differences between probabilistic bias analysis and Bayesian approaches to misclassification can be substantial, these situations often involve unrealistic prior specifications and are relatively easy to detect. Outside of these special cases, probabilistic bias analysis and Bayesian approaches to exposure misclassification in case-control studies appear to perform equally well. PMID:22157311
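
    The iterative sampling method at issue can be sketched for nondifferential exposure misclassification in a case-control study: draw sensitivity and specificity from priors, back-correct the observed 2x2 table, and summarize the distribution of corrected odds ratios. The table counts and prior ranges are hypothetical.

      import numpy as np

      rng = np.random.default_rng(5)
      a, b = 215, 1449       # observed exposed / unexposed cases (hypothetical)
      c, d = 668, 4296       # observed exposed / unexposed controls

      ors = []
      for _ in range(20_000):
          se = rng.uniform(0.75, 0.95)        # prior for sensitivity
          sp = rng.uniform(0.90, 0.99)        # prior for specificity
          # invert  observed = Se*true + (1 - Sp)*(N - true)  for the true counts
          A = (a - (1.0 - sp) * (a + b)) / (se + sp - 1.0)
          C = (c - (1.0 - sp) * (c + d)) / (se + sp - 1.0)
          B, D = (a + b) - A, (c + d) - C
          if min(A, B, C, D) > 0:             # discard impossible corrections
              ors.append((A * D) / (B * C))
      print(np.median(ors), np.percentile(ors, [2.5, 97.5]))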

  16. Computation of Thermally Perfect Compressible Flow Properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake

    1996-01-01

    A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c_p (specific heat at constant pressure) expressed as a polynomial function of temperature, and developed into a computer program referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. The accuracy of the TPG code in both the calorically perfect and the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code over the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
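
    The core idea, c_p as a polynomial in temperature so that γ and derived quantities vary with T, fits in a few lines. The quartic coefficients below are a common curve fit for air (valid roughly 100-1000 K), not the TPG code's own fits, and the speed-of-sound printout is just one example of a derived property.

      import numpy as np

      R = 287.05                               # J/(kg K), gas constant for air
      cp_coeffs = [1.9327e-10, -7.9999e-7, 1.1407e-3, -4.4890e-1, 1.0575e3]

      def cp(T):
          return np.polyval(cp_coeffs, T)      # J/(kg K), polynomial in T

      def gamma(T):
          return cp(T) / (cp(T) - R)           # temperature-dependent ratio

      for T in (300.0, 600.0, 1000.0):
          a = np.sqrt(gamma(T) * R * T)        # local speed of sound, m/s
          print(T, round(float(cp(T)), 1), round(float(gamma(T)), 4), round(float(a), 1))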

  17. Fundamentals and Recent Developments in Approximate Bayesian Computation

    PubMed Central

    Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka

    2017-01-01

    Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
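
    The classical rejection algorithm reviewed above fits in a dozen lines: draw a parameter from the prior, simulate, and keep the draw if a summary statistic lands within ε of the observed one. The toy model (a Poisson rate summarized by the sample mean) and the tolerance are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(2)
      obs = rng.poisson(4.0, size=50)           # "observed" data, true rate 4
      s_obs = obs.mean()                        # summary statistic

      accepted = []
      while len(accepted) < 2000:
          lam = rng.uniform(0.0, 20.0)          # draw from the prior
          sim = rng.poisson(lam, size=obs.size) # simulate from the model
          if abs(sim.mean() - s_obs) < 0.2:     # keep if within tolerance epsilon
              accepted.append(lam)

      post = np.array(accepted)                 # approximate posterior sample
      print(post.mean(), np.percentile(post, [2.5, 97.5]))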

  18. Bayesian multimodel inference for dose-response studies

    USGS Publications Warehouse

    Link, W.A.; Albers, P.H.

    2007-01-01

    Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.

  19. CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data.

    PubMed

    Ainsbury, Elizabeth A; Vinnikov, Volodymyr; Puig, Pedro; Maznyk, Nataliya; Rothkamm, Kai; Lloyd, David C

    2013-08-30

    A number of authors have suggested that a Bayesian approach may be most appropriate for analysis of cytogenetic radiation dosimetry data. In the Bayesian framework, probability of an event is described in terms of previous expectations and uncertainty. Previously existing, or prior, information is used in combination with experimental results to infer probabilities or the likelihood that a hypothesis is true. It has been shown that the Bayesian approach increases both the accuracy and quality assurance of radiation dose estimates. New software entitled CytoBayesJ has been developed with the aim of bringing Bayesian analysis to cytogenetic biodosimetry laboratory practice. CytoBayesJ takes a number of Bayesian or 'Bayesian like' methods that have been proposed in the literature and presents them to the user in the form of simple user-friendly tools, including testing for the most appropriate model for distribution of chromosome aberrations and calculations of posterior probability distributions. The individual tools are described in detail and relevant examples of the use of the methods and the corresponding CytoBayesJ software tools are given. In this way, the suitability of the Bayesian approach to biological radiation dosimetry is highlighted and its wider application encouraged by providing a user-friendly software interface and manual in English and Russian. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. A Bayesian approach to meta-analysis of plant pathology studies.

    PubMed

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard), which was evaluated in only seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
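
    The simple random-effects model described above can be sketched on a coarse grid: study effects y_i ~ N(μ, v_i + τ²) with flat priors on μ and τ, giving a joint posterior that can be summarized directly. The effect sizes and variances below are invented log response ratios, not the fire blight data.

      import numpy as np

      y = np.array([-0.42, -0.20, -0.55, -0.10, -0.31])   # study effect sizes
      v = np.array([0.040, 0.055, 0.030, 0.080, 0.025])   # within-study variances

      mu = np.linspace(-1.5, 1.0, 401)
      tau = np.linspace(0.0, 1.5, 301)
      M, T = np.meshgrid(mu, tau)
      var = v[:, None, None] + T**2                       # total variance per study
      loglik = -0.5 * (np.log(var) + (y[:, None, None] - M)**2 / var).sum(axis=0)
      post = np.exp(loglik - loglik.max())
      post /= post.sum()                                  # joint posterior on the grid

      print((post * M).sum())                             # posterior mean of mu
      print(post[:, mu >= 0].sum())                       # posterior P(mu >= 0)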

  1. Bayesian Mediation Analysis

    ERIC Educational Resources Information Center

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  2. A molecular dynamics study of the effect of thermal boundary conductance on thermal transport of ideal crystal of n-alkanes with different number of carbon atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rastgarkafshgarkolaei, Rouzbeh; Zeng, Yi; Khodadadi, J. M., E-mail: khodajm@auburn.edu

    2016-05-28

    Phase change materials such as n-alkanes that exhibit desirable characteristics such as high latent heat, chemical stability, and negligible supercooling are widely used in thermal energy storage applications. However, n-alkanes have the drawback of low thermal conductivity values. The low thermal conductivity of n-alkanes is linked to the formation of randomly oriented nano-domains of molecules in their solid structure, which is responsible for excessive phonon scattering at the grain boundaries. Thus, understanding the thermal boundary conductance at the grain boundaries can be crucial for improving the effectiveness of thermal storage systems. The concept of the ideal crystal is proposed in this paper, which describes a simplified model in which all the nano-domains of long-chain n-alkanes are artificially aligned perfectly in one direction. In order to study thermal transport in the ideal crystal of long-chain n-alkanes, four (4) systems (C₂₀H₄₂, C₂₄H₅₀, C₂₆H₅₄, and C₃₀H₆₂) are investigated by molecular dynamics simulations. Thermal boundary conductance between the layers of ideal crystals is determined using both non-equilibrium molecular dynamics (NEMD) and equilibrium molecular dynamics (EMD) simulations. Both NEMD and EMD simulations exhibit no significant change in thermal conductance with the molecular length. However, the values obtained from the EMD simulations are less than the values from NEMD simulations, with the ratio being nearly three (3) in most cases. This difference is due to the nature of EMD simulations, where all the phonons are assumed to be in equilibrium at the interface. The thermal conductivity of the n-alkanes in three structures, namely liquid, solid, and ideal crystal, is investigated utilizing NEMD simulations. Our results exhibit a very slight rise in thermal conductivity values as the number of carbon atoms of the chain increases. The key understanding is that thermal transport can be significantly altered by how the molecules and the nano-domains are oriented in the structure rather than by the length of the n-alkane molecules.

  3. PROBING X-RAY ABSORPTION AND OPTICAL EXTINCTION IN THE INTERSTELLAR MEDIUM USING CHANDRA OBSERVATIONS OF SUPERNOVA REMNANTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foight, Dillon R.; Slane, Patrick O.; Güver, Tolga

    We present a comprehensive study of interstellar X-ray extinction using the extensive Chandra supernova remnant (SNR) archive and use our results to refine the empirical relation between the hydrogen column density and optical extinction. In our analysis, we make use of the large, uniform data sample to assess various systematic uncertainties in the measurement of the interstellar X-ray absorption. Specifically, we address systematic uncertainties that originate from (i) the emission models used to fit SNR spectra; (ii) the spatial variations within individual remnants; (iii) the physical conditions of the remnant such as composition, temperature, and non-equilibrium regions; and (iv) the model used for the absorption of X-rays in the interstellar medium. Using a Bayesian framework to quantify these systematic uncertainties, and combining the resulting hydrogen column density measurements with the measurements of optical extinction toward the same remnants, we find the empirical relation N_H = (2.87 ± 0.12) × 10²¹ A_V cm⁻², which is significantly higher than previous measurements.
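
    For readers who want to apply the relation, the conversion is a single multiplication; the sketch below assumes the coefficient quoted above, with N_H in cm⁻² and A_V in magnitudes.

    ```python
    def nh_from_av(av_mag, coeff=2.87e21, coeff_err=0.12e21):
        """Hydrogen column density N_H (cm^-2) from optical extinction A_V (mag),
        via the empirical relation N_H = (2.87 +/- 0.12) x 10^21 A_V.
        Only the coefficient uncertainty is propagated here."""
        return coeff * av_mag, coeff_err * av_mag

    nh, err = nh_from_av(1.5)
    print(f"A_V = 1.5 mag -> N_H = ({nh:.2e} +/- {err:.2e}) cm^-2")
    ```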

  4. The Dynamics of Democracy, Development and Cultural Values

    PubMed Central

    Spaiser, Viktoria; Ranganathan, Shyam; Mann, Richard P.; Sumpter, David J. T.

    2014-01-01

    Over the past decades many countries have experienced rapid changes in their economies, their democratic institutions and the values of their citizens. Comprehensive data measuring these changes across very different countries have recently become openly available. Between-country similarities suggest common underlying dynamics in how countries develop in terms of economy, democracy and cultural values. We apply a novel Bayesian dynamical systems approach to identify the model which best captures the complex, mainly non-linear dynamics that underlie these changes. We show that the level of the Human Development Index (HDI) in a country drives first democracy and then higher emancipation of citizens. This change occurs once countries pass a certain threshold in HDI. The data also suggest that there is a limit to the growth of wealth, set by higher emancipation. Having reached a high level of democracy and emancipation, societies tend towards an equilibrium that does not support further economic growth. Our findings give strong empirical evidence against a popular political science theory, known as the Human Development Sequence. Contrary to this theory, we find that implementation of human rights and democratisation precede increases in emancipative values. PMID:24905920

  5. The dynamics of democracy, development and cultural values.

    PubMed

    Spaiser, Viktoria; Ranganathan, Shyam; Mann, Richard P; Sumpter, David J T

    2014-01-01

    Over the past decades many countries have experienced rapid changes in their economies, their democratic institutions and the values of their citizens. Comprehensive data measuring these changes across very different countries have recently become openly available. Between-country similarities suggest common underlying dynamics in how countries develop in terms of economy, democracy and cultural values. We apply a novel Bayesian dynamical systems approach to identify the model which best captures the complex, mainly non-linear dynamics that underlie these changes. We show that the level of the Human Development Index (HDI) in a country drives first democracy and then higher emancipation of citizens. This change occurs once countries pass a certain threshold in HDI. The data also suggest that there is a limit to the growth of wealth, set by higher emancipation. Having reached a high level of democracy and emancipation, societies tend towards an equilibrium that does not support further economic growth. Our findings give strong empirical evidence against a popular political science theory, known as the Human Development Sequence. Contrary to this theory, we find that implementation of human rights and democratisation precede increases in emancipative values.

  6. Bayesian networks for maritime traffic accident prevention: benefits and challenges.

    PubMed

    Hänninen, Maria

    2014-12-01

    Bayesian networks are quantitative modeling tools whose applications to the maritime traffic safety context are becoming more popular. This paper discusses the utilization of Bayesian networks in maritime safety modeling. Based on the literature and the author's own experience, the paper examines what Bayesian networks can offer to maritime accident prevention and safety modeling and discusses a few challenges in their application to this context. It is argued that the capability of representing rather complex, not necessarily causal but uncertain relationships makes Bayesian networks an attractive modeling tool for maritime safety and accidents. Furthermore, as maritime accident and safety data are still rather scarce and have some quality problems, the possibility of combining data with expert knowledge and the ease of updating the model after acquiring more evidence further enhance their feasibility. However, eliciting the probabilities from maritime experts can be challenging, and model validation can be tricky. It is concluded that with the utilization of several data sources, Bayesian updating, dynamic modeling, and hidden nodes for latent variables, Bayesian networks are rather well-suited tools for maritime safety management and decision making. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  8. Bayesian methods including nonrandomized study data increased the efficiency of postlaunch RCTs.

    PubMed

    Schmidt, Amand F; Klugkist, Irene; Klungel, Olaf H; Nielen, Mirjam; de Boer, Anthonius; Hoes, Arno W; Groenwold, Rolf H H

    2015-04-01

    Findings from nonrandomized studies on safety or efficacy of treatment in patient subgroups may trigger postlaunch randomized clinical trials (RCTs). In the analysis of such RCTs, results from nonrandomized studies are typically ignored. This study explores the trade-off between bias and power of Bayesian RCT analysis incorporating information from nonrandomized studies. A simulation study was conducted to compare frequentist with Bayesian analyses using noninformative and informative priors in their ability to detect interaction effects. In simulated subgroups, the effect of a hypothetical treatment differed between subgroups (odds ratio 1.00 vs. 2.33). Simulations varied in sample size, proportions of the subgroups, and specification of the priors. As expected, the results for the informative Bayesian analyses were more biased than those from the noninformative Bayesian analysis or frequentist analysis. However, because of a reduction in posterior variance, informative Bayesian analyses were generally more powerful to detect an effect. In scenarios where the informative priors were in the opposite direction of the RCT data, type 1 error rates could be 100% and power 0%. Bayesian methods incorporating data from nonrandomized studies can meaningfully increase power of interaction tests in postlaunch RCTs. Copyright © 2015 Elsevier Inc. All rights reserved.
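
    The bias-power trade-off described above can be seen in a toy conjugate calculation: a normal approximation to an RCT interaction estimate (on the log-odds-ratio scale) is pooled with an informative prior derived from nonrandomized studies. All numbers below are illustrative, not the simulation settings of the paper.

    ```python
    import numpy as np

    def posterior_normal(est, se, prior_mean, prior_sd):
        """Precision-weighted pooling of an RCT estimate (e.g., a log-odds-ratio
        interaction) with an informative prior from nonrandomized studies."""
        w_data, w_prior = 1/se**2, 1/prior_sd**2
        post_var = 1/(w_data + w_prior)
        post_mean = post_var * (w_data*est + w_prior*prior_mean)
        return post_mean, np.sqrt(post_var)

    # Illustrative numbers: RCT interaction log-OR 0.85 (SE 0.45); nonrandomized
    # studies suggest log(2.33) ~= 0.85 with SD 0.30.
    m, sd = posterior_normal(0.85, 0.45, np.log(2.33), 0.30)
    print(f"posterior {m:.2f} +/- {sd:.2f}; 95% CRI [{m-1.96*sd:.2f}, {m+1.96*sd:.2f}]")
    ```

    The shrunken posterior variance is exactly where the power gain comes from; if the prior mean pointed in the opposite direction of the truth, the same formula shows how the posterior would be pulled toward the wrong sign.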

  9. The image recognition based on neural network and Bayesian decision

    NASA Astrophysics Data System (ADS)

    Wang, Chugege

    2018-04-01

    Research on artificial neural networks began in the 1940s, and they are an important part of artificial intelligence. At present, they are a hot topic in the fields of neuroscience, computer science, brain science, mathematics, and psychology. Bayes' theorem was first published in 1763. After development in the twentieth century, it has become widespread in all areas of statistics. In recent years, owing to the solution of the problem of high-dimensional integral calculation, Bayesian statistics has been improved theoretically, solving many problems that cannot be solved by classical statistics, and it is also applied in interdisciplinary fields. In this paper, the related concepts and principles of artificial neural networks are introduced. The paper also summarizes the basic content and principles of Bayesian statistics, and combines artificial neural network technology with Bayesian decision theory, applying them to several aspects of image recognition, such as an enhanced face detection method based on a neural network and Bayesian decision, as well as image classification based on Bayesian decision. It can be seen that the combination of artificial intelligence and statistical algorithms remains a hot research topic.

  10. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    PubMed

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analyses of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win as the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to the evaluation of opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
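
    The polarized behavior is easy to reproduce in a toy simulation: generate coin tosses from a fair coin and compare two equally wrong fixed-parameter models, p = 0.45 and p = 0.55, with equal prior probabilities. A minimal sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p_true = 0.5                      # data-generating truth
    models = [0.45, 0.55]             # two equally wrong competing models

    for n in [10, 100, 1000, 10000, 100000]:
        x = rng.binomial(n, p_true)   # successes in n trials
        # log marginal likelihood of each fixed-parameter model (equal 1/2 priors)
        ll = np.array([x*np.log(p) + (n - x)*np.log(1 - p) for p in models])
        post = np.exp(ll - ll.max())
        post /= post.sum()
        print(f"n={n:7d}  P(p=0.45 | data) = {post[0]:.3f}")
    ```

    As n grows, the posterior typically piles almost all its mass on one of the two wrong models, and which one wins flips from run to run with the sampling noise.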

  11. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.

  12. Integrated Taxonomy and DNA Barcoding of Alpine Midges (Diptera: Chironomidae)

    PubMed Central

    Montagna, Matteo; Mereghetti, Valeria; Lencioni, Valeria; Rossaro, Bruno

    2016-01-01

    Rapid and efficient DNA-based tools are recommended for the evaluation of the insect biodiversity of high-altitude streams. In the present study, focused principally on larvae of the genus Diamesa Meigen 1835 (Diptera: Chironomidae), the congruence between morphological and molecular delimitation of species, as well as performance in taxonomic assignment, was evaluated. A fragment of the mitochondrial cox1 gene was obtained from 112 larvae, pupae and adults (Diamesinae, Orthocladiinae and Tanypodinae) that were collected in different mountain regions of the Alps and Apennines. On the basis of morphological characters, 102 specimens were attributed to 16 species, and the remaining ten specimens were identified to the genus level. Molecular species delimitation was performed using: i) distance-based Automatic Barcode Gap Discovery (ABGD), with no a priori assumptions on species identification; and ii) coalescent tree-based approaches such as the Generalized Mixed Yule Coalescent model, its Bayesian implementation and the Bayesian Poisson Tree Processes. The ABGD analysis, estimating an optimal intra/interspecific nucleotide distance threshold of 0.7%-1.4%, identified 23 putative species; the tree-based approaches identified 25–26 entities and provided nearly identical results. All species belonging to the zernyi, steinboecki, latitarsis, bertrami, dampfi and incallida groups, as well as the outgroup species, were recovered as separate entities, perfectly matching the identified morphospecies. In contrast, within the cinerella group, cases of discrepancy arose: i) the two morphologically separate species D. cinerella and D. tonsa are neither monophyletic nor diagnosable, exhibiting low between-taxa mean nucleotide divergence (0.94%); ii) a few cases of morphological misidentification of larvae were observed. Head capsule color is confirmed to be a valid character for discriminating larvae of D. zernyi, D. tonsa and D. cinerella, but it is here better defined as a color gradient between the setae submenti and the genal setae. DNA barcode performance was high: average accuracy was ~89% and precision ~99%. On the basis of the present data, we can thus conclude that molecular identification represents a promising tool that could be effectively adopted in evaluating the biodiversity of high-altitude streams. PMID:26938660

  13. Integrated Taxonomy and DNA Barcoding of Alpine Midges (Diptera: Chironomidae).

    PubMed

    Montagna, Matteo; Mereghetti, Valeria; Lencioni, Valeria; Rossaro, Bruno

    2016-01-01

    Rapid and efficient DNA-based tools are recommended for the evaluation of the insect biodiversity of high-altitude streams. In the present study, focused principally on larvae of the genus Diamesa Meigen 1835 (Diptera: Chironomidae), the congruence between morphological and molecular delimitation of species, as well as performance in taxonomic assignment, was evaluated. A fragment of the mitochondrial cox1 gene was obtained from 112 larvae, pupae and adults (Diamesinae, Orthocladiinae and Tanypodinae) that were collected in different mountain regions of the Alps and Apennines. On the basis of morphological characters, 102 specimens were attributed to 16 species, and the remaining ten specimens were identified to the genus level. Molecular species delimitation was performed using: i) distance-based Automatic Barcode Gap Discovery (ABGD), with no a priori assumptions on species identification; and ii) coalescent tree-based approaches such as the Generalized Mixed Yule Coalescent model, its Bayesian implementation and the Bayesian Poisson Tree Processes. The ABGD analysis, estimating an optimal intra/interspecific nucleotide distance threshold of 0.7%-1.4%, identified 23 putative species; the tree-based approaches identified 25-26 entities and provided nearly identical results. All species belonging to the zernyi, steinboecki, latitarsis, bertrami, dampfi and incallida groups, as well as the outgroup species, were recovered as separate entities, perfectly matching the identified morphospecies. In contrast, within the cinerella group, cases of discrepancy arose: i) the two morphologically separate species D. cinerella and D. tonsa are neither monophyletic nor diagnosable, exhibiting low between-taxa mean nucleotide divergence (0.94%); ii) a few cases of morphological misidentification of larvae were observed. Head capsule color is confirmed to be a valid character for discriminating larvae of D. zernyi, D. tonsa and D. cinerella, but it is here better defined as a color gradient between the setae submenti and the genal setae. DNA barcode performance was high: average accuracy was ~89% and precision ~99%. On the basis of the present data, we can thus conclude that molecular identification represents a promising tool that could be effectively adopted in evaluating the biodiversity of high-altitude streams.

  14. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  15. Advances in Bayesian Modeling in Educational Research

    ERIC Educational Resources Information Center

    Levy, Roy

    2016-01-01

    In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…

  16. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    EPA Science Inventory

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  17. A SEMIPARAMETRIC BAYESIAN MODEL FOR CIRCULAR-LINEAR REGRESSION

    EPA Science Inventory

    We present a Bayesian approach to regress a circular variable on a linear predictor. The regression coefficients are assumed to have a nonparametric distribution with a Dirichlet process prior. The semiparametric Bayesian approach gives added flexibility to the model and is usefu...

  18. Bayesian truthing as experimental verification of C4ISR sensors

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Forrester, Thomas; Romanov, Volodymyr; Wang, Wenjian; Nielsen, Thomas; Kostrzewski, Andrew

    2015-05-01

    In this paper, the general methodology for experimental verification/validation of the performance of C4ISR and other sensors is presented, based on Bayesian inference in general and binary sensors in particular. This methodology, called Bayesian Truthing, defines Performance Metrics for binary sensors in physics, optics, electronics, medicine, law enforcement, C3ISR, QC, ATR (Automatic Target Recognition), terrorism-related events, and many others. For Bayesian Truthing, the sensing medium itself is not what is truly important; it is how the decision process is affected.

  19. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the unidentifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832
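
    A minimal approximate Bayesian computation loop conveys the idea of a posterior parameter ensemble; the toy model (noisy exponential decay), the prior, and the tolerance below are illustrative stand-ins, and population annealing would anneal the tolerance over a schedule rather than fixing it as done here.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(theta, n=50):
        """Toy systems-biology stand-in: exponential decay observed with noise."""
        t = np.linspace(0, 10, n)
        return np.exp(-theta*t) + rng.normal(0, 0.05, n)

    obs = simulate(0.7)   # pretend this is the experimental data

    def abc_rejection(eps, n_draws=20000):
        """Basic ABC: keep prior draws whose simulated data lie within eps of obs."""
        kept = []
        for _ in range(n_draws):
            theta = rng.uniform(0, 2)                       # prior draw
            if np.mean((simulate(theta) - obs)**2) < eps:   # distance criterion
                kept.append(theta)
        return np.array(kept)

    ens = abc_rejection(eps=0.01)   # the "posterior parameter ensemble"
    print(f"ensemble size {ens.size}, mean {ens.mean():.2f}, sd {ens.std():.2f}")
    ```

    Running downstream simulations over the whole ensemble, rather than a single point estimate, is what lets predictions reflect parameter uncertainty.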

  20. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve the assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. First, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Second, we tested the accuracy of our model in differentiating between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores; despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients in the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i | D), of examinees' effort levels. In conclusion, both the high classification accuracy and the Bayesian individual estimates of effort may be very useful for clinicians assessing effort in medico-legal settings.
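
    The latent-group idea can be sketched as a two-component binomial model: the posterior probability that an examinee belongs to the malingering group is obtained from the 2AFC raw score by Bayes' rule. The group-level accuracies and the prior below are illustrative assumptions, not the published model's estimates.

    ```python
    from scipy.stats import binom

    def p_malingering(score, n_items, p_honest=0.95, p_maling=0.45, prior=0.5):
        """Posterior probability of the malingering group from a 2AFC raw score,
        under a two-component binomial model with assumed group accuracies."""
        like_m = binom.pmf(score, n_items, p_maling)
        like_h = binom.pmf(score, n_items, p_honest)
        return like_m * prior / (like_m * prior + like_h * (1 - prior))

    # A score near (or below) chance is far more likely under the malingering component.
    print(f"P(malingering | 21/50 correct) = {p_malingering(21, 50):.3f}")
    ```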

  1. Quantum mechanics: The Bayesian theory generalized to the space of Hermitian matrices

    NASA Astrophysics Data System (ADS)

    Benavoli, Alessio; Facchini, Alessandro; Zaffalon, Marco

    2016-10-01

    We consider the problem of gambling on a quantum experiment and enforce rational behavior by a few rules. These rules yield, in the classical case, the Bayesian theory of probability via duality theorems. In our quantum setting, they yield the Bayesian theory generalized to the space of Hermitian matrices. This very theory is quantum mechanics: in fact, we derive all four of its postulates from the generalized Bayesian theory. This implies that quantum mechanics is self-consistent. It also leads us to reinterpret the main operations in quantum mechanics as probability rules: Bayes' rule (measurement), marginalization (partial tracing), independence (tensor product). To say it with a slogan, we obtain that quantum mechanics is the Bayesian theory in the complex numbers.

  2. Defining Probability in Sex Offender Risk Assessment.

    PubMed

    Elwood, Richard W

    2016-12-01

    There is ongoing debate and confusion over using actuarial scales to predict individuals' risk of sexual recidivism. Much of the debate comes from not distinguishing Frequentist from Bayesian definitions of probability. Much of the confusion comes from applying Frequentist probability to individuals' risk. By definition, only Bayesian probability can be applied to the single case. The Bayesian concept of probability resolves most of the confusion and much of the debate in sex offender risk assessment. Although Bayesian probability is well accepted in risk assessment generally, it has not been widely used to assess the risk of sex offenders. I review the two concepts of probability and show how the Bayesian view alone provides a coherent scheme to conceptualize individuals' risk of sexual recidivism.
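
    The Bayesian single-case calculation the author advocates reduces to an odds-form Bayes update. A minimal sketch with illustrative numbers (a hypothetical 10% base rate and an actuarial score carrying a likelihood ratio of 3):

    ```python
    def posterior_risk(base_rate, likelihood_ratio):
        """Single-case Bayesian update: prior odds from the base rate, multiplied
        by the likelihood ratio attached to an actuarial score, then converted
        back to a probability."""
        prior_odds = base_rate / (1.0 - base_rate)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1.0 + post_odds)

    print(f"posterior risk = {posterior_risk(0.10, 3.0):.2f}")   # 0.25
    ```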

  3. Effects of Mitochondrial DNA Rate Variation on Reconstruction of Pleistocene Demographic History in a Social Avian Species, Pomatostomus superciliosus

    PubMed Central

    Norman, Janette A.; Blackmore, Caroline J.; Rourke, Meaghan; Christidis, Les

    2014-01-01

    Mitochondrial sequence data is often used to reconstruct the demographic history of Pleistocene populations in an effort to understand how species have responded to past climate change events. However, departures from neutral equilibrium conditions can confound evolutionary inference in species with structured populations or those that have experienced periods of population expansion or decline. Selection can affect patterns of mitochondrial DNA variation and variable mutation rates among mitochondrial genes can compromise inferences drawn from single markers. We investigated the contribution of these factors to patterns of mitochondrial variation and estimates of time to most recent common ancestor (TMRCA) for two clades in a co-operatively breeding avian species, the white-browed babbler Pomatostomus superciliosus. Both the protein-coding ND3 gene and hypervariable domain I control region sequences showed departures from neutral expectations within the superciliosus clade, and a two-fold difference in TMRCA estimates. Bayesian phylogenetic analysis provided evidence of departure from a strict clock model of molecular evolution in domain I, leading to an over-estimation of TMRCA for the superciliosus clade at this marker. Our results suggest mitochondrial studies that attempt to reconstruct Pleistocene demographic histories should rigorously evaluate data for departures from neutral equilibrium expectations, including variation in evolutionary rates across multiple markers. Failure to do so can lead to serious errors in the estimation of evolutionary parameters and subsequent demographic inferences concerning the role of climate as a driver of evolutionary change. These effects may be especially pronounced in species with complex social structures occupying heterogeneous environments. We propose that environmentally driven differences in social structure may explain observed differences in evolutionary rate of domain I sequences, resulting from longer than expected retention times for matriarchal lineages in the superciliosus clade. PMID:25181547

  4. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).

  5. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  6. Bayesian Statistics for Biological Data: Pedigree Analysis

    ERIC Educational Resources Information Center

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayesian and non-Bayesian ("classical") methods of probability calculation give different answers. First-year college students of biology can be introduced to Bayesian statistics through this example.
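
    A classic pedigree example of the kind such a lesson uses: a woman whose mother is an obligate carrier of an X-linked recessive allele has prior carrier probability 1/2, and each unaffected son halves the odds that she carries it. A minimal sketch:

    ```python
    def carrier_posterior(prior=0.5, unaffected_sons=3):
        """X-linked pedigree update: a carrier mother transmits the allele to a
        son with probability 1/2, so each unaffected son is evidence against
        the daughter being a carrier herself."""
        like_carrier = 0.5 ** unaffected_sons    # all sons escape if she carries
        like_noncarrier = 1.0                    # sons are unaffected for certain
        num = prior * like_carrier
        return num / (num + (1.0 - prior) * like_noncarrier)

    print(f"P(carrier | 3 unaffected sons) = {carrier_posterior():.3f}")   # 1/9
    ```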

  7. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2014-11-01

    A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited to the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.

  8. Bayesian cloud detection for MERIS, AATSR, and their combination

    NASA Astrophysics Data System (ADS)

    Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.

    2015-04-01

    A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited to the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
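
    The classical histogram-based Bayesian mask described above can be sketched in a few lines; the single reflectance feature, the beta-distributed toy training data, and the bin count are illustrative stand-ins for the multidimensional, Gaussian-smoothed histograms built from manually classified truth data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy training data: one reflectance feature from manually classified pixels.
    refl_clear = rng.beta(2, 8, 5000)    # clear pixels tend to be darker
    refl_cloud = rng.beta(6, 3, 5000)    # cloudy pixels tend to be brighter
    bins = np.linspace(0, 1, 41)

    h_clear, _ = np.histogram(refl_clear, bins, density=True)
    h_cloud, _ = np.histogram(refl_cloud, bins, density=True)

    def p_cloud(x, prior_cloud=0.5, eps=1e-9):
        """Classical Bayesian mask: class-conditional densities looked up in
        histograms of truth data, combined by Bayes' rule."""
        i = np.clip(np.digitize(x, bins) - 1, 0, len(h_clear) - 1)
        num = h_cloud[i] * prior_cloud
        return num / (num + h_clear[i] * (1 - prior_cloud) + eps)

    print(f"P(cloud | refl=0.62) = {p_cloud(0.62):.2f}")
    ```

    A naive mask would instead multiply per-channel posteriors under an independence assumption, which is cheaper in high dimensions but weaker when channels are correlated.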

  9. Bayesian modeling of flexible cognitive control

    PubMed Central

    Jiang, Jiefeng; Heller, Katherine; Egner, Tobias

    2014-01-01

    “Cognitive control” describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation of current models is the lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that dynamically modulates the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation. PMID:24929218

  10. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors.

    PubMed

    Ojo, Oluwatobi Blessing; Lougue, Siaka; Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world's deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes a step further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. The Bayesian approach is becoming popular in data analysis, but most applications of Bayesian inference are limited to situations with noninformative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. In this paper, identical regression models are fitted under the classical approach and under Bayesian approaches with both noninformative and informative priors, using the South Africa General Household Survey (GHS) data for the year 2014. For the Bayesian model with an informative prior, the South Africa General Household Survey datasets for the years 2011 to 2013 are used to construct priors for the 2014 model.

  11. Bayesian estimation inherent in a Mexican-hat-type neural network

    NASA Astrophysics Data System (ADS)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.

  12. Uses and misuses of Bayes' rule and Bayesian classifiers in cybersecurity

    NASA Astrophysics Data System (ADS)

    Bard, Gregory V.

    2017-12-01

    This paper will discuss the applications of Bayes' rule and Bayesian classifiers in cybersecurity. While the most elementary form of Bayes' rule occurs in undergraduate coursework, there are more complicated forms as well. As an extended example, Bayesian spam filtering is explored; it is in many ways the most triumphant accomplishment of Bayesian reasoning in computer science, as nearly everyone with an email address has a spam folder. Bayesian classifiers have also been responsible for significant cybersecurity research results; yet, because they are not part of the standard curriculum, few in the mathematics or information-technology communities have seen the exact definitions, requirements, and proofs that comprise the subject. Moreover, numerous errors have been made by researchers (described in this paper) due to mathematical misunderstandings of conditional independence or other badly chosen assumptions. Finally, to provide instructors and researchers with real-world examples, 25 published cybersecurity papers that use Bayesian reasoning are given, with 2-4 sentence summaries of the focus and contributions of each paper.
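
    As a concrete reminder of what the naive conditional-independence assumption buys (and where it can mislead), here is a minimal Bayesian spam score over a hypothetical token table; the per-token probabilities are illustrative, not estimated from real mail.

    ```python
    import math

    # Hypothetical per-token probabilities, assumed estimated from labeled mail.
    p_spam_token = {"winner": 0.90, "invoice": 0.20, "meeting": 0.05}
    p_ham_token  = {"winner": 0.02, "invoice": 0.30, "meeting": 0.40}

    def spam_posterior(tokens, prior_spam=0.4):
        """Combine token likelihoods in log space under the naive
        conditional-independence assumption, the very assumption the paper
        warns is often mishandled."""
        log_spam = math.log(prior_spam)
        log_ham = math.log(1 - prior_spam)
        for t in tokens:
            if t in p_spam_token:            # unknown tokens are simply skipped
                log_spam += math.log(p_spam_token[t])
                log_ham += math.log(p_ham_token[t])
        return 1 / (1 + math.exp(log_ham - log_spam))

    print(f"P(spam) = {spam_posterior(['winner', 'meeting']):.2f}")
    ```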

  13. Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.

    PubMed

    Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L

    2014-11-01

    People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
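
    A minimal sketch of the Win-Stay, Lose-Sample idea, under one common reading of the principle: the learner stays with its current hypothesis with probability given by that hypothesis's likelihood for the new observation, and otherwise resamples from the running posterior. The hypothesis space and data below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    hypotheses = np.array([0.2, 0.5, 0.8])   # candidate success probabilities
    prior = np.ones(3) / 3

    def wsls(data):
        """Win-Stay, Lose-Sample over a discrete hypothesis space."""
        post = prior.copy()
        h = rng.choice(len(hypotheses), p=post)
        for x in data:                                   # x in {0, 1}
            like = hypotheses if x == 1 else 1 - hypotheses
            post = post * like
            post /= post.sum()                           # running posterior
            p_stay = hypotheses[h] if x == 1 else 1 - hypotheses[h]
            if rng.random() > p_stay:                    # "lose": resample
                h = rng.choice(len(hypotheses), p=post)
        return h, post

    h, post = wsls(rng.binomial(1, 0.8, 30))
    print("final hypothesis:", hypotheses[h], "posterior:", post.round(2))
    ```

    The appeal is that the agent carries a single hypothesis at a time, yet across agents (or across time) the distribution of held hypotheses tracks the Bayesian posterior.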

  14. Bayesian randomized clinical trials: From fixed to adaptive design.

    PubMed

    Yin, Guosheng; Lam, Chi Kin; Shi, Haolun

    2017-08-01

    Randomized controlled studies are the gold standard for phase III clinical trials. Using α-spending functions to control the overall type I error rate, group sequential methods are well established and have been dominating phase III studies. Bayesian randomized design, on the other hand, can be viewed as a complement instead of competitive approach to the frequentist methods. For the fixed Bayesian design, the hypothesis testing can be cast in the posterior probability or Bayes factor framework, which has a direct link to the frequentist type I error rate. Bayesian group sequential design relies upon Bayesian decision-theoretic approaches based on backward induction, which is often computationally intensive. Compared with the frequentist approaches, Bayesian methods have several advantages. The posterior predictive probability serves as a useful and convenient tool for trial monitoring, and can be updated at any time as the data accrue during the trial. The Bayesian decision-theoretic framework possesses a direct link to the decision making in the practical setting, and can be modeled more realistically to reflect the actual cost-benefit analysis during the drug development process. Other merits include the possibility of hierarchical modeling and the use of informative priors, which would lead to a more comprehensive utilization of information from both historical and longitudinal data. From fixed to adaptive design, we focus on Bayesian randomized controlled clinical trials and make extensive comparisons with frequentist counterparts through numerical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
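
    The posterior predictive monitoring tool mentioned above has a compact beta-binomial form for a single-arm binary endpoint. The sketch below computes the predictive probability that the trial will end in success; the decision rule and all design numbers are simplified illustrations, not a published design.

    ```python
    from scipy.stats import beta, betabinom

    def predictive_prob_success(x, n, n_max, p0=0.3, a=1, b=1, level=0.95):
        """With x responses among n patients and a Beta(a, b) prior, the
        remaining n_max - n outcomes follow a beta-binomial. Success here means
        the final posterior P(p > p0) exceeds `level` (a simplified rule)."""
        m = n_max - n
        pp = 0.0
        for y in range(m + 1):
            tail = 1 - beta.cdf(p0, a + x + y, b + n_max - (x + y))
            if tail > level:
                pp += betabinom.pmf(y, m, a + x, b + n - x)
        return pp

    print(f"predictive probability of success: {predictive_prob_success(12, 30, 60):.2f}")
    ```

    Because the posterior is updated as data accrue, this quantity can be recomputed at any interim look, which is exactly what makes it convenient for trial monitoring.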

  15. A systematic review of Bayesian articles in psychology: The last 25 years.

    PubMed

    van de Schoot, Rens; Winter, Sonja D; Ryan, Oisín; Zondervan-Zwijnenburg, Mariëlle; Depaoli, Sarah

    2017-06-01

    Although the statistical tools most often used by researchers in the field of psychology over the last 25 years are based on frequentist statistics, it is often claimed that the alternative Bayesian approach to statistics is gaining in popularity. In the current article, we investigated this claim by performing the very first systematic review of Bayesian psychological articles published between 1990 and 2015 (n = 1,579). We aim to provide a thorough presentation of the role Bayesian statistics plays in psychology. This historical assessment allows us to identify trends and see how Bayesian methods have been integrated into psychological research in the context of different statistical frameworks (e.g., hypothesis testing, cognitive models, IRT, SEM, etc.). We also describe take-home messages and provide "big-picture" recommendations to the field as Bayesian statistics becomes more popular. Our review indicated that Bayesian statistics is used in a variety of contexts across subfields of psychology and related disciplines. There are many different reasons why one might choose to use Bayes (e.g., the use of priors, estimating otherwise intractable models, modeling uncertainty, etc.). We found in this review that the use of Bayes has increased and broadened in the sense that this methodology can be used in a flexible manner to tackle many different forms of questions. We hope this presentation opens the door for a larger discussion regarding the current state of Bayesian statistics, as well as future trends. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. "Whose perfection is it anyway?": a virtuous consideration of enhancement.

    PubMed

    Keenan, James F

    1999-08-01

    Discussions of genetic enhancements often imply deep suspicions about human desires to manipulate or enhance the course of our future. These unspoken assumptions about the arrogance of the quest for perfection are at odds with the normally hopeful resonance we find in contemporary theology. The author argues that these fears, suspicions and accusations are misplaced. The problem lies not with the question of whether we should pursue perfection, but rather with what perfection we are pursuing. The author argues that perfection, properly understood, has an enormously positive function in the Roman Catholic tradition. The author examines three sources: the Scriptures, the scholastic tradition, and ascetical theology. He examines contemporary criticisms of perfectionism and suggests that an adequate virtue theory keeps us from engaging perfectionism as such. The author then shows how a positive, responsible view of perfection is an asset to our discussion of enhancement technology.

  17. The Bayesian Revolution Approaches Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2007-01-01

    This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…

  18. Covariate Balance in Bayesian Propensity Score Approaches for Observational Studies

    ERIC Educational Resources Information Center

    Chen, Jianshen; Kaplan, David

    2015-01-01

    Bayesian alternatives to frequentist propensity score approaches have recently been proposed. However, few studies have investigated their covariate balancing properties. This article compares a recently developed two-step Bayesian propensity score approach to the frequentist approach with respect to covariate balance. The effects of different…

  19. Simple equations to simulate closed-loop recycling liquid-liquid chromatography: Ideal and non-ideal recycling models.

    PubMed

    Kostanyan, Artak E

    2015-12-04

    The ideal (the column outlet is directly connected to the column inlet) and non-ideal (including the effects of extra-column dispersion) recycling equilibrium-cell models are used to simulate closed-loop recycling counter-current chromatography (CLR CCC). Simple chromatogram equations for the individual cycles, and equations describing the transport and broadening of single peaks and complex chromatograms inside the recycling closed-loop column for the ideal and non-ideal recycling models, are presented. The extra-column dispersion is included in the theoretical analysis by replacing the recycling system (connecting lines, pump and valving) with a cascade of N_ec perfectly mixed cells. To evaluate the extra-column contribution to band broadening, two limiting regimes of recycling are analyzed: plug flow (N_ec → ∞) and maximum extra-column dispersion (N_ec = 1). Comparative analysis of the ideal and non-ideal models has shown that when the volume of the recycling system is less than one percent of the column volume, the influence of the extra-column processes on the CLR CCC separation may be neglected. Copyright © 2015 Elsevier B.V. All rights reserved.
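
    The equilibrium-cell (tanks-in-series) picture can be simulated directly. In the sketch below, each pass through the column and the recycling line is approximated by a single gamma-density response with matched cell count and mean residence time, which reproduces the cycle-by-cycle transport and broadening qualitatively; all cell counts and residence times are illustrative, not fitted parameters.

    ```python
    import numpy as np
    from scipy.stats import gamma

    def cycle_profile(t, n_cycles, n_col=200, n_ec=1, t_col=10.0, t_ec=0.1):
        """Closed-loop recycling sketch: each pass traverses a cascade of n_col
        mixed cells (column) plus n_ec cells (recycling line). The exact response
        is a convolution of exponential cells; here the whole cascade is
        approximated by one gamma density with matched cell count and mean time."""
        k = n_cycles * (n_col + n_ec)          # total mixed cells traversed
        mean_t = n_cycles * (t_col + t_ec)     # total mean residence time
        return gamma.pdf(t, a=k, scale=mean_t / k)

    t = np.linspace(0.01, 60, 3000)
    for cyc in (1, 2, 3):
        prof = cycle_profile(t, cyc)
        print(f"cycle {cyc}: peak at t = {t[prof.argmax()]:.1f} min")
    ```

    The peaks move out in proportion to the cycle number while their widths grow only like the square root of it, which is why recycling improves resolution.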

  20. Simulated wave-driven sediment transport along the eastern coast of the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Soomere, Tarmo; Viška, Maija

    2014-01-01

    Alongshore variations in sediment transport along the eastern Baltic Sea coast from the Sambian (Samland) Peninsula up to Pärnu Bay in the Gulf of Riga are analysed using long-term (1970-2007) simulations of the nearshore wave climate and the Coastal Engineering Research Centre (CERC) wave energy flux model applied to about 5.5 km long beach sectors. The local rate of bulk transport is the largest along a short section of the Sambian Peninsula and along the north-western part of the Latvian coast. The net transport has an overall counter-clockwise nature but contains a number of local temporary reversals. The alongshore sediment flux has several divergence and convergence points. One of the divergence points at the Akmenrags Cape divides the sedimentary system of the eastern coast of the Baltic Proper into two almost completely separated compartments in the simulated wave climate. Cyclic relocation of a highly persistent convergence point over the entire Curonian Spit suggests that this landform is in almost perfect dynamical equilibrium in the simulated approximation of the contemporary wave climate.

  1. Clustered atom-replaced structure in single-crystal-like metal oxide

    NASA Astrophysics Data System (ADS)

    Araki, Takeshi; Hayashi, Mariko; Ishii, Hirotaka; Yokoe, Daisaku; Yoshida, Ryuji; Kato, Takeharu; Nishijima, Gen; Matsumoto, Akiyoshi

    2018-06-01

    By means of metal organic deposition using trifluoroacetates (TFA-MOD), we replaced and localized two or more atoms in a single-crystalline structure having almost perfect orientation. We thereby created a new functional structure, namely a clustered atom-replaced structure (CARS), in a single-crystal-like metal oxide. We replaced metals in the oxide with Sm and Lu and localized them. Energy dispersive x-ray spectroscopy results, in which the Sm signal increases together with the Lu signal in the single-crystalline structure, provide evidence of CARS. We also formed other CARS with three additional metals, including Pr. The valence number of Pr might change from 3+ to approximately 4+, thereby reducing the Pr–Ba distance. We directly observed the structure in a high-angle annular dark-field image, which provided further evidence of CARS. The key to establishing CARS is an equilibrium chemical reaction and the combination of additional larger and smaller unit cells with the matrix cells. We made a new functional metal oxide with CARS and expect CARS to be realized in other metal oxide structures in the future by using the above-mentioned process.

  2. Analysis of structure and dynamics of superfine polyhydroxybutyrate fibers for targeted drug delivery

    NASA Astrophysics Data System (ADS)

    Olkhov, A.; Kucherenko, E.; Pantyukhov, P.; Zykova, A.; Karpova, S.; Iordanskii, A.

    2017-02-01

    The creation of polymer matrix systems for targeted drug delivery into a living organism is a challenging problem in the modern treatment of various diseases and injuries. Poly-3-hydroxybutyrate (PHB) is commonly used for the development of therapeutic systems. The aim of this article is to examine the changes in the structure and morphology of fibers in the presence of dipyridamole (DPD) as a model drug for controlled release. It was found that the addition of dipyridamole led to the disappearance of spindle-shaped nodules on PHB fibers in comparison with pure PHB. Analysis of thermophysical parameters showed that the specific melting enthalpy (and the degree of crystallinity) of the PHB fibers increased with the addition of DPD. With increasing DPD content in the PHB fibers, a more perfect and more nearly equilibrium crystal structure was formed. Analysis of the intercrystalline regions of the PHB fibers showed that as the crystallinity of PHB in these regions rose, a corresponding decrease in radical rotation speed was observed. It was concluded that PHB fibers can be used for creating therapeutic systems for targeted and prolonged drug delivery.

  3. Fermionic halos at finite temperature in AdS/CFT

    NASA Astrophysics Data System (ADS)

    Argüelles, Carlos R.; Grandi, Nicolás E.

    2018-05-01

    We explore the gravitational backreaction of a system consisting of a very large number of elementary fermions at finite temperature in asymptotically AdS space. We work in the hydrodynamic approximation and solve the Tolman-Oppenheimer-Volkoff equations with a perfect fluid whose equation of state takes into account both the relativistic effects of the fermionic constituents and its finite-temperature effects. We find a novel dense core-diluted halo structure for the density profiles in the AdS bulk, similar to what has recently been reported in flat space for astrophysical dark matter halos in galaxies. We further study the critical equilibrium configurations above which the core undergoes gravitational collapse towards a massive black hole, and calculate the corresponding critical central temperatures for two qualitatively different central regimes of the fermions: the diluted-Fermi case and the degenerate case. As a probe for the dual CFT, we construct the holographic two-point correlator of a scalar operator with large conformal dimension in the worldline limit, and briefly discuss the boundary CFT effects at the critical points.

  4. Self-organization: the fundament of cell biology

    PubMed Central

    Betz, Timo

    2018-01-01

    Self-organization refers to the emergence of an overall order in time and space of a given system that results from the collective interactions of its individual components. This concept has been widely recognized as a core principle in pattern formation for multi-component systems of the physical, chemical and biological world. It can be distinguished from self-assembly by the constant input of energy required to maintain order—and self-organization therefore typically occurs in non-equilibrium or dissipative systems. Cells, with their constant energy consumption and myriads of local interactions between distinct proteins, lipids, carbohydrates and nucleic acids, represent the perfect playground for self-organization. It therefore comes as no surprise that many properties and features of self-organized systems, such as spontaneous formation of patterns, nonlinear coupling of reactions, bi-stable switches, waves and oscillations, are found in all aspects of modern cell biology. Ultimately, self-organization lies at the heart of the robustness and adaptability found in cellular and organismal organization, and hence constitutes a fundamental basis for natural selection and evolution. This article is part of the theme issue ‘Self-organization in cell biology’. PMID:29632257

  5. High-speed reacting flow simulation using USA-series codes

    NASA Astrophysics Data System (ADS)

    Chakravarthy, S. R.; Palaniswamy, S.

    In this paper, the finite-rate chemistry (FRC) formulation for the USA-series of codes and three sets of validations are presented. USA-series computational fluid dynamics (CFD) codes are based on Unified Solution Algorithms, including explicit and implicit formulations, factorization and relaxation approaches, time-marching and space-marching methodologies, etc., in order to be able to solve a very wide class of CFD problems within a single framework. Euler or Navier-Stokes equations are solved using a finite-volume treatment with upwind Total Variation Diminishing discretization for the inviscid terms. Perfect and real gas options are available, including equilibrium and nonequilibrium chemistry. This capability has been widely used to study various problems including Space Shuttle exhaust plumes, National Aerospace Plane (NASP) designs, etc. (1) Numerical solutions are presented showing the full range of possible solutions to steady detonation wave problems. (2) A comparison between the solutions obtained by the USA code and the Generalized Kinetics Analysis Program (GKAP) is shown for supersonic combustion in a duct. (3) Simulation of combustion in a supersonic shear layer is shown to be in reasonable agreement with experimental observations.
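
    As a minimal illustration of the upwind finite-volume idea behind such schemes, the sketch below advances the scalar advection equation u_t + a u_x = 0 with a first-order upwind flux on a periodic grid. The production codes apply higher-order upwind TVD discretizations to the Euler/Navier-Stokes equations; all values here are hypothetical.

        import numpy as np

        a, N = 1.0, 200                       # advection speed, number of cells
        dx = 1.0 / N
        dt = 0.5 * dx / a                     # CFL number 0.5
        x = (np.arange(N) + 0.5) * dx
        u = np.exp(-200.0 * (x - 0.3)**2)     # initial Gaussian pulse

        for _ in range(200):
            flux = a * u                                  # upwind flux (a > 0: take the left state)
            u -= (dt / dx) * (flux - np.roll(flux, 1))    # conservative periodic update

        print(f"peak {u.max():.3f} at x = {x[np.argmax(u)]:.3f}")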

  6. Mesoscale simulation of the formation and dynamics of lipid-structured poly(ethylene oxide)-block-poly(methyl methacrylate) diblock copolymers.

    PubMed

    Mu, Dan; Li, Jian-Quan; Feng, Sheng-Yu

    2015-05-21

    Twelve poly(ethylene oxide)-block-poly(methyl methacrylate) (PEO-b-PMMA) copolymers with lipid-like structures were designed and investigated by MesoDyn simulation. Spherical and worm-like micelles as well as bicontinuous, lamellar and defected lamellar phases were obtained. A special structure, designated B2412, with two lipid structures connected by their heads, was found to undergo four stages prior to forming a spherical micelle phase. Two possible assembly mechanisms were found via thermodynamic and dynamic process analyses; namely, the fusion and fission of micelles in dynamic equilibrium during the adjustment stage. Water can be encapsulated into these micelles, which can affect their size, particularly in low concentration aqueous solutions. The assignment of weak negative charges to the hydrophilic EO blocks resulted in a clear effect on micelle size. Surprisingly, the largest effect was observed with EO blocks with -0.5 e, wherein an ordered perfect hexagonal phase was formed. The obtained results can be applied in numerous fields of study, including adsorption, catalysis, controlled release and drug delivery.

  7. The Concepts of Hope and Fear in the Islamic Thought: Implications for Spiritual Health.

    PubMed

    Bahmani, Fatemeh; Amini, Mitra; Tabei, Seyed Ziaeddin; Abbasi, Mohamad Bagher

    2018-02-01

    The Holy Qur'ān and medieval Islamic writings contain many references to "hope" (rajā) and "fear" (khawf), both as single and as paired concepts. However, a comprehensive analytical study of these two notions from an Islamic point of view still seems lacking. Both paper and electronic documents related to Islamic and Qur'ānic literature were used in this study. Web resources were also searched for the keywords fear, hope and Islam in three languages, Arabic, English and Persian, including Tanzil.net, Almaany.com, Tebyan.net, Holyquran.net, Noorlib.ir, Hawzah.net and Google Scholar. Findings indicate that hope and fear comprise three conceptual elements: emotional, cognitive and behavioral, and are identified as "praiseworthy" hope or fear when associated with God as the ultimate object. Nonetheless, this praiseworthy hope or fear is only distinguishable as "true" when the two are in equilibrium, a necessary condition for spiritual health, which results in perfection. Islam rejects excessive hope or excessive fear, describing both as a "pseudo"-type, which would respectively contribute to self-deceit and despair, and end in spiritual decline.

  8. Tailoring highly conductive graphene nanoribbons from small polycyclic aromatic hydrocarbons: a computational study.

    PubMed

    Bilić, A; Sanvito, S

    2013-07-10

    Pyrene, the smallest two-dimensional mesh of aromatic rings, with various terminal thiol substitutions, has been considered as a potential molecular interconnect. Charge transport through two-terminal devices has been modeled using density functional theory (with and without self-interaction correction) and the non-equilibrium Green's function method. A tetra-substituted pyrene, with dual thiol terminal groups at opposite ends, has been identified as an excellent candidate, owing to its high conductance, virtually independent of bias voltage. The two possible extensions of its motif generate two series of graphene nanoribbons, with zigzag and armchair edges and with semimetallic and semiconducting electron band structure, respectively. The effects of wire length and bias voltage on charge transport have been investigated for both sets. The conductance of the nanoribbons with a zigzag edge shows neither length nor voltage dependence, owing to an almost perfect electron transmission with a continuum of conducting channels. In contrast, for the armchair nanoribbons a slow exponential attenuation of the conductance with length has been found, due to their semiconducting nature.
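
    For orientation, the conductances discussed here are of the Landauer form evaluated within NEGF; in standard textbook notation (not a formula quoted from this abstract),

        G = \frac{2e^{2}}{h}\int T(E)\left(-\frac{\partial f}{\partial E}\right)\mathrm{d}E
        \quad\xrightarrow{\;T \to 0\;}\quad \frac{2e^{2}}{h}\,T(E_{F}),

    so a transmission T(E) near unity over a continuum of channels yields the length- and bias-independent conductance reported for the zigzag ribbons.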

  9. Bayesian Analysis of Nonlinear Structural Equation Models with Nonignorable Missing Data

    ERIC Educational Resources Information Center

    Lee, Sik-Yum

    2006-01-01

    A Bayesian approach is developed for analyzing nonlinear structural equation models with nonignorable missing data. The nonignorable missingness mechanism is specified by a logistic regression model. A hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm is used to produce the joint Bayesian estimates of…
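
    The hybrid sampler named here alternates Gibbs draws with Metropolis-Hastings steps for parameters whose full conditionals are nonstandard. A generic sketch of the Metropolis-Hastings building block, with a hypothetical log conditional (not the article's model), is:

        import numpy as np

        rng = np.random.default_rng(0)

        def mh_step(x, log_cond, scale=0.5):
            """One random-walk Metropolis-Hastings step on a full conditional."""
            prop = x + scale * rng.standard_normal()
            if np.log(rng.random()) < log_cond(prop) - log_cond(x):
                return prop                   # accept the proposal
            return x                          # reject; keep the current value

        log_cond = lambda t: -0.5 * t * t     # hypothetical full conditional (standard normal)
        chain = [0.0]
        for _ in range(1000):
            chain.append(mh_step(chain[-1], log_cond))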

  10. What Is the Probability You Are a Bayesian?

    ERIC Educational Resources Information Center

    Wulff, Shaun S.; Robinson, Timothy J.

    2014-01-01

    Bayesian methodology continues to be widely used in statistical applications. As a result, it is increasingly important to introduce students to Bayesian thinking at early stages in their mathematics and statistics education. While many students in upper level probability courses can recite the differences in the Frequentist and Bayesian…

  11. Using Bayesian Networks to Improve Knowledge Assessment

    ERIC Educational Resources Information Center

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  12. Bayesian Decision Theoretical Framework for Clustering

    ERIC Educational Resources Information Center

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  13. Using Bayesian belief networks in adaptive management.

    Treesearch

    J.B. Nyberg; B.G. Marcot; R. Sulyma

    2006-01-01

    Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practioners of adaptive management, from illustrating system relations conceptually to...

  14. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    ERIC Educational Resources Information Center

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
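
    The normative benchmark in such coin tasks is a conjugate Beta-binomial update; a minimal sketch with hypothetical counts:

        from scipy.stats import beta

        a0, b0 = 1, 1                     # flat Beta(1, 1) prior on the coin's bias
        heads, flips = 7, 10              # hypothetical observations
        posterior = beta(a0 + heads, b0 + flips - heads)
        print(posterior.mean())           # posterior mean bias, about 0.667
        print(posterior.interval(0.95))   # 95% credible interval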

  15. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
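
    In outline, a synthetic likelihood replaces the intractable likelihood with a Gaussian fitted to summary statistics simulated at each candidate parameter value. A generic sketch follows; the simulator and summaries are placeholders, not the article's axiom-testing model:

        import numpy as np
        from scipy.stats import multivariate_normal

        def synthetic_loglik(theta, s_obs, simulate, n_sim=200, seed=0):
            rng = np.random.default_rng(seed)
            S = np.array([simulate(theta, rng) for _ in range(n_sim)])   # simulated summaries
            mu = S.mean(axis=0)
            cov = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])    # regularized covariance
            return multivariate_normal(mu, cov).logpdf(s_obs)            # Gaussian approximation

        def simulate(theta, rng):                     # hypothetical data-generating model
            d = rng.normal(theta, 1.0, size=50)
            return np.array([d.mean(), d.std()])      # summary statistics

        print(synthetic_loglik(0.3, np.array([0.25, 1.02]), simulate))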

  16. Searching Algorithm Using Bayesian Updates

    ERIC Educational Resources Information Center

    Caudle, Kyle

    2010-01-01

    In late May 1968, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk, Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem which involves searching…
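
    The core update is Bayes' theorem applied after an unsuccessful search of a cell: multiply that cell's probability by the miss probability and renormalize the map. A toy sketch with a hypothetical grid and detection probability:

        import numpy as np

        p = np.array([0.3, 0.5, 0.2])   # prior probability the wreck lies in each cell
        d = 0.8                         # probability of detection if the wreck is in the searched cell
        j = 1                           # cell searched; nothing was found

        p[j] *= (1.0 - d)               # likelihood of a miss in cell j
        p /= p.sum()                    # renormalize: posterior mass shifts away from cell j
        print(p)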

  17. The Application of Bayesian Analysis to Issues in Developmental Research

    ERIC Educational Resources Information Center

    Walker, Lawrence J.; Gustafson, Paul; Frimer, Jeremy A.

    2007-01-01

    This article reviews the concepts and methods of Bayesian statistical analysis, which can offer innovative and powerful solutions to some challenging analytical problems that characterize developmental research. In this article, we demonstrate the utility of Bayesian analysis, explain its unique adeptness in some circumstances, address some…

  18. Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837

    ERIC Educational Resources Information Center

    Levy, Roy

    2014-01-01

    Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…

  19. Bayesian Data-Model Fit Assessment for Structural Equation Modeling

    ERIC Educational Resources Information Center

    Levy, Roy

    2011-01-01

    Bayesian approaches to modeling are receiving an increasing amount of attention in the areas of model construction and estimation in factor analysis, structural equation modeling (SEM), and related latent variable models. However, model diagnostics and model criticism remain relatively understudied aspects of Bayesian SEM. This article describes…

  20. Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations

    ERIC Educational Resources Information Center

    Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon

    2018-01-01

    To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
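
    A posterior odds ratio is simply the prior odds multiplied by the Bayes factor; a one-line worked example with hypothetical numbers:

        prior_odds = 0.5 / 0.5            # hypotheses judged equally plausible a priori
        bayes_factor = 6.0                # data assumed six times more likely under H1 than H0
        posterior_odds = prior_odds * bayes_factor
        print(posterior_odds / (1.0 + posterior_odds))   # posterior P(H1) is about 0.857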

  1. Modeling Diagnostic Assessments with Bayesian Networks

    ERIC Educational Resources Information Center

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  2. Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation

    NASA Technical Reports Server (NTRS)

    Jefferys, William H.; Berger, James O.

    1992-01-01

    'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
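
    The automatic penalty on adjustable parameters shows up in even a tiny marginal-likelihood calculation: for k successes in n binomial trials, a fixed-rate hypothesis competes against one with a free, uniformly distributed rate (hypothetical data):

        from math import comb

        k, n = 6, 10
        m0 = comb(n, k) * 0.5**n    # H0: success rate fixed at 1/2 (no free parameter)
        m1 = 1.0 / (n + 1)          # H1: rate ~ Uniform(0, 1); the marginal likelihood is 1/(n+1)
        print(m0 / m1)              # Bayes factor of about 2.3 favoring the simpler H0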

  3. Technical Topic 3.2.2.d Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling

    DTIC Science & Technology

    2016-05-31

    Materials examined included explosives such as TATP, HMTD, RDX, ammonium nitrate, potassium perchlorate, potassium nitrate, sugar, and TNT. (The remainder of this record consists of unrecoverable report-form metadata.)

  4. Editorial: Bayesian benefits for child psychology and psychiatry researchers.

    PubMed

    Oldehinkel, Albertine J

    2016-09-01

    For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field. © 2016 Association for Child and Adolescent Mental Health.

  5. Quantum-Like Representation of Non-Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.

    2013-01-01

    This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are experimental studies whose statistical data cannot be described by classical probability theory. The process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented classical Bayesian inference in a natural way within the framework of quantum mechanics. Using this representation, in this paper we discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.

  6. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    PubMed

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability table (CPT) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used in training Bayesian networks in order to help predict or diagnose diseases, but the CPTs in Bayesian networks are usually irrational or ineffective because they ignore realistic constraints, especially in multimorbidity. In order to solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, on the basis of which an evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity is presented step by step. A numerical multimorbidity example is used to demonstrate the method and establish its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method is more effective than existing methods in extracting expert inference data accurately and fusing them effectively for constructing CPTs in a Bayesian network of multimorbidity.

  7. Bayesian techniques for analyzing group differences in the Iowa Gambling Task: A case study of intuitive and deliberate decision-makers.

    PubMed

    Steingroever, Helen; Pachur, Thorsten; Šmíra, Martin; Lee, Michael D

    2018-06-01

    The Iowa Gambling Task (IGT) is one of the most popular experimental paradigms for comparing complex decision-making across groups. Most commonly, IGT behavior is analyzed using frequentist tests to compare performance across groups, and to compare inferred parameters of cognitive models developed for the IGT. Here, we present a Bayesian alternative based on Bayesian repeated-measures ANOVA for comparing performance, and a suite of three complementary model-based methods for assessing the cognitive processes underlying IGT performance. The three model-based methods involve Bayesian hierarchical parameter estimation, Bayes factor model comparison, and Bayesian latent-mixture modeling. We illustrate these Bayesian methods by applying them to test the extent to which differences in intuitive versus deliberate decision style are associated with differences in IGT performance. The results show that intuitive and deliberate decision-makers behave similarly on the IGT, and the modeling analyses consistently suggest that both groups of decision-makers rely on similar cognitive processes. Our results challenge the notion that individual differences in intuitive and deliberate decision styles have a broad impact on decision-making. They also highlight the advantages of Bayesian methods, especially their ability to quantify evidence in favor of the null hypothesis, and that they allow model-based analyses to incorporate hierarchical and latent-mixture structures.

  8. Invited commentary: Lost in estimation--searching for alternatives to markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  9. Using Bayesian Adaptive Trial Designs for Comparative Effectiveness Research: A Virtual Trial Execution.

    PubMed

    Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R

    2016-09-20

    Background: Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. Objective: To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Design: Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Setting: Comparative effectiveness trial of antihypertensive medications. Patients: Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Measurements: Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. Results: The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Limitation: Involvement of a data monitoring committee and other trial logistics were not considered. Conclusion: In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. Primary Funding Source: National Heart, Lung, and Blood Institute.
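
    A minimal sketch of the kind of response-adaptive allocation involved, using Beta posteriors and Thompson-style sampling; all rates and sizes are hypothetical, and this is not the RE-ADAPT design:

        import numpy as np

        rng = np.random.default_rng(1)
        true_p = np.array([0.30, 0.40])           # hypothetical success rates for two arms
        succ, fail = np.zeros(2), np.zeros(2)

        for _ in range(500):                      # enroll patients one at a time
            draws = rng.beta(1 + succ, 1 + fail)  # sample each arm's posterior
            arm = int(np.argmax(draws))           # allocate toward the better-looking arm
            y = rng.random() < true_p[arm]
            succ[arm] += y
            fail[arm] += 1 - y

        print(succ + fail)                        # more patients end up on the better arm
        pA = rng.beta(1 + succ[0], 1 + fail[0], 100000)
        pB = rng.beta(1 + succ[1], 1 + fail[1], 100000)
        print((pB > pA).mean())                   # posterior probability arm B is superior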

  10. Understanding overpressure in the FAA aerosol can test by C3H2F3Br (2-BTP)

    PubMed Central

    Linteris, Gregory Thomas; Babushok, Valeri Ivan; Pagliaro, John Leonard; Burgess, Donald Raymond; Manion, Jeffrey Alan; Takahashi, Fumiaki; Katta, Viswanath Reddy; Baker, Patrick Thomas

    2018-01-01

    Thermodynamic equilibrium calculations, as well as perfectly-stirred reactor (PSR) simulations with detailed reaction kinetics, are performed for a potential halon replacement, C3H2F3Br (2-BTP, 2-bromo-3,3,3-trifluoropropene), to understand the reasons for the unexpected enhanced combustion, rather than suppression, in a mandated FAA test. The high pressure rise with added agent is shown to depend on the amount of agent, and is well predicted by an equilibrium model corresponding to stoichiometric reaction of fuel, oxygen, and agent. A kinetic model for the reaction of C3H2F3Br in hydrocarbon-air flames has been applied to understand differences in the chemical suppression behavior of C3H2F3Br vs. CF3Br in the FAA test. Stirred-reactor simulations predict that in the conditions of the FAA test, the inhibition effectiveness of C3H2F3Br at high agent loadings is relatively insensitive to the overall stoichiometry (for fuel-lean conditions), and the marginal inhibitory effect of the agent is greatly reduced, so that the mixture remains flammable over a wide range of conditions. Most important, the agent-air mixtures themselves (when compressively preheated) are flammable and can support low-strain flames, which are much more difficult to extinguish than the easily extinguished, high-strain primary fireball from the impulsively released fuel mixture. Hence, the exothermic reaction of halogenated hydrocarbons in air should be considered in other situations with strong ignition sources and low-strain flows, especially at preheated conditions. PMID:29628525
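
    The pressure-rise side of such an analysis amounts to an adiabatic, constant-volume equilibrium calculation. A generic sketch using Cantera with the GRI-Mech 3.0 methane mechanism, which contains no bromine chemistry and therefore only illustrates the type of calculation, not the 2-BTP kinetics:

        import cantera as ct

        gas = ct.Solution('gri30.yaml')                        # generic CH4/air mechanism
        gas.TPX = 400.0, ct.one_atm, 'CH4:1, O2:2, N2:7.52'    # stoichiometric, preheated (assumed)
        p0 = gas.P
        gas.equilibrate('UV')                                  # adiabatic constant-volume equilibrium
        print(f"pressure ratio {gas.P / p0:.1f}, T_eq = {gas.T:.0f} K")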

  11. Nonequilibrium viscosity of glass

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Allan, Douglas C.; Potuzak, Marcel

    2009-09-01

    Since glass is a nonequilibrium material, its properties depend on both composition and thermal history. While most prior studies have focused on equilibrium liquid viscosity, an accurate description of nonequilibrium viscosity is essential for understanding the low temperature dynamics of glass. Departure from equilibrium occurs as a glass-forming system is cooled through the glass transition range. The glass transition involves a continuous breakdown of ergodicity as the system gradually becomes trapped in a subset of the available configurational phase space. At very low temperatures a glass is perfectly nonergodic (or “isostructural”), and the viscosity is described well by an Arrhenius form. However, the behavior of viscosity during the glass transition range itself is not yet understood. In this paper, we address the problem of glass viscosity using the enthalpy landscape model of Mauro and Loucks [Phys. Rev. B 76, 174202 (2007)] for selenium, an elemental glass former. To study a wide range of thermal histories, we compute nonequilibrium viscosity with cooling rates from 10^-12 to 10^12 K/s. Based on these detailed landscape calculations, we propose a simplified phenomenological model capturing the essential physics of glass viscosity. The phenomenological model incorporates an ergodicity parameter that accounts for the continuous breakdown of ergodicity at the glass transition. We show a direct relationship between the nonequilibrium viscosity parameters and the fragility of the supercooled liquid. The nonequilibrium viscosity model is validated against experimental measurements of Corning EAGLE XG™ glass. The measurements are performed using a specially designed beam-bending apparatus capable of accurate nonequilibrium viscosity measurements up to 10^16 Pa·s. Using a common set of parameters, the phenomenological model provides an accurate description of EAGLE XG™ viscosity over the full range of measured temperatures and fictive temperatures.
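
    The low-temperature Arrhenius form referred to here is, in standard notation (a textbook expression rather than one quoted from the paper),

        \eta(T) = \eta_{0}\,\exp\!\left(\frac{\Delta H}{k_{B}T}\right),

    with a constant activation enthalpy \Delta H in the isostructural limit.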

  12. The intrinsic periodic fluctuation of forest: a theoretical model based on diffusion equation

    NASA Astrophysics Data System (ADS)

    Zhou, J.; Lin, G., Sr.

    2015-12-01

    Most forest dynamic models predict a stable state of size structure, as well as of the total basal area and biomass, in mature forest; after this equilibrium has been reached, the variation of forest stands is mainly driven by environmental factors. However, although the predicted power-law size-frequency distribution does appear in analyses of many forest inventory data sets, the estimated distribution exponents keep shifting between -2 and -4 and are positively correlated with the mean value of DBH. This regular pattern cannot be explained by the effects of stochastic disturbances on forest stands. Here, we adopted a partial differential equation (PDE) approach to deduce the systematic behavior of an ideal forest: by solving the diffusion equation under the restricted condition of invariable resource occupation, we obtained a periodic solution that matches the variable behavior of forest size structure, with the former models' stable behavior emerging as the special case of the periodic solution in which the fluctuation frequency equals zero. In our results, the number of individuals in each size class is a function of individual growth rate (G), mortality (M), size (D) and time (T); borrowing the conclusions of allometric theory for these parameters, the results reflect the observed "exponent-mean DBH" relationship well and also give a logically complete description of the time-varying form of the forest size-frequency distribution. Our model implies that the total biomass of a forest can never reach a stable equilibrium state, even in the absence of disturbances and climate regime shifts; we propose the idea of an intrinsic fluctuation property of forests and hope to provide a new perspective on forest dynamics and carbon cycle research.
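
    A generic drift-diffusion form of such a size-structured model, with n(D, t) the density of stems of diameter D, growth rate G, mortality M and diffusivity \sigma^2, is (a standard form assumed here for illustration; the abstract does not reproduce the paper's exact equation)

        \frac{\partial n}{\partial t}
        + \frac{\partial}{\partial D}\bigl[G(D)\,n\bigr]
        = \frac{1}{2}\,\frac{\partial^{2}}{\partial D^{2}}\bigl[\sigma^{2}(D)\,n\bigr]
        - M(D)\,n,

    whose solutions under a fixed-resource constraint can oscillate rather than settle to a steady size-frequency distribution.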

  13. Characterization of 32 microsatellite loci for the Pacific red snapper, Lutjanus peru, through next generation sequencing.

    PubMed

    Paz-García, David A; Munguía-Vega, Adrián; Plomozo-Lugo, Tomas; Weaver, Amy Hudson

    2017-04-01

    We developed a set of hypervariable microsatellite markers for the Pacific red snapper (Lutjanus peru), an economically important marine fish for small-scale fisheries on the west coast of Mexico. We performed shotgun genome sequencing with the 454 XL titanium chemistry and used bioinformatic tools to search for perfect microsatellite loci. We selected 66 primer pairs that were synthesized and genotyped on an ABI PRISM 3730XL DNA sequencer in 32 individuals from the Gulf of California. We estimated levels of genetic diversity, deviations from linkage and Hardy-Weinberg equilibrium, the frequency of null alleles, and the probability of individual identity for the new markers. We reanalyzed 16 loci in 16 individuals to estimate genotyping error rates. Eighteen loci failed to amplify, 16 loci were discarded due to unspecific amplification, and 32 loci (14 tetranucleotide and 18 dinucleotide) were successfully scored. The average number of alleles per locus was 21 (±6.87, SD) and ranged from 8 to 34. The average observed and expected heterozygosities were 0.787 (±0.144 SD, range 0.250-0.935) and 0.909 (±0.122 SD, range 0.381-0.965), respectively. No significant linkage was detected. Eight loci showed deviations from Hardy-Weinberg equilibrium, and of these, four loci showed moderate null allele frequencies (0.104-0.220). The probability of individual identity for the new loci was 1.46 × 10^-62. Genotyping error rates averaged 9.58%. The new markers will be useful to investigate patterns of larval dispersal, metapopulation dynamics, and fine-scale genetic structure and diversity, aimed at informing the implementation of spatially explicit fisheries management strategies in the Gulf of California.
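
    The summary statistics reported here are simple functions of allele frequencies: for one locus with frequencies p_i, expected heterozygosity is 1 - sum(p_i^2) and the single-locus probability of identity is 2(sum p_i^2)^2 - sum p_i^4. A sketch with hypothetical frequencies:

        import numpy as np

        p = np.array([0.40, 0.25, 0.20, 0.15])       # hypothetical allele frequencies at one locus
        He = 1.0 - np.sum(p**2)                      # expected heterozygosity
        PI = 2.0 * np.sum(p**2)**2 - np.sum(p**4)    # probability two random individuals match
        print(f"He = {He:.3f}, PI = {PI:.4f}")       # multiply PI across loci for a marker panel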

  14. Comparison among perfect-C®, zero-P®, and plates with a cage in single-level cervical degenerative disc disease.

    PubMed

    Noh, Sung Hyun; Zhang, Ho Yeol

    2018-01-25

    We intended to analyze the efficacy of a new integrated cage and plate device called Perfect-C for anterior cervical discectomy and fusion (ACDF) to cure single-level cervical degenerative disc disease. We enrolled 148 patients who were subjected to single-level ACDF with one of the following three surgical devices: a Perfect-C implant (41 patients), a Zero-P implant (36 patients), or a titanium plate with a polyetheretherketone (PEEK) cage (71 patients). We conducted a retrospective study to compare the clinical and radiological results among the three groups. The length of the operation, intraoperative blood loss, and duration of hospitalization were significantly lower in the Perfect-C group than in the Zero-P and plate-with-cage groups (P < 0.05). At the last follow-up visit, heterotopic ossification (HO) was not observed in any cases (0%) in the Perfect-C and Zero-P groups but was noted in 21 cases (30%) in the plate-with-cage group. The cephalad and caudal plate-to-disc distance (PDD) and the cephalad and caudal PDD/anterior body height (ABH) were significantly greater in the Perfect-C and Zero-P groups than in the plate-with-cage group (P < 0.05). Subsidence occurred in five cases (14%) in the Perfect-C group, in nine cases (25%) in the Zero-P group, and in 15 cases (21%) in the plate-with-cage group. Fusion occurred in 37 cases (90%) in the Perfect-C group, in 31 cases (86%) in the Zero-P group, and in 68 cases (95%) in the plate-with-cage group. The Perfect-C, Zero-P, and plate-with-cage devices are effective for treating single-level cervical degenerative disc disease. However, the Perfect-C implant has many advantages over both the Zero-P implant and conventional plate-cage treatments. The Perfect-C implant was associated with shorter operation times and hospitalization durations, less blood loss, and lower subsidence rates compared with the Zero-P implant or the titanium plate with a PEEK cage.

  15. PERFECTED enhanced recovery (PERFECT-ER) care versus standard acute care for patients admitted to acute settings with hip fracture identified as experiencing confusion: study protocol for a feasibility cluster randomized controlled trial.

    PubMed

    Hammond, Simon P; Cross, Jane L; Shepstone, Lee; Backhouse, Tamara; Henderson, Catherine; Poland, Fiona; Sims, Erika; MacLullich, Alasdair; Penhale, Bridget; Howard, Robert; Lambert, Nigel; Varley, Anna; Smith, Toby O; Sahota, Opinder; Donell, Simon; Patel, Martyn; Ballard, Clive; Young, John; Knapp, Martin; Jackson, Stephen; Waring, Justin; Leavey, Nick; Howard, Gregory; Fox, Chris

    2017-12-04

    Health and social care provision for an ageing population is a global priority. Provision for those with dementia and hip fracture has specific and growing importance. Older people who break their hip are recognised as exceptionally vulnerable to experiencing confusion (including, but not exclusively, dementia and/or delirium and/or cognitive impairment) before, during or after acute admissions. Older people experiencing hip fracture and confusion risk serious complications linked to delayed recovery and higher post-operative mortality. Specific care pathways acknowledging the differences in patient presentation and care needs are proposed to improve clinical and process outcomes. This protocol describes a multi-centre, feasibility, cluster-randomised, controlled trial (CRCT) to be undertaken across ten National Health Service hospital trusts in the UK. The trial will explore the feasibility of undertaking a CRCT comparing the multicomponent PERFECTED enhanced recovery intervention (PERFECT-ER), which acknowledges the differences in care needs of confused older patients experiencing hip fracture, with standard care. The trial will also have an integrated process evaluation to explore how PERFECT-ER is implemented and interacts with the local context. The study will recruit 400 hip fracture patients identified as experiencing confusion and will also recruit "suitable informants" (individuals in regular contact with participants who will complete proxy measures). We will also recruit NHS professionals for the process evaluation. This mixed-methods design will produce data to inform a definitive evaluation of the intervention via a large-scale pragmatic randomised controlled trial (RCT). The trial will provide a preliminary estimate of the potential efficacy of PERFECT-ER versus standard care; assess service delivery variation; inform primary and secondary outcome selection; generate estimates of recruitment and retention rates, data collection difficulties, and completeness of outcome data; and provide an indication of potential economic benefits. The process evaluation will enhance knowledge of implementation delivery and receipt. Trial registration: ISRCTN 99336264. Registered on 5 September 2016.

  16. Good fences make for good neighbors but bad science: a review of what improves Bayesian reasoning and why

    PubMed Central

    Brase, Gary L.; Hill, W. Trey

    2015-01-01

    Bayesian reasoning, defined here as the updating of a posterior probability following new information, has historically been problematic for humans. Classic psychology experiments have tested human Bayesian reasoning through the use of word problems and have evaluated each participant’s performance against the normatively correct answer provided by Bayes’ theorem. The standard finding is of generally poor performance. Over the past two decades, though, progress has been made on how to improve Bayesian reasoning. Most notably, research has demonstrated that the use of frequencies in a natural sampling framework—as opposed to single-event probabilities—can improve participants’ Bayesian estimates. Furthermore, pictorial aids and certain individual difference factors also can play significant roles in Bayesian reasoning success. The mechanics of how to build tasks which show these improvements is not under much debate. The explanations for why naturally sampled frequencies and pictures help Bayesian reasoning remain hotly contested, however, with many researchers falling into ingrained “camps” organized around two dominant theoretical perspectives. The present paper evaluates the merits of these theoretical perspectives, including the weight of empirical evidence, theoretical coherence, and predictive power. By these criteria, the ecological rationality approach is clearly better than the heuristics and biases view. Progress in the study of Bayesian reasoning will depend on continued research that honestly, vigorously, and consistently engages across these different theoretical accounts rather than staying “siloed” within one particular perspective. The process of science requires an understanding of competing points of view, with the ultimate goal being integration. PMID:25873904

  17. Using Bayesian analysis in repeated preclinical in vivo studies for a more effective use of animals.

    PubMed

    Walley, Rosalind; Sherington, John; Rastrick, Joe; Detrait, Eric; Hanon, Etienne; Watt, Gillian

    2016-05-01

    Whilst innovative Bayesian approaches are increasingly used in clinical studies, in the preclinical area Bayesian methods appear to be rarely used in the reporting of pharmacology data. This is particularly surprising in the context of regularly repeated in vivo studies, where there is a considerable amount of data from historical control groups, which has potential value. This paper describes our experience with introducing Bayesian analysis for such studies using a Bayesian meta-analytic predictive approach. This leads naturally either to an informative prior for a control group as part of a full Bayesian analysis of the next study, or to using a predictive distribution to replace a control group entirely. We use quality control charts to illustrate study-to-study variation to the scientists and describe informative priors in terms of their approximate effective numbers of animals. We describe two case studies of animal models: the lipopolysaccharide-induced cytokine release model used in inflammation and the novel object recognition model used to screen cognitive enhancers, both of which show the advantage of a Bayesian approach over the standard frequentist analysis. We conclude that using Bayesian methods in stable repeated in vivo studies can result in a more effective use of animals, either by reducing the total number of animals used or by increasing the precision of key treatment differences. This will lead to clearer results and supports the "3Rs initiative" to Refine, Reduce and Replace animals in research. Copyright © 2016 John Wiley & Sons, Ltd.

  18. A Bayesian pick-the-winner design in a randomized phase II clinical trial.

    PubMed

    Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E

    2017-10-24

    Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm designs to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with the Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as the probability that the response rate in one arm is higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both pass the second stage), the Bayesian posterior probability performs better in correctly identifying the winner than the Fisher exact test in the simulation study. In comparison with a standard two-arm randomized design, the Bayesian pick-the-winner design has higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of the two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that combines Bayesian posterior probability, the Simon two-stage design, and randomization in a unique setting. It gives objective comparisons between the arms to determine the winner.
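
    The winner probability described here, the posterior probability that the response rate in one arm exceeds the other, is easy to approximate by Monte Carlo from two Beta posteriors; a sketch with flat priors and hypothetical counts:

        import numpy as np

        rng = np.random.default_rng(0)
        nA, rA = 40, 14                               # hypothetical arm A: patients, responders
        nB, rB = 40, 9                                # hypothetical arm B
        pA = rng.beta(1 + rA, 1 + nA - rA, 100000)    # Beta(1, 1) prior updated with the data
        pB = rng.beta(1 + rB, 1 + nB - rB, 100000)
        print((pA > pB).mean())                       # posterior probability that A is the winner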

  1. Bayesian estimates of the incidence of rare cancers in Europe.

    PubMed

    Botta, Laura; Capocaccia, Riccardo; Trama, Annalisa; Herrmann, Christian; Salmerón, Diego; De Angelis, Roberta; Mallone, Sandra; Bidoli, Ettore; Marcos-Gragera, Rafael; Dudek-Godeau, Dorota; Gatta, Gemma; Cleries, Ramon

    2018-04-21

    The RARECAREnet project has updated the estimates of the burden of the 198 rare cancers in each European country. Suspecting that scant data could affect the reliability of statistical analysis, we employed a Bayesian approach to estimate the incidence of these cancers. We analyzed about 2,000,000 rare cancers diagnosed in 2000-2007, provided by 83 population-based cancer registries from 27 European countries. We considered European incidence rates (IRs), calculated over all the data available in RARECAREnet, as a valid a priori to merge with country-specific observed data. We therefore provide (1) Bayesian estimates of IRs and the yearly numbers of cases of rare cancers in each country; (2) the expected time (T) in years needed to observe one new case; and (3) practical criteria for deciding when to use the Bayesian approach. Bayesian and classical estimates did not differ much; substantial differences (>10%) ranged from 77 rare cancers in Iceland to 14 in England. The smaller the population, the larger the number of rare cancers needing a Bayesian approach. Bayesian estimates were useful for cancers with fewer than 150 observed cases in a country during the study period; this occurred mostly when the population of the country was small. For the first time, the Bayesian estimates of IRs and the yearly expected numbers of cases for each rare cancer in each individual European country were calculated. Moreover, the indicator T is useful to convey incidence estimates for exceptionally rare cancers and in small countries, where it can far exceed the professional lifespan of a medical doctor. Copyright © 2018 Elsevier Ltd. All rights reserved.
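
    One conjugate way to merge a Europe-wide rate prior with sparse national counts is a Gamma-Poisson model, in which the prior acts like pseudo person-years of European data. The numbers below are hypothetical, and the project's actual prior construction is more elaborate:

        rate_eu = 0.5e-5                  # hypothetical pooled European rate (cases per person-year)
        b0 = 2.0e6                        # hypothetical prior weight, in person-years
        a0 = rate_eu * b0                 # Gamma(a0, b0) prior on the national incidence rate
        cases, pyears = 12, 4.0e6         # hypothetical country-specific observations
        a1, b1 = a0 + cases, b0 + pyears  # conjugate Poisson update
        ir_post = a1 / b1                 # posterior mean incidence rate
        T = 1.0 / (ir_post * 5.0e6)       # expected years until one new case in a population of 5e6
        print(f"IR = {ir_post:.2e} per person-year, T = {T:.2f} years")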

  2. A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction.

    PubMed

    Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R; Buenrostro-Mariscal, Raymundo

    2017-06-07

    There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E, using half-t priors on each standard deviation (SD) term to guarantee highly noninformative priors and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. Copyright © 2017 Montesinos-López et al.

  3. Generation and characterization of a perfect vortex beam with a large topological charge through a digital micromirror device.

    PubMed

    Chen, Yue; Fang, Zhao-Xiang; Ren, Yu-Xuan; Gong, Lei; Lu, Rong-De

    2015-09-20

    Optical vortices are associated with a spatial phase singularity. Such a beam with a vortex is valuable in optical microscopy, hyper-entanglement, and optical levitation. In these applications, vortex beams with a perfect circle shape and a large topological charge are highly desirable. But the generation of perfect vortices with high topological charges is challenging. We present a novel method to create perfect vortex beams with large topological charges using a digital micromirror device (DMD) through binary amplitude modulation and a narrow Gaussian approximation. The DMD with binary holograms encoding both the spatial amplitude and the phase could generate fast switchable, reconfigurable optical vortex beams with significantly high quality and fidelity. With either the binary Lee hologram or the superpixel binary encoding technique, we were able to generate the corresponding hologram with high fidelity and create a perfect vortex with topological charge as large as 90. The physical properties of the perfect vortex beam produced were characterized through measurements of propagation dynamics and the focusing fields. The measurements show good consistency with the theoretical simulation. The perfect vortex beam produced satisfies high-demand utilization in optical manipulation and control, momentum transfer, quantum computing, and biophotonics.
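
    A minimal sketch of a binary Lee-type grating encoding a vortex phase of charge l for a DMD; the grid size and grating period are hypothetical, and the paper's superpixel encoding and amplitude shaping are omitted:

        import numpy as np

        N, charge, period = 1024, 90, 8                    # DMD pixels, topological charge, period (px)
        y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
        phi = charge * np.arctan2(y, x)                    # vortex phase profile, exp(i*charge*theta)
        hologram = np.cos(2*np.pi*x/period + phi) > 0      # binary on/off mirror pattern
        # The first diffraction order of this grating carries the phase phi;
        # spatially filtering that order approximates the desired vortex beam.
        print(hologram.shape, hologram.mean())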

  4. Coherent perfect rotation

    NASA Astrophysics Data System (ADS)

    Crescimanno, Michael; Dawson, Nathan J.; Andrews, James H.

    2012-09-01

    Two classes of conservative, linear, optical rotary effects (optical activity and Faraday rotation) are distinguished by their behavior under time reversal. Faraday rotation, but not optical activity, is capable of coherent perfect rotation, by which we mean the complete transfer of counterpropagating coherent light fields into their orthogonal polarization. Unlike coherent perfect absorption, however, this process is explicitly energy conserving and reversible. Our study highlights the necessity of time-reversal-odd processes (not just absorption) and coherence in perfect mode conversion and thus informs the optimization of active multiport optical devices.

  5. Bayesian Learning and the Psychology of Rule Induction

    ERIC Educational Resources Information Center

    Endress, Ansgar D.

    2013-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to…

  6. Properties of the Bayesian Knowledge Tracing Model

    ERIC Educational Resources Information Center

    van de Sande, Brett

    2013-01-01

    Bayesian Knowledge Tracing is used very widely to model student learning. It comes in two different forms: The first form is the Bayesian Knowledge Tracing "hidden Markov model" which predicts the probability of correct application of a skill as a function of the number of previous opportunities to apply that skill and the model…

  7. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  8. Bayesian Analysis of Longitudinal Data Using Growth Curve Models

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Hamagami, Fumiaki; Wang, Lijuan Lijuan; Nesselroade, John R.; Grimm, Kevin J.

    2007-01-01

    Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information in estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data…

  9. Bayesian Asymmetric Regression as a Means to Estimate and Evaluate Oral Reading Fluency Slopes

    ERIC Educational Resources Information Center

    Solomon, Benjamin G.; Forsberg, Ole J.

    2017-01-01

    Bayesian techniques have become increasingly present in the social sciences, fueled by advances in computer speed and the development of user-friendly software. In this paper, we forward the use of Bayesian Asymmetric Regression (BAR) to monitor intervention responsiveness when using Curriculum-Based Measurement (CBM) to assess oral reading…

  10. A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…

  11. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    ERIC Educational Resources Information Center

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  12. Teaching Bayesian Statistics to Undergraduate Students through Debates

    ERIC Educational Resources Information Center

    Stewart, Sepideh; Stewart, Wayne

    2014-01-01

    This paper describes a lecturer's approach to teaching Bayesian statistics to students who were only exposed to the classical paradigm. The study shows how the lecturer extended himself by making use of ventriloquist dolls to grab hold of students' attention and embed important ideas in revealing the differences between the Bayesian and classical…

  13. Bayesian generalized linear mixed modeling of Tuberculosis using informative priors

    PubMed Central

    Woldegerima, Woldegebriel Assefa

    2017-01-01

    TB is rated as one of the world’s deadliest diseases, and South Africa ranks 9th among the 22 countries hardest hit by TB. Although much research has been carried out on this subject, this paper goes further by incorporating past knowledge into the model, using a Bayesian approach with an informative prior. Bayesian methods are increasingly popular in data analysis, but most applications of Bayesian inference are limited to non-informative priors, where there is no solid external information about the distribution of the parameter of interest. The main aim of this study is to profile people living with TB in South Africa. Identical regression models are fitted under the classical approach and under the Bayesian approach with both non-informative and informative priors, using South Africa General Household Survey (GHS) data for 2014. For the Bayesian model with an informative prior, GHS data for the years 2011 to 2013 are used to set up the priors for the 2014 model. PMID:28257437
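
    A minimal sketch of the informative-prior idea, assuming a logistic regression for TB status: coefficients estimated from the 2011-2013 waves centre a Gaussian prior, and the 2014 wave is fitted by posterior-mode (MAP) optimization. The covariates, prior values, and simulated data are illustrative, not GHS quantities.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])  # intercept + 2 covariates
y = rng.binomial(1, 0.1, size=500)                              # simulated TB indicator

prior_mean = np.array([-2.2, 0.3, -0.1])  # pooled 2011-2013 estimates (assumed)
prior_sd = np.array([0.5, 0.2, 0.2])      # tighter sds = more informative prior

def neg_log_post(beta):
    """Negative log posterior: Bernoulli log-likelihood + Gaussian prior."""
    eta = X @ beta
    log_lik = np.sum(y * eta - np.log1p(np.exp(eta)))
    log_prior = -0.5 * np.sum(((beta - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

fit = minimize(neg_log_post, x0=prior_mean, method="BFGS")
print("posterior-mode coefficients:", fit.x)
```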

  14. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found to be similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included.

  15. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    PubMed

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis can offer more accurate results. To address this gap, the unified theory of acceptance and use of technology is re-examined in this study, in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the discrepancy are discussed.

  16. Introduction to Bayesian statistical approaches to compositional analyses of transgenic crops 1. Model validation and setting the stage.

    PubMed

    Harrison, Jay M; Breeze, Matthew L; Harrigan, George G

    2011-08-01

    Statistical comparisons of compositional data generated on genetically modified (GM) crops and their near-isogenic conventional (non-GM) counterparts typically rely on classical significance testing. This manuscript presents an introduction to Bayesian methods for compositional analysis along with recommendations for model validation. The approach is illustrated using protein and fat data from two herbicide tolerant GM soybeans (MON87708 and MON87708×MON89788) and a conventional comparator grown in the US in 2008 and 2009. Guidelines recommended by the US Food and Drug Administration (FDA) in conducting Bayesian analyses of clinical studies on medical devices were followed. This study is the first Bayesian approach to GM and non-GM compositional comparisons. The evaluation presented here supports a conclusion that a Bayesian approach to analyzing compositional data can provide meaningful and interpretable results. We further describe the importance of method validation and approaches to model checking if Bayesian approaches to compositional data analysis are to be considered viable by scientists involved in GM research and regulation.

  17. Comparing spatially varying coefficient models: a case study examining violent crime rates and their relationships to alcohol outlets and illegal drug arrests

    NASA Astrophysics Data System (ADS)

    Wheeler, David C.; Waller, Lance A.

    2009-03-01

    In this paper, we compare and contrast a Bayesian spatially varying coefficient process (SVCP) model with a geographically weighted regression (GWR) model for the estimation of the potentially spatially varying regression effects of alcohol outlets and illegal drug activity on violent crime in Houston, Texas. In addition, we focus on the inherent coefficient shrinkage properties of the Bayesian SVCP model as a way to address the inflated coefficient variance that follows from collinearity in GWR models. We outline the advantages of the Bayesian model in terms of reducing inflated coefficient variance, enhanced model flexibility, and more formal measurement of model uncertainty for prediction. We find spatially varying effects for alcohol outlets and drug violations, but the amount of variation depends on the type of model used. For the Bayesian model, this variation is controllable through the amount of prior influence placed on the variance of the coefficients. For example, the spatial pattern of coefficients is similar for the GWR and Bayesian models when a relatively large prior variance is used in the Bayesian model.
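
    The GWR side of the comparison is easy to make concrete: each location gets its own weighted least-squares fit, with weights from a Gaussian kernel on distance, so the crime-covariate coefficients vary over space. The coordinates and data below are simulated, not the Houston data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))                    # tract centroids
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # [1, outlets, drug arrests]
y = X @ np.array([1.0, 0.5, 0.8]) + rng.normal(scale=0.5, size=n)  # violent crime rate

def gwr_coefs(i, bandwidth=2.0):
    """Local coefficients at location i via kernel-weighted least squares."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)         # local WLS estimate

betas = np.array([gwr_coefs(i) for i in range(n)])
print("range of local alcohol-outlet effects:",
      betas[:, 1].min(), betas[:, 1].max())
```

    The Bayesian SVCP model replaces these kernel weights with an explicit spatial prior on the coefficient surface, which is what supplies the shrinkage and variance control the authors highlight.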

  18. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In recent years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
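
    One simple way to realize the kind of update the paper describes is a conjugate Beta-Binomial step: treat the HEP assigned by the existing method as the mean of a Beta prior and update it with error counts from simulator runs. The prior strength and counts below are illustrative assumptions, not the paper's actual model or the Halden data.

```python
def update_hep(hep_prior, n_runs, n_errors, kappa=50.0):
    """Conjugate Beta-Binomial update of a human error probability."""
    alpha = hep_prior * kappa              # Beta prior with mean = assigned HEP
    beta = (1.0 - hep_prior) * kappa       # kappa sets how strongly we trust it
    alpha_post = alpha + n_errors          # errors observed in simulator runs
    beta_post = beta + (n_runs - n_errors)
    return alpha_post / (alpha_post + beta_post)   # posterior mean HEP

spar_h_hep = 0.01                          # HEP assigned by the existing method
print(update_hep(spar_h_hep, n_runs=40, n_errors=2))  # data pulls the estimate up
```

    Even a handful of simulator runs shifts the estimate while the prior keeps it anchored to the original method, which is the appeal of Bayesian updating with sparse data.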

  19. Testing students’ e-learning via Facebook through Bayesian structural equation modeling

    PubMed Central

    Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

    Learning is an intentional activity, with several factors affecting students’ intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and technology use in the context of e-learning via Facebook are re-examined in this study using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods’ results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated. PMID:28886019

  20. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to build a probabilistic forecasting model. We apply the framework to the Basel 2006 fluid-induced seismicity case study to show that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
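
    The core of the rate model admits a compact conjugate sketch: if event counts in each interval are Poisson with mean proportional to the injected volume, a Gamma prior on the proportionality parameter yields a closed-form posterior usable for short-term forecasts. The injection profile, counts, and prior below are illustrative, not the Basel data.

```python
import numpy as np

inj = np.array([0.5, 1.0, 2.0, 3.0, 2.5, 1.0, 0.2])   # injected volume per day
counts = np.array([1, 3, 7, 11, 9, 4, 1])             # induced events per day

# Gamma(a0, b0) prior on the rate multiplier (events per unit injected volume).
a0, b0 = 1.0, 1.0
a_post = a0 + counts.sum()       # conjugate Gamma-Poisson update:
b_post = b0 + inj.sum()          # add total events and total injected volume

print("posterior mean rate multiplier:", a_post / b_post)
# Forecast: expected events for a planned injection of 1.5 units tomorrow.
print("forecast events:", (a_post / b_post) * 1.5)
```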
