Sample records for nonlinear Markov processes

  1. Nonlinear Markov Control Processes and Games

    DTIC Science & Technology

    2012-11-15

    ...the analysis of a new class of stochastic games, nonlinear Markov games, as they arise as a (competitive) controlled version of nonlinear Markov... competitive interests) a nonlinear Markov game that we are investigating... corresponding stochastic game Γ+(T, h). In a slightly different setting one can assume that changes in a competitive control process occur as a...

  2. NonMarkov Ito Processes with 1-state memory

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes-type partial differential eqn. describes a martingale and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
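
    For reference (not part of the record itself), the Chapman-Kolmogorov condition discussed in this abstract can be written for the two-point transition density p_2 as below; as the abstract stresses, it constrains p_2 alone and does not by itself imply the Markov property.

    ```latex
    % Chapman-Kolmogorov condition on the two-point transition density p_2;
    % the Markov property is a strictly stronger statement about all
    % finite-dimensional densities, so this identity can hold for Ito
    % processes with 1-state memory in the drift.
    \[
      p_2(x_3, t_3 \mid x_1, t_1)
        = \int p_2(x_3, t_3 \mid x_2, t_2)\, p_2(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2,
      \qquad t_1 < t_2 < t_3 .
    \]
    ```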

  3. A novel framework for simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment, whereby the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km² experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, often observed by transport models in the form of long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of either the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), as well as wet and dry, dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
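
    The generative structure described above (a hidden finite-state Markov chain selecting the regime of a conditionally autoregressive observation) can be illustrated with a minimal simulation sketch. This is not the authors' code; the two-regime AR(1) parameter values below are assumed, illustrative ones.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Minimal sketch of a two-regime Markov-switching AR(1): a hidden Markov
    # chain picks the regime, and the observation is conditionally
    # autoregressive given the regime.  All parameter values are assumed.
    P = np.array([[0.95, 0.05],          # regime transition matrix
                  [0.10, 0.90]])
    phi   = np.array([0.3, 0.8])         # AR(1) coefficients per regime
    mu    = np.array([1.0, 4.0])         # regime means
    sigma = np.array([0.5, 1.5])         # regime noise standard deviations

    T = 500
    states = np.zeros(T, dtype=int)
    y = np.zeros(T)
    y[0] = mu[0]
    for t in range(1, T):
        states[t] = rng.choice(2, p=P[states[t - 1]])   # hidden Markov step
        k = states[t]
        y[t] = mu[k] + phi[k] * (y[t - 1] - mu[k]) + rng.normal(0.0, sigma[k])

    print(states[:20])
    print(np.round(y[:10], 2))
    ```

    In the paper the number of regimes, the autoregressive order and the parameters are not fixed as here but inferred within the Bayesian MCMC analysis.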

  4. Recombination Processes and Nonlinear Markov Chains.

    PubMed

    Pirogov, Sergey; Rybko, Alexander; Kalinina, Anastasia; Gelfand, Mikhail

    2016-09-01

    Bacteria are known to exchange genetic information by horizontal gene transfer. Since the frequency of homologous recombination depends on the similarity between the recombining segments, several studies examined whether this could lead to the emergence of subspecies. Most of them simulated fixed-size Wright-Fisher populations, in which the genetic drift should be taken into account. Here, we use nonlinear Markov processes to describe a bacterial population evolving under mutation and recombination. We consider a population structure as a probability measure on the space of genomes. This approach implies the infinite population size limit, and thus, the genetic drift is not assumed. We prove that under these conditions, the emergence of subspecies is impossible.
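
    The abstract's central object, a nonlinear Markov process whose state is a probability measure on genome space, can be sketched in a deliberately simplified two-locus setting. This toy ignores the similarity-dependence of recombination studied in the paper; it is only meant to show why recombination makes the one-step map quadratic (hence nonlinear) in the population measure, and the mutation and recombination probabilities are assumed values.

    ```python
    import numpy as np
    from itertools import product

    # Two-locus toy sketch (far simpler than the paper's genome-space model):
    # the state is a probability measure p over genomes, and recombination
    # makes the one-step map quadratic in p, which is what makes the process
    # a nonlinear Markov chain.  mu and r are assumed values.
    genomes = list(product([0, 1], repeat=2))        # 00, 01, 10, 11
    mu, r = 0.01, 0.3

    def step(p):
        # recombination: with probability r, each locus is drawn from an
        # independent parent sampled from p (product of marginals)
        pA = np.array([sum(p[i] for i, g in enumerate(genomes) if g[0] == a) for a in (0, 1)])
        pB = np.array([sum(p[i] for i, g in enumerate(genomes) if g[1] == b) for b in (0, 1)])
        q = np.array([(1 - r) * p[i] + r * pA[g[0]] * pB[g[1]] for i, g in enumerate(genomes)])
        # mutation: each locus flips independently with probability mu
        p_new = np.zeros(len(genomes))
        for i, g in enumerate(genomes):
            for j, h in enumerate(genomes):
                flips = sum(a != b for a, b in zip(g, h))
                p_new[j] += q[i] * (mu ** flips) * ((1 - mu) ** (len(g) - flips))
        return p_new

    p = np.array([0.7, 0.0, 0.0, 0.3])               # start from two clusters
    for _ in range(200):
        p = step(p)
    print(np.round(p, 3))    # drifts toward the uniform product measure
    ```

    Because the state is a measure rather than a finite sample, there is no genetic drift in this update, mirroring the infinite-population limit described above.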

  5. Open Markov Processes and Reaction Networks

    NASA Astrophysics Data System (ADS)

    Swistock Pollard, Blake Stephen

    We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
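
    A minimal numerical sketch of the central construction, using a small assumed rate matrix rather than anything from the thesis: clamping the boundary probabilities of a continuous-time Markov chain and solving for the interior steady state, which then carries non-zero probability currents through the boundary.

    ```python
    import numpy as np

    # Minimal sketch with an assumed 4-state rate matrix (not from the thesis):
    # states 0 and 3 are 'boundary' states whose probabilities are clamped by
    # external couplings; states 1 and 2 are interior.  H is the infinitesimal
    # stochastic matrix (off-diagonal entries >= 0, columns sum to zero), so
    # the closed process obeys dp/dt = H p.
    rates = {(0, 1): 1.0, (1, 0): 2.0, (1, 2): 1.5,
             (2, 1): 1.5, (2, 3): 2.0, (3, 2): 1.0}
    n = 4
    H = np.zeros((n, n))
    for (i, j), r in rates.items():       # probability flows i -> j at rate r
        H[j, i] += r
        H[i, i] -= r

    boundary, interior = [0, 3], [1, 2]
    p_B = np.array([0.3, 0.1])            # clamped boundary probabilities (assumed)

    # Steady state on the interior solves H_II p_I + H_IB p_B = 0.
    H_II = H[np.ix_(interior, interior)]
    H_IB = H[np.ix_(interior, boundary)]
    p_I = np.linalg.solve(H_II, -H_IB @ p_B)

    p = np.zeros(n)
    p[boundary], p[interior] = p_B, p_I
    # (H p) at the boundary is the drift the internal dynamics would impose
    # there; the external coupling must supply the opposite current to hold
    # p_B fixed.
    boundary_currents = (H @ p)[boundary]
    print("interior steady state:", p_I)
    print("boundary currents:", boundary_currents)
    ```

    The boundary probabilities and currents printed here are the kind of boundary data the black-box functor described in the abstract records.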

  6. Processing and Conversion of Algae to Bioethanol

    NASA Astrophysics Data System (ADS)

    Kampfe, Sara Katherine

  7. Net Surface Flux Budget Over Tropical Oceans Estimated from the Tropical Rainfall Measuring Mission (TRMM)

    NASA Astrophysics Data System (ADS)

    Fan, Tai-Fang

  8. Magneto - Optical Imaging of Superconducting MgB2 Thin Films

    NASA Astrophysics Data System (ADS)

    Hummert, Stephanie Maria

  9. Boron Carbide Filled Neutron Shielding Textile Polymers

    NASA Astrophysics Data System (ADS)

    Manzlak, Derrick Anthony

  10. Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Zagaris, George

  11. Polymeric Radiation Shielding for Applications in Space: Polyimide Synthesis and Modeling of Multi-Layered Polymeric Shields

    NASA Astrophysics Data System (ADS)

    Schiavone, Clinton Cleveland

  12. The Development of the CALIPSO LiDAR Simulator

    NASA Astrophysics Data System (ADS)

    Powell, Kathleen A.

  13. Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Peeke, Richard Scot

  14. Modeling of Critically-Stratified Gravity Flows: Application to the Eel River Continental Shelf, Northern California

    NASA Astrophysics Data System (ADS)

    Scully, Malcolm E.

  15. Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials

    NASA Astrophysics Data System (ADS)

    Bate, Norah G.

  16. Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding

    NASA Astrophysics Data System (ADS)

    Collins, Brittani May

  17. Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers

    NASA Astrophysics Data System (ADS)

    Petersen, Andreas

  18. SOS-based robust H∞ fuzzy dynamic output feedback control of nonlinear networked control systems.

    PubMed

    Chae, Seunghwan; Nguang, Sing Kiong

    2014-07-01

    In this paper, a methodology for designing a fuzzy dynamic output feedback controller for discrete-time nonlinear networked control systems is presented, where the nonlinear plant is modelled by a Takagi-Sugeno fuzzy model and the network-induced delays by a finite state Markov process. The transition probability matrix for the Markov process is allowed to be partially known, providing a more practical consideration of the real world. Furthermore, the fuzzy controller's membership functions and premise variables are not assumed to be the same as the plant's membership functions and premise variables; that is, the proposed approach can handle the case when the premise variables of the plant are not measurable or are delayed. The membership functions of the plant and the controller are approximated as polynomial functions, then incorporated into the controller design. Sufficient conditions for the existence of the controller are derived in terms of sum-of-squares inequalities, which are then solved by YALMIP. Finally, a numerical example is used to demonstrate the validity of the proposed methodology.

  19. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583
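
    A minimal sketch of the forward model referred to above, the stochastic Lotka-Volterra Markov jump process, simulated with the Gillespie algorithm; the rate constants and initial counts are assumed illustrative values, and this is not the paper's inference code.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Sketch of the forward model only (not the paper's inference code): the
    # stochastic Lotka-Volterra Markov jump process simulated with the
    # Gillespie algorithm.  Rate constants c and initial counts are assumed.
    def gillespie_lv(x0, c, t_end, max_events=200_000):
        x = np.array(x0, dtype=float)            # [prey, predator]
        t, path = 0.0, [(0.0, x.copy())]
        S = np.array([[1, -1,  0],               # prey birth, predation, predator death
                      [0,  1, -1]])
        while t < t_end and len(path) < max_events:
            h = np.array([c[0] * x[0], c[1] * x[0] * x[1], c[2] * x[1]])  # hazards
            h0 = h.sum()
            if h0 == 0:                          # both species extinct
                break
            t += rng.exponential(1.0 / h0)       # time to next reaction
            j = rng.choice(3, p=h / h0)          # which reaction fires
            x += S[:, j]
            path.append((t, x.copy()))
        return path

    path = gillespie_lv([50, 100], c=[1.0, 0.005, 0.6], t_end=10.0)
    print(len(path), "events; final state:", path[-1][1])
    ```

    In particle MCMC, a forward simulator of this kind drives a bootstrap particle filter whose unbiased likelihood estimate is used inside a Metropolis-Hastings update of the rate constants.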

  20. High-Performance Nanocomposites Designed for Radiation Shielding in Space and an Application of GIS for Analyzing Nanopowder Dispersion in Polymer Matrixes

    NASA Astrophysics Data System (ADS)

    Auslander, Joseph Simcha

  21. Time-Resolved Magneto-Optical Imaging of Superconducting YBCO Thin Films in the High-Frequency AC Current Regime

    NASA Astrophysics Data System (ADS)

    Frey, Alexander

  22. Use of Remote Sensing to Identify Essential Habitat for Aeschynomene virginica (L.) BSP, a Threatened Tidal Freshwater Wetland Plant

    NASA Astrophysics Data System (ADS)

    Mountz, Elizabeth M.

  23. Silver-Polyimide Nanocomposite Films: Single-Stage Synthesis and Analysis of Metalized Partially-Fluorinated Polyimide BTDA/4-BDAF Prepared from Silver(I) Complexes

    NASA Astrophysics Data System (ADS)

    Abelard, Joshua Erold Robert

  24. Multifunctional Polymer Synthesis and Incorporation of Gadolinium Compounds and Modified Tungsten Nanoparticles for Improvement of Radiation Shielding for use in Outer Space

    NASA Astrophysics Data System (ADS)

    Harbert, Emily Grace

  25. Nonlinear Stochastic Markov Processes and Modeling Uncertainty in Populations

    DTIC Science & Technology

    2011-07-06

    ...ubiquitous in mathematics and physics (e.g., particle transport, filtering), biology (population models), and finance (e.g., Black-Scholes equations), among other...

  26. Combined state and parameter identification of nonlinear structural dynamical systems based on Rao-Blackwellization and Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Abhinav, S.; Manohar, C. S.

    2018-03-01

    The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers in situations involving large amounts of dynamic measurement data and/or low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: first, in the state estimation problem within the particle filtering step, and, second, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.

  27. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and the solution to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428

  28. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and the solution to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.
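
    A small finite-state illustration of the Perron-Frobenius eigenvalue problem mentioned in both versions of this abstract, with an assumed two-state Markov environment and illustrative growth rates, not the paper's specification.

    ```python
    import numpy as np

    # Finite-state illustration (assumed numbers, not the paper's model): for
    # a Markov environment with transition matrix P and state-dependent log
    # consumption growth g, the Perron-Frobenius problem is to find lambda
    # and a positive e with
    #   sum_j P[i, j] * exp(g[j]) * e[j] = lambda * e[i]   for all i.
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    g = np.array([0.02, -0.01])              # illustrative growth rates per state

    M = P * np.exp(g)[None, :]               # positive operator acting on e
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)
    lam = eigvals[k].real                    # principal (Perron) eigenvalue
    e = np.abs(eigvecs[:, k].real)           # Perron eigenvector, chosen positive
    e /= e.sum()
    print("principal eigenvalue:", lam)
    print("positive eigenfunction:", e)
    ```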

  29. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.

  30. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow the regressors to be generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  31. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  12. Filtering Using Nonlinear Expectations

    DTIC Science & Technology

    2016-04-16

    gives a solution to estimating a Markov chain observed in Gaussian noise when the variance of the noise is unknown. This paper is accepted for the IEEE...Optimization, an A* journal. A short third paper discusses how to estimate a change in the transition dynamics of a noisily observed Markov chain...The change point time is hidden in a hidden Markov chain, so a second level of discovery is involved. This paper is accepted for Communications in

  13. The cardiorespiratory interaction: a nonlinear stochastic model and its synchronization properties

    NASA Astrophysics Data System (ADS)

    Bahraminasab, A.; Kenwright, D.; Stefanovska, A.; McClintock, P. V. E.

    2007-06-01

    We address the problem of interactions between the phases of the cardiac and respiratory oscillatory components. The coupling between these two quantities is investigated experimentally using the theory of stochastic Markovian processes. The so-called Markov analysis allows us to derive nonlinear stochastic equations for the reconstruction of the cardiorespiratory signals. The properties of these equations provide interesting new insights into the strength and direction of coupling, which enable us to divide the coupling into two parts: deterministic and stochastic. It is shown that the synchronization behavior of the reconstructed signals is statistically identical to that of the original ones.
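
    A minimal sketch of the conditional-moment step behind such a Markov analysis, assuming evenly sampled data and using a toy Ornstein-Uhlenbeck series rather than cardiorespiratory recordings; the function name, binning choices and toy parameters are illustrative only.

```python
import numpy as np

def kramers_moyal(x, dt, nbins=30, min_count=10):
    """Estimate the drift D1(x) and diffusion D2(x) of a Markov process from a
    sampled time series via conditional moments of the increments
    (Kramers-Moyal coefficients)."""
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.digitize(x[:-1], edges) - 1
    drift = np.full(nbins, np.nan)
    diffusion = np.full(nbins, np.nan)
    for b in range(nbins):
        m = idx == b
        if m.sum() >= min_count:
            drift[b] = dx[m].mean() / dt
            diffusion[b] = (dx[m] ** 2).mean() / (2.0 * dt)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, drift, diffusion

# toy Ornstein-Uhlenbeck series to exercise the estimator (drift should be ~ -x)
rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
x = np.empty(n); x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(dt) * rng.normal()
centers, D1, D2 = kramers_moyal(x, dt)
```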

  14. Nonlinear fluctuations-induced rate equations for linear birth-death processes

    NASA Astrophysics Data System (ADS)

    Honkonen, J.

    2008-05-01

    The Fock-space approach to the solution of master equations for one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and an occupation-number-independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov’s ecological model and Lanchester’s model of modern warfare.

  15. Metastability of Queuing Networks with Mobile Servers

    NASA Astrophysics Data System (ADS)

    Baccelli, F.; Rybko, A.; Shlosman, S.; Vladimirov, A.

    2018-04-01

    We study symmetric queuing networks with moving servers and FIFO service discipline. The mean-field limit dynamics demonstrates unexpected behavior which we attribute to the metastability phenomenon. Large enough finite symmetric networks on regular graphs are proved to be transient for arbitrarily small inflow rates. However, the limiting non-linear Markov process possesses at least two stationary solutions. The proof of transience is based on martingale techniques.

  16. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes, specifically the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission, and human immunodeficiency virus infection (the latter two with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general case of nonlinear state-dependent intensity rates, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
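
    The idea of negatively correlated trajectory pairs can be sketched with antithetic inverse-CDF Poisson draws; the pairing construction below is an assumption made for illustration and is simpler than the algorithm analyzed in the paper, and the birth-death rates are hypothetical.

```python
import numpy as np
from scipy.stats import poisson

def tau_leap_pair(x0, birth, death, tau, nsteps, rng):
    """Fixed-step tau-leaping of a birth-death chain, run as a pair of
    trajectories driven by antithetic inverse-CDF Poisson draws (u and 1-u)
    so the pair is negatively correlated; the pair average is an unbiased
    estimator of the end state with reduced variance."""
    xa = xb = float(x0)
    for _ in range(nsteps):
        u_b, u_d = rng.uniform(1e-9, 1 - 1e-9, size=2)   # shared uniforms
        xa += poisson.ppf(u_b, birth(xa) * tau) - poisson.ppf(u_d, death(xa) * tau)
        xb += poisson.ppf(1 - u_b, birth(xb) * tau) - poisson.ppf(1 - u_d, death(xb) * tau)
        xa, xb = max(xa, 0.0), max(xb, 0.0)              # keep populations non-negative
    return 0.5 * (xa + xb)

# hypothetical birth-death rates: constant birth, linear death
rng = np.random.default_rng(2)
birth = lambda x: 10.0
death = lambda x: 0.1 * x
estimate = np.mean([tau_leap_pair(50, birth, death, tau=0.05, nsteps=200, rng=rng)
                    for _ in range(500)])
```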

  17. Dynamical processes and epidemic threshold on nonlinear coupled multiplex networks

    NASA Astrophysics Data System (ADS)

    Gao, Chao; Tang, Shaoting; Li, Weihua; Yang, Yaqian; Zheng, Zhiming

    2018-04-01

    Recently, the interplay between epidemic spreading and awareness diffusion has aroused the interest of many researchers, who have studied models mainly based on linear coupling relations between information and epidemic layers. However, in real-world networks the relation between two layers may be closely correlated with the property of individual nodes and exhibits nonlinear dynamical features. Here we propose a nonlinear coupled information-epidemic model (I-E model) and present a comprehensive analysis in a more generalized scenario where the upload rate differs from node to node, deletion rate varies between susceptible and infected states, and infection rate changes between unaware and aware states. In particular, we develop a theoretical framework of the intra- and inter-layer dynamical processes with a microscopic Markov chain approach (MMCA), and derive an analytic epidemic threshold. Our results suggest that the change of upload and deletion rate has little effect on the diffusion dynamics in the epidemic layer.

  18. Green functions and Langevin equations for nonlinear diffusion equations: A comment on ‘Markov processes, Hurst exponents, and nonlinear diffusion equations’ by Bassler et al.

    NASA Astrophysics Data System (ADS)

    Frank, T. D.

    2008-02-01

    We discuss two central claims made in the study by Bassler et al. [K.E. Bassler, G.H. Gunaratne, J.L. McCauley, Physica A 369 (2006) 343]. Bassler et al. claimed that Green functions and Langevin equations cannot be defined for nonlinear diffusion equations. In addition, they claimed that nonlinear diffusion equations are linear partial differential equations disguised as nonlinear ones. We review bottom-up and top-down approaches that have been used in the literature to derive Green functions for nonlinear diffusion equations and, in doing so, show that the first claim needs to be revised. We show that the second claim as well needs to be revised. To this end, we point out similarities and differences between non-autonomous linear Fokker-Planck equations and autonomous nonlinear Fokker-Planck equations. In this context, we raise the question whether Bassler et al.’s approach to financial markets is physically plausible because it necessitates the introduction of external traders and causes. Such external entities can easily be eliminated when taking self-organization principles and concepts of nonextensive thermostatistics into account and modeling financial processes by means of nonlinear Fokker-Planck equations.

  19. Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995

    NASA Technical Reports Server (NTRS)

    Blerman, Gregory S.

    1995-01-01

    Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve fitting is used to find the parameters of a first-order Markov process that models the average errors from the collected data. The results show that a first-order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 square meters.
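
    A first-order Gauss-Markov (exponentially correlated) error process of the kind fitted here can be simulated in a few lines; the discretization below is a standard sketch, with the time constant and variance from the abstract used only as illustrative inputs.

```python
import numpy as np

def gauss_markov(n, dt, tau, sigma2, rng):
    """Simulate a first-order Gauss-Markov (exponentially correlated) process,
    a common model for slowly varying DGPS-type errors.
    tau : correlation time constant [s];  sigma2 : process variance [m^2]."""
    phi = np.exp(-dt / tau)                     # one-step correlation
    q_std = np.sqrt(sigma2 * (1.0 - phi**2))    # driving-noise standard deviation
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, q_std)
    return x

# values taken from the abstract purely as illustrative inputs
err = gauss_markov(n=86_400, dt=1.0, tau=3847.1, sigma2=3.73,
                   rng=np.random.default_rng(3))
```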

  20. An accurate nonlinear stochastic model for MEMS-based inertial sensor error with wavelet networks

    NASA Astrophysics Data System (ADS)

    El-Diasty, Mohammed; El-Rabbany, Ahmed; Pagiatakis, Spiros

    2007-12-01

    The integration of the Global Positioning System (GPS) with an Inertial Navigation System (INS) has been widely used in many applications for positioning and orientation purposes. Traditionally, random walk (RW), Gauss-Markov (GM), and autoregressive (AR) processes have been used to develop the stochastic model in classical Kalman filters. The main disadvantage of the classical Kalman filter is the potentially unstable linearization of the nonlinear dynamic system. Consequently, a nonlinear stochastic model is not optimal in derivative-based filters due to the expected linearization error. With a derivativeless filter such as the unscented Kalman filter or the divided difference filter, the filtering of a complicated, highly nonlinear dynamic system is possible without linearization error. This paper develops a novel nonlinear stochastic model for inertial sensor error using a wavelet network (WN). A wavelet network is a highly nonlinear model, which has recently been introduced as a powerful tool for modelling and prediction. Static and kinematic data sets are collected using a MEMS-based IMU (DQI-100) to develop the stochastic model in the static mode and then implement it in the kinematic mode. The derivativeless filtering method using the GM, AR, and proposed WN-based processes is used to validate the new model. It is shown that the first-order WN-based nonlinear stochastic model gives superior positioning results to the first-order GM and AR models, with an overall improvement of 30% when 30- and 60-second GPS outages are introduced.

  1. Enhancing Data Assimilation by Evolutionary Particle Filter and Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Moradkhani, H.; Abbaszadeh, P.; Yan, H.

    2016-12-01

    Particle Filters (PFs) have received increasing attention from researchers in different disciplines of the hydro-geosciences as an effective method to improve model predictions in nonlinear and non-Gaussian dynamical systems. The implication of dual state and parameter estimation by means of data assimilation in hydrology and geoscience has evolved since 2005 from SIR-PF to PF-MCMC, and now to the most effective and robust framework through an evolutionary PF approach based on a Genetic Algorithm (GA) and Markov Chain Monte Carlo (MCMC), the so-called EPF-MCMC. In this framework, the posterior distribution undergoes an evolutionary process to update an ensemble of prior states so that they more closely resemble the realistic posterior probability distribution. The premise of this approach is that the particles move to optimal positions using the GA optimization coupled with MCMC, increasing the number of effective particles; hence particle degeneracy is avoided while particle diversity is improved. The proposed algorithm is applied to a conceptual and highly nonlinear hydrologic model, and the effectiveness, robustness and reliability of the method in jointly estimating the states and parameters, and also in reducing the uncertainty, are demonstrated for a few river basins across the United States.
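
    As a point of reference for the SIR-to-EPF-MCMC progression described above, here is a minimal bootstrap (SIR) particle filter on a toy scalar model; the EPF-MCMC machinery (GA moves, MCMC rejuvenation) is not implemented, and the model functions and noise levels are assumptions.

```python
import numpy as np

def sir_particle_filter(obs, n_part, f, h, q_std, r_std, rng):
    """Minimal bootstrap (SIR) particle filter for the scalar model
    x_t = f(x_{t-1}) + q_t,  y_t = h(x_t) + r_t, with Gaussian noises."""
    x = rng.normal(0.0, 1.0, n_part)                  # initial particle cloud
    means = []
    for y in obs:
        x = f(x) + rng.normal(0.0, q_std, n_part)     # propagate particles
        w = np.exp(-0.5 * ((y - h(x)) / r_std) ** 2)  # Gaussian likelihood weights
        w = (w + 1e-300) / (w + 1e-300).sum()
        means.append(np.sum(w * x))                   # posterior-mean estimate
        x = x[rng.choice(n_part, n_part, p=w)]        # multinomial resampling
    return np.array(means)

# toy nonlinear state-space model to exercise the filter (assumed functions)
rng = np.random.default_rng(4)
f = lambda x: 0.8 * x + 2.0 * np.sin(x)
h = lambda x: x
truth = [0.0]
for _ in range(99):
    truth.append(f(truth[-1]) + rng.normal(0.0, 0.5))
obs = np.array(truth) + rng.normal(0.0, 0.5, 100)
estimate = sir_particle_filter(obs, n_part=1000, f=f, h=h, q_std=0.5, r_std=0.5, rng=rng)
```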

  2. Multifractal analysis of time series generated by discrete Ito equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele

    2015-06-15

    In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
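
    A discrete Ito equation of the kind studied here can be generated directly; the drift and multiplicative-noise functions below are hypothetical choices, not those used in the paper.

```python
import numpy as np

def discrete_ito(n, a, b, rng, x0=0.0):
    """Generate a time series from a discrete Ito equation
    x_{i+1} = x_i + a(x_i) + b(x_i) * eps_i, with eps_i standard Gaussian.
    The interplay of the drift a and the multiplicative force b is what
    produces the hidden nonlinear correlations discussed above."""
    x = np.empty(n)
    x[0] = x0
    eps = rng.normal(size=n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] + a(x[i]) + b(x[i]) * eps[i]
    return x

# hypothetical drift and multiplicative noise amplitude
series = discrete_ito(100_000,
                      a=lambda x: -0.05 * x,
                      b=lambda x: 0.1 * np.sqrt(1.0 + x**2),
                      rng=np.random.default_rng(5))
```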

  3. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  4. A baker's dozen of new particle flows for nonlinear filters, Bayesian decisions and transport

    NASA Astrophysics Data System (ADS)

    Daum, Fred; Huang, Jim

    2015-05-01

    We describe a baker's dozen of new particle flows to compute Bayes' rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.

  5. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.

  6. Modeling and Properties of Nonlinear Stochastic Dynamical System of Continuous Culture

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Feng, Enmin; Ye, Jianxiong; Xiu, Zhilong

    The stochastic counterpart to the deterministic description of continuous fermentation by ordinary differential equations is investigated for the process of glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae. We briefly discuss the continuous fermentation process driven by three-dimensional Brownian motion with Lipschitz coefficients, which is suitable for the actual fermentation. Subsequently, we study the existence and uniqueness of solutions for the stochastic system, as well as the boundedness of the second-order moment and the Markov property of the solution. Finally, stochastic simulation is carried out using the Euler-Maruyama method.
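
    A generic Euler-Maruyama sketch of such a stochastic simulation is shown below; the three-state drift and diffusion functions are placeholders, not the actual glycerol/1,3-propanediol kinetics.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, nsteps, rng):
    """Euler-Maruyama integration of dX = drift(X) dt + diffusion(X) dW for a
    vector state X, as a generic sketch of the stochastic simulation step."""
    x = np.empty((nsteps + 1, len(x0)))
    x[0] = x0
    for k in range(nsteps):
        dw = rng.normal(0.0, np.sqrt(dt), len(x0))   # Brownian increments
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# placeholder 3-state system (not the actual fermentation kinetics)
drift = lambda x: np.array([0.2 * x[0] * (1.0 - x[0]),
                            -0.1 * x[1],
                            0.05 * x[0] - 0.02 * x[2]])
diffusion = lambda x: 0.02 * np.abs(x)
path = euler_maruyama(np.array([0.1, 1.0, 0.0]), drift, diffusion,
                      dt=1e-3, nsteps=50_000, rng=np.random.default_rng(6))
```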

  7. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  8. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition ranges decrease, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
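
    The MCMC ingredient can be illustrated with a plain random-walk Metropolis sampler on a toy two-parameter posterior; the actual inversion uses a Cole-Cole forward model and a more elaborate sampler, so this is only a sketch of the accept/reject step, with all names and numbers invented.

```python
import numpy as np

def metropolis(log_post, theta0, step, n_iter, rng):
    """Random-walk Metropolis sampler: the basic MCMC building block behind a
    1-D probabilistic inversion."""
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.normal(size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random() + 1e-300) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# toy 2-parameter posterior (correlated Gaussian) standing in for the
# Cole-Cole parameter posterior
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_post = lambda t: -0.5 * t @ cov_inv @ t
samples = metropolis(log_post, [0.0, 0.0], step=0.5, n_iter=20_000,
                     rng=np.random.default_rng(7))
```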

  9. Enhancing hydrologic data assimilation by evolutionary Particle Filter and Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Peyman; Moradkhani, Hamid; Yan, Hongxiang

    2018-01-01

    Particle Filters (PFs) have received increasing attention from researchers in different disciplines, including the hydro-geosciences, as an effective tool to improve model predictions in nonlinear and non-Gaussian dynamical systems. The implication of dual state and parameter estimation using the PFs in hydrology has evolved since 2005 from the PF-SIR (sampling importance resampling) to PF-MCMC (Markov Chain Monte Carlo), and now to the most effective and robust framework through an evolutionary PF approach based on a Genetic Algorithm (GA) and MCMC, the so-called EPFM. In this framework, the prior distribution undergoes an evolutionary process based on the designed mutation and crossover operators of the GA. The merit of this approach is that the particles move to an appropriate position by using the GA optimization, and then the number of effective particles is increased by means of MCMC, whereby particle degeneracy is avoided and particle diversity is improved. In this study, the usefulness and effectiveness of the proposed EPFM are investigated by applying the technique to a conceptual and highly nonlinear hydrologic model over four river basins located in different climate and geographical regions of the United States. Both synthetic and real case studies demonstrate that the EPFM improves both the state and parameter estimation more effectively and reliably as compared with the PF-MCMC.

  10. A compositional framework for Markov processes

    NASA Astrophysics Data System (ADS)

    Baez, John C.; Fong, Brendan; Pollard, Blake S.

    2016-03-01

    We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.

  11. Comparison of RF spectrum prediction methods for dynamic spectrum access

    NASA Astrophysics Data System (ADS)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
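
    A minimal Markov-chain baseline for channel-state prediction, of the kind the HMM and neural-network methods are compared against, can be written as a transition-count estimator; the switch rates used to simulate occupancy below are assumptions, and the binary sequence stands in for energy-detection output.

```python
import numpy as np

def fit_two_state_markov(occ):
    """Estimate the 2x2 transition matrix of a binary (idle/busy) channel
    occupancy sequence by counting observed transitions."""
    counts = np.zeros((2, 2))
    for a, b in zip(occ[:-1], occ[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(P, current_state):
    """Predict the most likely next occupancy state."""
    return int(np.argmax(P[current_state]))

# simulated occupancy from an alternating renewal process with geometric
# holding times (a discrete stand-in for the traffic model in the abstract)
rng = np.random.default_rng(8)
occ, state = [], 0
for _ in range(5000):
    occ.append(state)
    if rng.random() < (0.1 if state == 0 else 0.3):   # hypothetical switch rates
        state = 1 - state
P = fit_two_state_markov(occ)
print(P, predict_next(P, current_state=occ[-1]))
```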

  12. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

  13. Open Markov Processes and Reaction Networks

    ERIC Educational Resources Information Center

    Swistock Pollard, Blake Stephen

    2017-01-01

    We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…

  14. Restoration of Static JPEG Images and RGB Video Frames by Means of Nonlinear Filtering in Conditions of Gaussian and Non-Gaussian Noise

    NASA Astrophysics Data System (ADS)

    Sokolov, R. I.; Abdullin, R. R.

    2017-11-01

    The use of nonlinear Markov process filtering makes it possible to restore both video stream frames and static photos at the preprocessing stage. The present paper reports the results of research comparing the filtering quality for these two image types using a special algorithm when Gaussian or non-Gaussian noise is acting. Examples of filter operation at different values of the signal-to-noise ratio are presented. A comparative analysis has been performed, and the kind of noise that is filtered best has been identified. It is shown that the quality of the developed algorithm is much better than that of an adaptive algorithm for RGB signal filtering given the same a priori information about the signal. The algorithm also has an advantage over the median filter when both fluctuation and pulse noise are filtered.

  15. Activation rates for nonlinear stochastic flows driven by non-Gaussian noise

    NASA Astrophysics Data System (ADS)

    van den Broeck, C.; Hänggi, P.

    1984-11-01

    Activation rates are calculated for stochastic bistable flows driven by asymmetric dichotomic Markov noise (a two-state Markov process). This noise contains as limits both a particular type of non-Gaussian white shot noise and white Gaussian noise. Apart from investigating the role of colored noise on the escape rates, one can thus also study the influence of the non-Gaussian nature of the noise on these rates. The rate for white shot noise differs in leading order (Arrhenius factor) from the corresponding rate for white Gaussian noise of equal strength. In evaluating the rates we demonstrate the advantage of using transport theory over a mean first-passage time approach for cases with generally non-white and non-Gaussian noise sources. For white shot noise with exponentially distributed weights we succeed in evaluating the mean first-passage time of the corresponding integro-differential master-equation dynamics. The rate is shown to coincide in the weak noise limit with the inverse mean first-passage time.
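
    A rough numerical sketch of the setup: a bistable flow driven by telegraph (dichotomic Markov) noise, with escape times estimated by direct simulation. The amplitude and switching rates are illustrative assumptions, and the paper's analytical rate theory is not reproduced here.

```python
import numpy as np

def escape_time(rng, A=0.5, k_up=1.0, k_dn=1.0, dt=0.01, x0=-1.0, t_max=500.0):
    """First-passage time from the left well to x = 0 for the bistable flow
    dx/dt = x - x**3 + A*I(t), driven by dichotomic Markov (telegraph) noise
    I(t) in {-1, +1} with switching rates k_up (-1 -> +1) and k_dn (+1 -> -1)."""
    x, I, t = x0, -1, 0.0
    while t < t_max:
        rate = k_dn if I > 0 else k_up
        if rng.random() < rate * dt:       # switch the telegraph noise
            I = -I
        x += (x - x**3 + A * I) * dt       # Euler step of the flow
        t += dt
        if x >= 0.0:
            return t
    return np.nan                          # no escape within t_max

rng = np.random.default_rng(9)
times = [escape_time(rng) for _ in range(100)]
rate_estimate = 1.0 / np.nanmean(times)    # crude activation-rate estimate
```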

  16. The explicit form of the rate function for semi-Markov processes and its contractions

    NASA Astrophysics Data System (ADS)

    Sughiyama, Yuki; Kobayashi, Testuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.

  17. Modeling of dialogue regimes of distance robot control

    NASA Astrophysics Data System (ADS)

    Larkin, E. V.; Privalov, A. N.

    2017-02-01

    The process of distance control of mobile robots is investigated. A Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the following subjects: a human operator, a dialogue computer, and an onboard computer, may be simulated using the theory of semi-Markov processes. From the semi-Markov process of the general form, a Markov process was obtained which includes only the states of transaction generation. It is shown that a real transaction flow is the result of «concurrency» in the states of the Markov process. An iteration procedure for evaluation of transaction flow parameters, which takes into account the effect of «concurrency», is proposed.

  18. Non-Linear Dynamics of Saturn’s Rings

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.

    2015-11-01

    Non-linear processes can explain why Saturn’s rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: 1. They are systems of granular material, where particle-to-particle collisions dominate; thus a kinetic, not a fluid, description is needed. We find that stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. 2. They are strongly forced by resonances, which drive a non-linear response, pushing the system across thresholds that lead to persistent states. Some of this non-linearity is captured in a simple Predator-Prey Model: Periodic forcing from the moon causes streamline crowding; This damps the relative velocity, and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like ‘straw’ that can explain the halo structure and spectroscopy: This requires energetic collisions (v ≈ 10m/sec, with throw distances about 200km, implying objects of scale R ≈ 20km). Transform to Duffing Eqn: With the coordinate transformation, z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn’s rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn’s rings a chaotic non-linear driven system?

  19. Stochastic thermodynamics across scales: Emergent inter-attractoral discrete Markov jump process and its underlying continuous diffusion

    NASA Astrophysics Data System (ADS)

    Santillán, Moisés; Qian, Hong

    2013-01-01

    We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g. free energy F, entropy S, entropy production ep, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.

  20. On a Result for Finite Markov Chains

    ERIC Educational Resources Information Center

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…

  1. Master equation for She-Leveque scaling and its classification in terms of other Markov models of developed turbulence

    NASA Astrophysics Data System (ADS)

    Nickelsen, Daniel

    2017-07-01

    The statistics of velocity increments in homogeneous and isotropic turbulence exhibit universal features in the limit of infinite Reynolds numbers. Since Kolmogorov’s scaling law of 1941, many turbulence models have aimed to capture these universal features; some are known to have an equivalent formulation in terms of Markov processes. We derive the Markov process equivalent to the particularly successful scaling law postulated by She and Leveque. The Markov process is a jump process for velocity increments u(r) in scale r in which the jumps occur randomly but with deterministic width in u. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we derive a diffusion process for u(r) using two properties of the Navier-Stokes equation. This diffusion process already includes Kolmogorov scaling, extended self-similarity and a class of random cascade models. The fluctuation theorem of this Markov process implies a ‘second law’ that puts a loose bound on the multipliers of the random cascade models. This bound explicitly allows for instances of inverse cascades, which are necessary to satisfy the fluctuation theorem. By adding a jump process to the diffusion process, we go beyond Kolmogorov scaling and formulate the most general scaling law for the class of Markov processes having both diffusion and jump parts. This Markov scaling law includes She-Leveque scaling and a scaling law derived by Yakhot.

  2. Appraisal of jump distributions in ensemble-based sampling algorithms

    NASA Astrophysics Data System (ADS)

    Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo

    2017-04-01

    Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Monte Carlo Markov Chain (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler, for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.

  3. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models for fault-handling processes.

  4. Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements.

    PubMed

    Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias

    2012-01-01

    Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
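
    The integrated-likelihood comparison across Markov orders can be sketched with conjugate Dirichlet priors on the transition rows; the prior choice and the toy four-symbol sequence are assumptions, not the microsaccade data or the exact prior used in the study.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_likelihood(seq, order, n_symbols, alpha=1.0):
    """Integrated likelihood of a symbol sequence under a Markov chain of the
    given order, with independent Dirichlet(alpha) priors on each row of the
    transition matrix (standard conjugate result; assumed prior)."""
    counts = {}
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])                  # conditioning context
        counts.setdefault(ctx, np.zeros(n_symbols))[seq[i]] += 1
    logml = 0.0
    for row in counts.values():
        logml += gammaln(n_symbols * alpha) - gammaln(n_symbols * alpha + row.sum())
        logml += np.sum(gammaln(alpha + row) - gammaln(alpha))
    return logml

# toy 4-symbol sequence generated by a first-order chain; compare orders 0-3
rng = np.random.default_rng(10)
P = rng.dirichlet(np.ones(4), size=4)
seq = [0]
for _ in range(2000):
    seq.append(int(rng.choice(4, p=P[seq[-1]])))
for r in range(4):
    print(r, log_marginal_likelihood(seq, r, n_symbols=4))
```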

  5. Modeling aircraft noise induced sleep disturbance

    NASA Astrophysics Data System (ADS)

    McGuire, Sarah M.

    One of the primary impacts of aircraft noise on a community is its disruption of sleep. Aircraft noise increases the time to fall asleep and the number of awakenings, and decreases the amount of rapid eye movement and slow wave sleep. Understanding these changes in sleep may be important, as they could increase the risk of developing next-day effects, such as sleepiness and reduced performance, and long-term health effects, such as cardiovascular disease. There are models that have been developed to predict the effect of aircraft noise on sleep. However, most of these models only predict the percentage of the population that is awakened. Markov and nonlinear dynamic models have been developed to predict an individual's sleep structure during the night. However, both of these models have limitations. The Markov model only accounts for whether an aircraft event occurred, not the noise level or other sound characteristics of the event that may affect the degree of disturbance. The nonlinear dynamic models were developed to describe normal sleep regulation and do not have a noise effects component. In addition, the nonlinear dynamic models have slow dynamics, which make it difficult to predict short duration awakenings, which occur both spontaneously and as a result of nighttime noise exposure. The purpose of this research was to examine these sleep structure models to determine how they could be altered to predict the effect of aircraft noise on sleep. Different approaches for adding a noise level dependence to the Markov model were explored, and the modified model was validated by comparing predictions to behavioral awakening data. In order to determine how to add faster dynamics to the nonlinear dynamic sleep models, it was necessary to have a more detailed sleep stage classification than was available from visual scoring of sleep data. An automatic sleep stage classification algorithm was developed which extracts different features of polysomnography data, including the occurrence of rapid eye movements, sleep spindles, and slow wave sleep. Using these features, an approach for classifying sleep stages every second during the night was developed. From observation of the results of the sleep stage classification, it was determined how to add faster dynamics to the nonlinear dynamic model. Slow and fast REM activity are modeled separately, and the activity in the gamma frequency band of the EEG signal is used to model both spontaneous and noise-induced awakenings. The nonlinear model predicts changes in sleep structure similar to those found by other researchers and reported in the sleep literature, and similar to those found in the survey data obtained. To compare sleep disturbance model predictions, flight operations data from US airports were obtained and sleep disturbance in communities was predicted for different operations scenarios using the modified Markov model, the nonlinear dynamic model, and other aircraft noise awakening models. Similarities and differences in model predictions were evaluated in order to determine if the use of the developed sleep structure model leads to improved predictions of the impact of nighttime noise on communities.
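
    As a sketch of the first modification discussed above, the toy two-state model below makes the sleep-to-wake transition probability depend on the noise level of an aircraft event through a logistic link; all parameter values and the epoch structure are hypothetical, not those of the thesis.

```python
import numpy as np

def simulate_night(noise_levels, p_wake_base=0.01, p_fall_asleep=0.25,
                   beta0=-6.0, beta1=0.08, rng=None):
    """Two-state (0 = asleep, 1 = awake) Markov chain over epochs of one night.
    The sleep-to-wake transition probability rises with the epoch's aircraft
    noise level through a logistic link; all values are hypothetical."""
    rng = rng or np.random.default_rng()
    state, states = 0, []
    for level in noise_levels:                       # one noise level per epoch, in dB
        if state == 0:                               # asleep: may awaken
            p_wake = p_wake_base + (1 - p_wake_base) / (1 + np.exp(-(beta0 + beta1 * level)))
            state = int(rng.random() < p_wake)
        else:                                        # awake: may fall back asleep
            state = int(rng.random() >= p_fall_asleep)
        states.append(state)
    return np.array(states)

# one night of 960 thirty-second epochs with four hypothetical 65 dB flyovers
noise = np.zeros(960)
noise[[120, 300, 600, 750]] = 65.0
n_awake_epochs = simulate_night(noise, rng=np.random.default_rng(11)).sum()
```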

  6. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

    Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The main results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.

  7. Analysing grouping of nucleotides in DNA sequences using lumped processes constructed from Markov chains.

    PubMed

    Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude

    2006-03-01

    The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge about the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods relying either on hypothesis testing or on penalized log-likelihood criteria are presented, as well as extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
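
    For the classical notion of lumpability that the paper contrasts with its probabilistic one-to-many formulation, a small check can be coded directly: aggregate the transition matrix over a candidate partition (e.g. purines vs. pyrimidines) and test whether the block row sums are constant within each block. The 4x4 matrix below is invented for illustration.

```python
import numpy as np

def lump(P, partition):
    """Aggregate a transition matrix P over a partition of the state space
    (list of lists of state indices) and report whether the chain is strongly
    lumpable, i.e. whether the probability of jumping into each block is
    constant within every block."""
    k = len(partition)
    Q = np.zeros((k, k))
    lumpable = True
    for j, block_j in enumerate(partition):
        into_j = P[:, block_j].sum(axis=1)     # P(jump into block_j | state)
        for i, block_i in enumerate(partition):
            vals = into_j[block_i]
            if not np.allclose(vals, vals[0]):
                lumpable = False
            Q[i, j] = vals.mean()
    return Q, lumpable

# purine/pyrimidine grouping of a made-up 4-state nucleotide chain (A, G, C, T)
P = np.array([[0.4, 0.3, 0.2, 0.1],
              [0.3, 0.4, 0.1, 0.2],
              [0.2, 0.2, 0.3, 0.3],
              [0.1, 0.3, 0.3, 0.3]])
Q, ok = lump(P, partition=[[0, 1], [2, 3]])
```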

  8. Markov-modulated Markov chains and the covarion process of molecular evolution.

    PubMed

    Galtier, N; Jean-Marie, A

    2004-01-01

    The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.

  9. Machine learning in sentiment reconstruction of the simulated stock market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.

  10. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as nonlinear elastic; a comparison is made with a linearized restoring force to see the effect of the force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied: namely, the fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system. The latter allows the computational time to be reduced significantly compared to the classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  11. Finite-sample and asymptotic sign-based tests for parameters of non-linear quantile regression with Markov noise

    NASA Astrophysics Data System (ADS)

    Sirenko, M. A.; Tarasenko, P. F.; Pushkarev, M. I.

    2017-01-01

    One of the most noticeable features of sign-based statistical procedures is the opportunity to build an exact test for simple hypothesis testing of parameters in a regression model. In this article, we extend the sign-based approach to the nonlinear case with dependent noise. The examined model is a multi-quantile regression, which makes it possible to test hypotheses not only about the regression parameters, but about the noise parameters as well.

  12. A comparison between MS-VECM and MS-VECMX on economic time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Wai; Ismail, Mohd Tahir; Sek, Siok-Kun

    2014-07-01

    Multivariate Markov switching models are able to provide useful information for the study of structural change data, since the regime switching model can analyze time-varying data and capture the mean and variance of the dependence structure in the series. This paper investigates the effects of the oil price and the gold price on the stock market returns of Malaysia, Singapore, Thailand and Indonesia. Two forms of multivariate Markov switching models are used, namely the mean-adjusted heteroskedastic Markov Switching Vector Error Correction Model (MSMH-VECM) and the mean-adjusted heteroskedastic Markov Switching Vector Error Correction Model with an exogenous variable (MSMH-VECMX). The reason for using these two models is to capture the transition probabilities of the data, since real financial time series data always exhibit nonlinear properties such as regime switching, cointegrating relations, and jumps or breaks over time. A comparison between these two models indicates that the MSMH-VECM model is able to fit the time series data better than the MSMH-VECMX model. In addition, it was found that the oil price and the gold price affected the stock market changes in the four selected countries.

  13. Derivation of Markov processes that violate detailed balance

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, H.

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of processes like the piecewise-deterministic Markov process, the virtual waiting time process and the first entrance decomposition (taboo probability).

  15. Quantum decision-maker theory and simulation

    NASA Astrophysics Data System (ADS)

    Zak, Michail; Meyers, Ronald E.; Deacon, Keith S.

    2000-07-01

    A quantum device simulating the human decision making process is introduced. It consists of quantum recurrent nets generating stochastic processes which represent the motor dynamics, and of classical neural nets describing the evolution of probabilities of these processes which represent the mental dynamics. The autonomy of the decision making process is achieved by a feedback from the mental to motor dynamics which changes the stochastic matrix based upon the probability distribution. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. As a result, the coupled motor-mental dynamics is described by a nonlinear version of Markov chains which can decrease entropy without an external source of information. Applications to common sense based decisions as well as to evolutionary games are discussed. An example exhibiting self-organization is computed using quantum computer simulation. Force on force and mutual aircraft engagements using the quantum decision maker dynamics are considered.

  16. Non-Linear Dynamics of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.

    2015-12-01

    Non-linear processes can explain why Saturn's rings are so active and dynamic. Some of this non-linearity is captured in a simple Predator-Prey Model: Periodic forcing from the moon causes streamline crowding; This damps the relative velocity, and allows aggregates to grow. About a quarter phase later, the aggregates stir the system to higher relative velocity and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average: 2-10x is possible. Summary of Halo Results: A predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo structure and spectroscopy: Cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; Surrounding particles diffuse back too slowly to erase the effect: this gives the halo morphology; This requires energetic collisions (v ≈ 10m/sec, with throw distances about 200km, implying objects of scale R ≈ 20km); We propose 'straw', as observed by Cassini cameras. Transform to Duffing Eqn: With the coordinate transformation, z = M^(2/3), the Predator-Prey equations can be combined to form a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: Moon-triggered clumping at perturbed regions in Saturn's rings creates both high velocity dispersion and large aggregates at these distances, explaining both small and large particles observed there. This confirms the triple architecture of ring particles: a broad size distribution of particles; these aggregate into temporary rubble piles, which are coated by a regolith of dust. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating the Markov chain as an asymmetric random walk with reflecting boundaries allows us to determine the power law index from results of numerical simulations in the tidal environment surrounding Saturn. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: Are Saturn's rings a chaotic non-linear driven system?

  17. An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes

    ERIC Educational Resources Information Center

    Kaplan, David

    2008-01-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…

  18. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near-exact EE density approximation.

  19. Modeling Battery Behavior on Sensory Operations for Context-Aware Smartphone Sensing

    PubMed Central

    Yurur, Ozgur; Liu, Chi Harold; Moreno, Wilfrido

    2015-01-01

    Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile device-based battery modeling, which adopts the kinetic battery model (KiBaM), under the scope of battery non-linearities with respect to variant loads. Second, this paper models the energy consumption behavior of accelerometers analytically and then provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking with sensory operations and their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations. Furthermore, the total energy cost by each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed for the evolution of the reward process, to present the linkage between different usage patterns of the accelerometer sensor in a smartphone application and the battery behavior. By doing this, this paper aims at achieving high efficiency in the power consumption caused by sensory operations, while maintaining the accuracy of smartphone applications based on sensor usage. More importantly, this study suggests that modeling the battery non-linearities, together with investigating the effects of different sensory usage patterns on power consumption and battery discharge, may lead to optimal energy reduction strategies that extend the battery lifetime and support continual improvement in context-aware mobile services. PMID:26016916

  20. Modeling battery behavior on sensory operations for context-aware smartphone sensing.

    PubMed

    Yurur, Ozgur; Liu, Chi Harold; Moreno, Wilfrido

    2015-05-26

    Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile device-based battery modeling, which adopts the kinetic battery model (KiBaM), under the scope of battery non-linearities with respect to variant loads. Second, this paper models the energy consumption behavior of accelerometers analytically and then provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking with sensory operations and their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations. Furthermore, the total energy cost by each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed for the evolution of the reward process, to present the linkage between different usage patterns of the accelerometer sensor in a smartphone application and the battery behavior. By doing this, this paper aims at achieving high efficiency in the power consumption caused by sensory operations, while maintaining the accuracy of smartphone applications based on sensor usage. More importantly, this study suggests that modeling the battery non-linearities, together with investigating the effects of different sensory usage patterns on power consumption and battery discharge, may lead to optimal energy reduction strategies that extend the battery lifetime and support continual improvement in context-aware mobile services.

  1. A Bayesian approach to identifying structural nonlinearity using free-decay response: Application to damage detection in composites

    USGS Publications Warehouse

    Nichols, J.M.; Link, W.A.; Murphy, K.D.; Olson, C.C.

    2010-01-01

    This work discusses a Bayesian approach to approximating the distribution of parameters governing nonlinear structural systems. Specifically, we use a Markov Chain Monte Carlo method for sampling the posterior parameter distributions, thus producing both point and interval estimates for parameters. The method is first used to identify both linear and nonlinear parameters in a multiple degree-of-freedom structural system using free-decay vibrations. The approach is then applied to the problem of identifying the location, size, and depth of delamination in a model composite beam. The influence of additive Gaussian noise on the response data is explored with respect to the quality of the resulting parameter estimates.
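    A minimal sketch of the kind of Markov Chain Monte Carlo parameter estimation described above, assuming a simple single-degree-of-freedom free-decay model, a Gaussian likelihood with known noise level, and flat priors; none of these choices are taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic free-decay data from a single degree-of-freedom oscillator (illustrative model)
    def free_decay(t, omega, zeta, amp=1.0):
        omega_d = omega * np.sqrt(1.0 - zeta**2)
        return amp * np.exp(-zeta * omega * t) * np.cos(omega_d * t)

    t = np.linspace(0.0, 5.0, 400)
    sigma = 0.02                                        # known noise level, for simplicity
    y = free_decay(t, omega=10.0, zeta=0.03) + sigma * rng.standard_normal(t.size)

    def log_post(theta):
        omega, zeta = theta
        if not (0.0 < omega < 50.0 and 0.0 < zeta < 0.5):    # flat prior on a bounded box
            return -np.inf
        resid = y - free_decay(t, omega, zeta)
        return -0.5 * np.sum(resid**2) / sigma**2            # Gaussian likelihood

    # Random-walk Metropolis-Hastings sampling of the posterior
    theta = np.array([9.5, 0.05])
    lp = log_post(theta)
    samples = []
    for _ in range(20_000):
        prop = theta + rng.normal(scale=[0.05, 0.005])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    samples = np.array(samples[5_000:])                      # discard burn-in
    print("posterior means (omega, zeta):", samples.mean(axis=0).round(4))
    print("95% interval for omega:", np.percentile(samples[:, 0], [2.5, 97.5]).round(3))
    ```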

  2. Poissonian steady states: from stationary densities to stationary intensities.

    PubMed

    Eliazar, Iddo

    2012-10-01

    Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.

  3. Poissonian steady states: From stationary densities to stationary intensities

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2012-10-01

    Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.

  4. A variable-step-size robust delta modulator.

    NASA Technical Reports Server (NTRS)

    Song, C. L.; Garodnick, J.; Schilling, D. L.

    1971-01-01

    Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.

  5. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audren, Benjamin; Lesgourgues, Julien; Bird, Simeon

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M_ν close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M_ν) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  6. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space. A corresponding theorem is presented. The linear systems of equations for the appropriate Laplace transforms make it possible to find the reliability functions for the alternating, Poisson and Furry-Yule failure rate processes.
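    A crude Monte Carlo check of the defining relation R(t) = E[exp(-∫₀ᵗ λ(s) ds)] for an alternating two-level failure-rate process with exponential sojourn times; the rates and sojourn parameters are invented for illustration and the paper's Laplace-transform machinery is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Alternating failure-rate process: lambda switches between two levels,
    # with exponential sojourn times in each level (a simple semi-Markov case).
    lam = (0.05, 0.4)          # failure-rate levels
    mean_sojourn = (8.0, 2.0)  # mean time spent at each level

    def integrated_rate(t_end):
        """One realization of the cumulative hazard  int_0^t lambda(s) ds."""
        t, state, cum = 0.0, 0, 0.0
        while t < t_end:
            stay = rng.exponential(mean_sojourn[state])
            dt = min(stay, t_end - t)
            cum += lam[state] * dt
            t += dt
            state = 1 - state
        return cum

    # Conditional reliability R(t) = E[ exp(-int_0^t lambda(s) ds) ]
    for t_end in (1.0, 5.0, 10.0, 20.0):
        R = np.mean([np.exp(-integrated_rate(t_end)) for _ in range(20_000)])
        print(f"R({t_end:5.1f}) ~ {R:.4f}")
    ```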

  7. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  8. Conditioned Limit Theorems for Some Null Recurrent Markov Processes

    DTIC Science & Technology

    1976-08-01

    Chapter 1, Introduction, 1.1 Summary of Results: Let (V_k, k ≥ 0) be a discrete-time Markov process with state space E ⊂ (−∞, ∞) and let S be ... We begin by stating our three basic assumptions: (i) (V_k, k ≥ 0) is a Markov process with state space E ⊂ (−∞, ∞); (ii) ... Chapter 3, Conditioning on {T > n}; 3.1 Preliminary Results.

  9. A statistical rain attenuation prediction model with application to the advanced communication technology satellite project. 3: A stochastic rain fade control algorithm for satellite link power via nonlinear Markov filtering theory

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1991-01-01

    The dynamic and composite nature of propagation impairments that are incurred on Earth-space communications links at frequencies in and above 30/20 GHz Ka band, i.e., rain attenuation, cloud and/or clear air scintillation, etc., combined with the need to counter such degradations after the small link margins have been exceeded, necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) Project by the implementation of optimal processing schemes derived through the use of the Rain Attenuation Prediction Model and nonlinear Markov filtering theory.

  10. Detecting critical state before phase transition of complex systems by hidden Markov model

    NASA Astrophysics Data System (ADS)

    Liu, Rui; Chen, Pei; Li, Yongjun; Chen, Luonan

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e., before-transition state, pre-transition state, and after-transition state, which can be considered as three different Markov processes. Thus, based on this dynamical feature, we present a novel computational method, i.e., hidden Markov model (HMM), to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e., the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma.
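    A toy sketch of the switching-point detection idea, assuming the third-party hmmlearn package is available: a 2-state Gaussian HMM is fitted to a synthetic observable whose fluctuations grow in the "pre-transition" regime, and the decoded state sequence marks the estimated switch. The data and model choices are illustrative, not the authors'.

    ```python
    import numpy as np
    from hmmlearn import hmm          # third-party package, assumed available

    rng = np.random.default_rng(2)

    # Synthetic 1-D observable: a stationary regime followed by a noisier
    # "pre-transition" regime with larger fluctuations (illustrative data only).
    before = rng.normal(0.0, 0.5, size=600)
    pre = rng.normal(0.0, 2.0, size=200)
    x = np.concatenate([before, pre]).reshape(-1, 1)

    # Fit a 2-state Gaussian HMM and decode the most likely state sequence
    model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
    model.fit(x)
    states = model.predict(x)

    # Report the first decoded state switch as the estimated switching point
    switch = int(np.argmax(states != states[0]))
    print("estimated switching point near sample", switch, "(true change at 600)")
    ```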

  11. Modeling Markov switching ARMA-GARCH neural networks models and an application to forecasting stock returns.

    PubMed

    Bildirici, Melike; Ersin, Özgür

    2014-01-01

    The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results were obtained for their neural-network-based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, namely the MS-ARMA-FIAPGARCH-HybridMLP and MS-ARMA-FIAPGARCH-RNN, provided the best forecast performance over the baseline single-regime GARCH models and over Gray's MS-GARCH model. Therefore, the models are promising for various economic applications.

  12. Modeling Markov Switching ARMA-GARCH Neural Networks Models and an Application to Forecasting Stock Returns

    PubMed Central

    Bildirici, Melike; Ersin, Özgür

    2014-01-01

    The study has two aims. The first aim is to propose a family of nonlinear GARCH models that incorporate fractional integration and asymmetric power properties to MS-GARCH processes. The second purpose of the study is to augment the MS-GARCH type models with artificial neural networks to benefit from the universal approximation properties to achieve improved forecasting accuracy. Therefore, the proposed Markov-switching MS-ARMA-FIGARCH, APGARCH, and FIAPGARCH processes are further augmented with MLP, Recurrent NN, and Hybrid NN type neural networks. The MS-ARMA-GARCH family and MS-ARMA-GARCH-NN family are utilized for modeling the daily stock returns in an emerging market, the Istanbul Stock Index (ISE100). Forecast accuracy is evaluated in terms of MAE, MSE, and RMSE error criteria and Diebold-Mariano equal forecast accuracy tests. The results suggest that the fractionally integrated and asymmetric power counterparts of Gray's MS-GARCH model provided promising results, while the best results were obtained for their neural-network-based counterparts. Further, among the models analyzed, the models based on the Hybrid-MLP and Recurrent-NN, namely the MS-ARMA-FIAPGARCH-HybridMLP and MS-ARMA-FIAPGARCH-RNN, provided the best forecast performance over the baseline single-regime GARCH models and over Gray's MS-GARCH model. Therefore, the models are promising for various economic applications. PMID:24977200

  13. Nonlinear stochastic exclusion financial dynamics modeling and time-dependent intrinsic detrended cross-correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Jun

    2017-09-01

    In an attempt to reproduce the price dynamics of financial markets, a stochastic agent-based financial price model is proposed and investigated by means of a stochastic exclusion process. The exclusion process, one of the classical interacting particle systems, is usually thought of as modeling particle motion (with a conserved number of particles) in a continuous-time Markov process. In this work, the process is utilized to imitate the trading interactions among the investing agents, in order to explain some stylized facts found in financial time series dynamics. To better understand the correlation behaviors of the proposed model, a new time-dependent intrinsic detrended cross-correlation (TDI-DCC) is introduced and performed, and autocorrelation analyses are also applied in the empirical research. Furthermore, to verify the rationality of the financial price model, actual return series are also studied comparatively with the simulated ones. The comparison results of return behaviors reveal that this financial price dynamics model can reproduce some correlation features of actual stock markets.
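    A toy simulation in the spirit of the model above: particles hop on a ring under the exclusion rule, and a price-like series is driven by the net direction of successful moves. The mapping from particle moves to log-price increments is an illustrative assumption, not the authors' exact trading rule.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    L, n_particles, n_sweeps = 200, 100, 3000
    sites = np.zeros(L, dtype=int)
    sites[rng.choice(L, n_particles, replace=False)] = 1

    log_price = np.zeros(n_sweeps)
    for t in range(1, n_sweeps):
        drift = 0
        for _ in range(L):                          # L attempted moves per sweep
            i = rng.integers(L)
            if sites[i] == 1:
                step = 1 if rng.random() < 0.5 else -1
                j = (i + step) % L
                if sites[j] == 0:                   # exclusion: move only into an empty site
                    sites[i], sites[j] = 0, 1
                    drift += step
        # Toy rule: net rightward flow of particles moves the log-price up
        log_price[t] = log_price[t - 1] + 0.001 * drift

    returns = np.diff(log_price)
    print("return std:", returns.std(), " lag-1 autocorrelation:",
          np.corrcoef(returns[:-1], returns[1:])[0, 1])
    ```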

  14. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.

  15. The application of Markov decision process with penalty function in restaurant delivery robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The traditional Markov decision process path-planning algorithm is not safe: the planned path brings the robot very close to tables and chairs. To solve this problem, this paper proposes a Markov decision process with a penalty term, called the MDPPT path-planning algorithm, as an extension of the traditional Markov decision process (MDP). In the MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is just the reward of the current state. In the MDPPT, the reward it receives includes not only the current state reward but also a negative constant term. Simulation results show that the MDPPT algorithm can plan a more secure path.
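    A minimal sketch of the penalty idea on a made-up grid world: the reward for bumping into an obstacle gets an extra negative constant term, and the MDP is solved by value iteration. States, rewards, and the penalty value are invented for illustration.

    ```python
    import numpy as np

    # Small grid world: 0 = free, 1 = obstacle (table/chair); goal is a fixed cell
    grid = np.array([[0, 0, 0, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0]])
    goal = (2, 3)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    gamma, step_reward, goal_reward, bump_penalty = 0.95, -1.0, 10.0, -5.0

    def next_state_and_reward(s, a, penalised):
        r, c = s[0] + a[0], s[1] + a[1]
        bumped = not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]) or grid[r, c] == 1
        if bumped:                       # stay put; MDPPT-style extra negative constant term
            return s, step_reward + (bump_penalty if penalised else 0.0)
        return (r, c), goal_reward if (r, c) == goal else step_reward

    def value_iteration(penalised, n_iter=200):
        V = np.zeros(grid.shape)
        for _ in range(n_iter):
            for s in np.ndindex(grid.shape):
                if grid[s] == 1 or s == goal:
                    continue
                V[s] = max(rew + gamma * V[s2]
                           for s2, rew in (next_state_and_reward(s, a, penalised) for a in moves))
        return V

    print("plain MDP state values:\n", value_iteration(False).round(2))
    print("MDP with bump penalty (MDPPT-style):\n", value_iteration(True).round(2))
    ```

    With the penalty switched on, the values of cells adjacent to obstacles drop, so a greedy policy keeps more distance from tables and chairs.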

  16. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    NASA Astrophysics Data System (ADS)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified to several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. As shown by Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main purpose of obtaining the transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
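    A sketch of the forward CLV computation that the inverse problem targets, in the spirit of Pfeifer and Carraway's Markov-chain formulation: given a transition matrix over customer states, per-state cash flows, and a discount factor, CLV is the discounted sum of expected rewards. All numbers are illustrative.

    ```python
    import numpy as np

    # Customer states: 0 = active, 1 = lapsed, 2 = lost (absorbing); values are illustrative
    P = np.array([[0.70, 0.25, 0.05],      # retention / migration probabilities
                  [0.30, 0.50, 0.20],
                  [0.00, 0.00, 1.00]])
    R = np.array([100.0, 10.0, 0.0])       # expected net cash flow per period in each state
    d = 1.0 / 1.1                          # discount factor for a 10% rate

    # Finite-horizon CLV: sum_{t=0}^{T} d^t P^t R
    T = 30
    clv_T = sum((d**t) * np.linalg.matrix_power(P, t) @ R for t in range(T + 1))

    # Infinite-horizon closed form: (I - d P)^{-1} R
    clv_inf = np.linalg.solve(np.eye(3) - d * P, R)

    print("30-period CLV by starting state:", clv_T.round(2))
    print("infinite-horizon CLV by starting state:", clv_inf.round(2))
    ```

    The inverse problem discussed in the abstract runs this computation backwards: find entries of P that reproduce a target CLV, which is what the metaheuristic search is used for.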

  17. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.

  18. Cluster-based control of a separating flow over a smoothly contoured ramp

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Noack, Bernd R.; Spohn, Andreas; Cattafesta, Louis N.; Morzyński, Marek

    2017-12-01

    The ability to manipulate and control fluid flows is of great importance in many scientific and engineering applications. The proposed closed-loop control framework addresses a key issue of model-based control: The actuation effect often results from slow dynamics of strongly nonlinear interactions which the flow reveals at timescales much longer than the prediction horizon of any model. Hence, we employ a probabilistic approach based on a cluster-based discretization of the Liouville equation for the evolution of the probability distribution. The proposed methodology frames high-dimensional, nonlinear dynamics into low-dimensional, probabilistic, linear dynamics which considerably simplifies the optimal control problem while preserving nonlinear actuation mechanisms. The data-driven approach builds upon a state space discretization using a clustering algorithm which groups kinematically similar flow states into a low number of clusters. The temporal evolution of the probability distribution on this set of clusters is then described by a control-dependent Markov model. This Markov model can be used as a predictor of the ergodic probability distribution for a particular control law. This probability distribution approximates the long-term behavior of the original system, on the basis of which the optimal control law is determined. We examine how the approach can be used to improve the open-loop actuation in a separating flow dominated by Kelvin-Helmholtz shedding. For this purpose, the feature space, in which the model is learned, and the admissible control inputs are tailored to strongly oscillatory flows.
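    A minimal sketch of the cluster-based Markov modeling step on synthetic snapshot data, assuming the scikit-learn KMeans implementation is available: snapshots are grouped into clusters, consecutive cluster labels give a transition matrix, and its leading eigenvector approximates the ergodic probability distribution. The feature data and cluster count are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans     # third-party package, assumed available

    rng = np.random.default_rng(4)

    # Synthetic stand-in for flow snapshots: a noisy 2-D limit cycle in feature space
    t = np.arange(0, 400, 0.05)
    X = np.column_stack([np.cos(t), np.sin(2 * t)]) + 0.05 * rng.standard_normal((t.size, 2))

    # Group kinematically similar states into a low number of clusters
    k = 10
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

    # Cluster-to-cluster transition matrix from consecutive snapshots
    P = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    # Ergodic (stationary) probability distribution over clusters
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = np.abs(pi) / np.abs(pi).sum()
    print("stationary cluster probabilities:", pi.round(3))
    ```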

  19. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    PubMed

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity. © The Author(s) 2015.
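    A small Monte Carlo sketch of what makes the model semi-Markov: sojourn times in each transient state are Weibull distributed, so the exit hazard depends on the time since entry into the state. The states, parameters, and costs are invented for illustration and do not correspond to the heart-failure application.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # States: 0 = stable, 1 = hospitalised, 2 = dead (absorbing). Sojourn times are
    # Weibull-distributed, so the hazard depends on time since entry into the state
    # (the semi-Markov ingredient); all numbers are invented for illustration.
    shape = {0: 1.5, 1: 0.8}                    # Weibull shape per transient state
    scale = {0: 24.0, 1: 3.0}                   # scale in months
    next_state_probs = {0: [0.0, 0.7, 0.3],     # destination probabilities on leaving state 0
                        1: [0.6, 0.0, 0.4]}
    monthly_cost = {0: 200.0, 1: 4000.0, 2: 0.0}
    horizon = 120.0                             # months

    def simulate_patient():
        t, state, cost = 0.0, 0, 0.0
        while t < horizon and state != 2:
            sojourn = scale[state] * rng.weibull(shape[state])
            dwell = min(sojourn, horizon - t)
            cost += dwell * monthly_cost[state]
            t += dwell
            if dwell == sojourn:                # an actual transition happened
                state = rng.choice(3, p=next_state_probs[state])
        return cost, t                          # total cost and (possibly censored) survival time

    results = np.array([simulate_patient() for _ in range(20_000)])
    print("mean total cost:", results[:, 0].mean().round(0),
          " mean survival (months):", results[:, 1].mean().round(1))
    ```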

  20. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi,...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
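    A small sketch of the Krylov-projection idea for the stationary distribution: an Arnoldi basis is built for span(v, Av, ..., A^(m-1)v) with A = Pᵀ, and the Ritz vector for the eigenvalue closest to 1 approximates the stationary vector. The transition matrix here is a random dense example, and no preconditioning or restarting is included.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Random irreducible transition matrix P (rows sum to 1); we seek pi with pi P = pi,
    # i.e. the eigenvector of A = P.T for eigenvalue 1.
    n = 200
    P = rng.random((n, n)) + 0.01
    P /= P.sum(axis=1, keepdims=True)
    A = P.T

    def arnoldi(A, v0, m):
        """Orthonormal basis V of span(v0, A v0, ..., A^{m-1} v0) and H = V* A V."""
        V = np.zeros((A.shape[0], m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):                  # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-12:                 # invariant subspace found early
                return V[:, :j + 1], H[:j + 1, :j + 1]
            V[:, j + 1] = w / H[j + 1, j]
        return V[:, :m], H[:m, :m]

    m = 30
    V, H = arnoldi(A, np.ones(n), m)
    vals, vecs = np.linalg.eig(H)
    k = np.argmin(np.abs(vals - 1.0))               # Ritz value closest to 1
    pi = np.real(V @ vecs[:, k])
    pi = np.abs(pi) / np.abs(pi).sum()

    print("residual ||pi P - pi||:", np.linalg.norm(pi @ P - pi))
    ```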

  1. Markovian prediction of future values for food grains in the economic survey

    NASA Astrophysics Data System (ADS)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for predicting the current and future values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. The economic performance of the timing of artificial insemination at different levels of estrus detection is also evaluated using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.

  2. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.

  3. Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection

    DTIC Science & Technology

    2010-05-12

    The report develops diffusion-geometry-based nonlinear methods for hyperspectral change detection, including matching biological spectra across a database of hyperspectral pathology slides acquired with different instruments under different conditions, bi-Markov constructions generalizing wavelets and similar scaling mechanisms, and a comparison of conventional nearest-neighbor search with a diffusion search, where the data are pathology slides and each pixel is a digital ...

  4. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    NASA Astrophysics Data System (ADS)

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard

    2014-09-01

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.
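    A minimal sketch of the Markov-state-model estimation step that follows dimension reduction and clustering: transitions between discrete states are counted at a chosen lag, row-normalized, and the eigenvalues give implied timescales. The discrete trajectory below is sampled from a known toy chain so the estimate can be checked against the generating matrix; it is not molecular data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic discrete trajectory from a known 3-state chain
    P_true = np.array([[0.97, 0.02, 0.01],
                       [0.03, 0.95, 0.02],
                       [0.01, 0.04, 0.95]])
    n_states, n_steps = 3, 100_000
    traj = np.empty(n_steps, dtype=int)
    traj[0] = 0
    for t in range(1, n_steps):
        traj[t] = rng.choice(n_states, p=P_true[traj[t - 1]])

    def estimate_msm(dtraj, n_states, lag):
        """Count transitions at the given lag and row-normalize."""
        C = np.zeros((n_states, n_states))
        for a, b in zip(dtraj[:-lag], dtraj[lag:]):
            C[a, b] += 1
        return C / C.sum(axis=1, keepdims=True)

    lag = 10
    P_hat = estimate_msm(traj, n_states, lag)

    # Implied timescales t_i = -lag / ln(lambda_i) for the eigenvalues below 1
    eigvals = np.sort(np.real(np.linalg.eigvals(P_hat)))[::-1]
    timescales = -lag / np.log(eigvals[1:])
    print("estimated transition matrix at lag", lag, ":\n", P_hat.round(3))
    print("implied timescales (in steps):", timescales.round(1))
    ```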

  5. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G., E-mail: yannis@princeton.edu, E-mail: gerhard.hummer@biophys.mpg.de

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.

  6. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    PubMed Central

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard

    2014-01-01

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space. PMID:25240340

  7. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    USGS Publications Warehouse

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.

  8. Markov and non-Markov processes in complex systems by the dynamical information entropy

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

    We consider the Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy alternation, correlation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), have been discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-time numeral and pattern human memory and the effect of stress on the dynamical tapping test); random dynamics of RR-intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and the chaotic dynamics of the parameters of financial markets and ecological systems.

  9. Open Quantum Systems and Classical Trajectories

    NASA Astrophysics Data System (ADS)

    Rebolledo, Rolando

    2004-09-01

    A Quantum Markov Semigroup consists of a family 𝒯 = (𝒯_t)_{t ∈ ℝ₊} of normal, ω*-continuous, completely positive maps on a von Neumann algebra 𝔐 which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a 𝒯-invariant abelian subalgebra 𝔄 of 𝔐. When this happens, the restriction of 𝒯_t to 𝔄 defines a classical Markov semigroup T = (T_t)_{t ∈ ℝ₊}, say, associated to a classical Markov process X = (X_t)_{t ∈ ℝ₊}. The structure (𝔄, T, X) unravels the quantum Markov semigroup 𝒯, providing a bridge between open quantum systems and classical stochastic processes.

  10. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.

  11. Envelopes of Sets of Measures, Tightness, and Markov Control Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Hernandez, J.; Hernandez-Lerma, O.

    1999-11-15

    We introduce upper and lower envelopes for sets of measures on an arbitrary topological space, which are then used to give a tightness criterion. These concepts are applied to show the existence of optimal policies for a class of Markov control processes.

  12. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2017-12-15

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
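    A minimal simulation of the totally asymmetric simple exclusion process (TASEP) on a ring with random sequential updates, one of the irreversible chains discussed above; the lifted variant and the event-chain algorithm are not implemented here, and the lattice size and density are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # TASEP on a ring: particles hop only to the right, and only into empty sites.
    L, n_particles, n_sweeps = 100, 50, 2000
    occ = np.zeros(L, dtype=int)
    occ[rng.choice(L, n_particles, replace=False)] = 1

    current = 0
    for sweep in range(n_sweeps):
        for _ in range(L):                       # L random update attempts per sweep
            i = rng.integers(L)
            j = (i + 1) % L
            if occ[i] == 1 and occ[j] == 0:      # totally asymmetric, exclusion respected
                occ[i], occ[j] = 0, 1
                current += 1

    density = n_particles / L
    print("measured current per attempt:       ", current / (L * n_sweeps))
    print("mean-field prediction rho*(1 - rho):", density * (1 - density))
    ```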

  13. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    NASA Astrophysics Data System (ADS)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  14. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
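    A rough sketch of the piecewise idea under stated assumptions: the historical series is discretized into weather states, chopped into random-length pieces, and pieces are joined by drawing the next piece's starting state from the state transition matrix. The synthetic "historical" series, state count, and piece lengths are illustrative, and the paper's exact joining rule is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic "historical" hourly series (e.g. significant wave height) standing in for real data
    n = 24 * 365
    hist = np.empty(n)
    hist[0] = 2.0
    for t in range(1, n):
        hist[t] = max(0.1, 0.9 * hist[t - 1] + 0.2 + 0.3 * rng.standard_normal())

    # Discretize into weather states by quantiles and estimate the transition matrix
    n_states = 5
    edges = np.quantile(hist, np.linspace(0, 1, n_states + 1)[1:-1])
    states = np.digitize(hist, edges)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    starts_by_state = {s: np.flatnonzero(states == s) for s in range(n_states)}

    def generate(length, min_piece=12, max_piece=72):
        """Join random-length pieces of the historical record; the next piece starts in a
        state drawn from the transition matrix given the last state of the current piece."""
        out = []
        idx = rng.integers(n - max_piece)
        while len(out) < length:
            piece_len = rng.integers(min_piece, max_piece + 1)
            out.extend(hist[idx: idx + piece_len])
            last_state = states[min(idx + piece_len - 1, n - 1)]
            next_state = rng.choice(n_states, p=P[last_state])
            candidates = starts_by_state[next_state]
            candidates = candidates[candidates < n - max_piece]
            idx = int(rng.choice(candidates))
        return np.array(out[:length])

    synthetic = generate(24 * 30)
    print("historical mean/std:", hist.mean().round(2), hist.std().round(2))
    print("synthetic  mean/std:", synthetic.mean().round(2), synthetic.std().round(2))
    ```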

  15. VAMPnets for deep learning of molecular kinetics.

    PubMed

    Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank

    2018-01-02

    There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.

  16. Multispectral Image Compression for Improvement of Colorimetric and Spectral Reproducibility by Nonlinear Spectral Transform

    NASA Astrophysics Data System (ADS)

    Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2006-09-01

    The article proposes a multispectral image compression scheme using a nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant and also that spectral accuracy can be improved simultaneously using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under viewing illuminants other than the defined one. Finally, we discuss the usage of the first-order Markov model to form the analysis vectors for the higher-order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.

  17. Global dynamics of a stochastic neuronal oscillator

    NASA Astrophysics Data System (ADS)

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  18. Global dynamics of a stochastic neuronal oscillator.

    PubMed

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  19. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  20. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.
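    As a concrete example of a steady-state thermodynamic quantity for a discrete-state Markov chain, the sketch below computes the stationary distribution and the standard (Schnakenberg-type) entropy production rate for a biased three-state cycle; the formula is the textbook expression, not one taken from this paper.

    ```python
    import numpy as np

    # A 3-state Markov chain that violates detailed balance (a biased cycle)
    P = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    # Schnakenberg entropy production rate at steady state:
    # e_p = 1/2 * sum_{i,j} (pi_i P_ij - pi_j P_ji) * ln(pi_i P_ij / (pi_j P_ji))
    flux = pi[:, None] * P
    ep = 0.5 * np.sum((flux - flux.T) * np.log(flux / flux.T))
    print("stationary distribution:", pi.round(3))
    print("entropy production rate:", ep.round(4), "(zero only under detailed balance)")
    ```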

  1. The Non-linear Trajectory of Change in Play Profiles of Three Children in Psychodynamic Play Therapy.

    PubMed

    Halfon, Sibel; Çavdar, Alev; Orsucci, Franco; Schiepek, Gunter K; Andreassi, Silvia; Giuliani, Alessandro; de Felice, Giulio

    2016-01-01

    Aim: Even though there is substantial evidence that play-based therapies produce significant change, the specific play processes in treatment remain unexamined. For that purpose, processes of change in long-term psychodynamic play therapy are assessed through a repeated systematic assessment of three children's "play profiles," which reflect patterns of organization among play variables that contribute to play activity in therapy, indicative of the children's coping strategies, and an expression of their internal world. The main aims of the study are to investigate the kinds of play profiles expressed in treatment, and to test whether there is emergence of new and more adaptive play profiles using dynamic systems theory as a methodological framework. Methods and Procedures: Each session from the long-term psychodynamic treatment (mean number of sessions = 55) of three 6-year-old good outcome cases presenting with Separation Anxiety was recorded, transcribed, and coded using items from the Children's Play Therapy Instrument (CPTI), created to assess the play activity of children in psychotherapy, generating discrete and measurable units of play activity arranged along a continuum of four play profiles: "Adaptive," "Inhibited," "Impulsive," and "Disorganized." The play profiles were clustered through the K-means algorithm, generating seven discrete states characterizing the course of treatment, and the transitions between these states were analyzed by Markov Transition Matrix, Recurrence Quantification Analysis (RQA), and odds ratios comparing the first and second halves of psychotherapy. Results: The Markov Transitions between the states scaled almost perfectly and also showed the ergodicity of the system, meaning that the child can reach any state or shift to another one in play. The RQA and odds ratios showed two trends of change, first concerning the decrease in the use of "less adaptive" strategies, second regarding the reduction of play interruptions. Conclusion: The results support that these children express different psychic states in play, which can be captured through the lens of play profiles, and begin to modify less dysfunctional profiles over the course of treatment. The methodology employed showed the productivity of treating psychodynamic play therapy as a complex system, taking advantage of non-linear methods to study psychotherapeutic play activity.
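
    A minimal sketch of the state-construction step described above, assuming session-level vectors of the four CPTI play-profile scores (the data here are random stand-ins): sessions are clustered into discrete states with K-means and a Markov transition matrix is estimated by counting transitions between consecutive sessions. The RQA and odds-ratio analyses of the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per session, columns = scores on the four
# CPTI play profiles (adaptive, inhibited, impulsive, disorganized).
rng = np.random.default_rng(1)
profiles = rng.random((55, 4))

# Cluster sessions into discrete states (the paper reports seven).
states = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(profiles)

# Estimate a Markov transition matrix by counting consecutive-session moves.
k = states.max() + 1
counts = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
T = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

print("first ten states:", states[:10])
print("transition matrix:\n", np.round(T, 2))
```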

  2. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ɛ -machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
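
    For orientation, the discrete-time special case is easy to compute: for a finite Markov chain the Shannon entropy rate reduces to h = -sum_i pi_i sum_j P_ij log2 P_ij. The sketch below evaluates this formula for an illustrative transition matrix; the continuous-time semi-Markov quantities treated in the paper require the epsilon-machine machinery and are not reproduced here.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/symbol) of a finite Markov chain with row-stochastic
    transition matrix P:  h = -sum_i pi_i sum_j P_ij log2 P_ij."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    logs = np.zeros_like(P)
    np.log2(P, out=logs, where=P > 0)      # leave log(0) terms at 0
    return float(-np.sum(pi[:, None] * P * logs))

P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])
print("entropy rate:", round(entropy_rate(P), 4), "bits per symbol")
```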

  3. Dynamical Signatures of Living Systems

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1999-01-01

    One of the main challenges in modeling living systems is to distinguish a random walk of physical origin (for instance, Brownian motion) from one of biological origin, and that distinction constitutes the starting point of the proposed approach. As conjectured, the biological random walk must be nonlinear. Indeed, any stochastic Markov process can be described by a linear Fokker-Planck equation (or its discretized version), and only that type of process has been observed in the inanimate world. However, all such processes always converge to a stable (ergodic or periodic) state, i.e., to states of lower complexity and higher entropy. At the same time, the evolution of living systems is directed toward a higher level of complexity, if complexity is associated with the number of structural variations. The simplest way to mimic such a tendency is to incorporate a nonlinearity into the random walk; the probability evolution then attains the features of a nonlinear diffusion equation: the formation and dissipation of shock waves initiated by small shallow-wave disturbances. As a result, the evolution never "dies": it produces new, different configurations which are accompanied by an increase or decrease of entropy (the decrease takes place during the formation of shock waves, the increase during their dissipation). In other words, the evolution can be directed "against the second law of thermodynamics" by forming patterns outside of equilibrium in the probability space. Due to that, a species is not locked into a certain pattern of behavior: it can still perform a variety of motions, and only the statistics of these motions is constrained by this pattern. It should be emphasized that such a "twist" is based upon the concept of reflection, i.e., the existence of a self-image (adopted from psychology). The model consists of a generator of stochastic processes, which represents the motor dynamics in the form of nonlinear random walks, and a simulator of the nonlinear version of the diffusion equation, which represents the mental dynamics. It has been demonstrated that coupled mental-motor dynamics can simulate emerging self-organization, prey-predator games, collaboration and competition, "collective brain," etc.

  4. Predictive Rate-Distortion for Infinite-Order Markov Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2016-06-01

    Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.

  5. Many roads to synchrony: natural time scales and their algorithms.

    PubMed

    James, Ryan G; Mahoney, John R; Ellison, Christopher J; Crutchfield, James P

    2014-04-01

    We consider two important time scales-the Markov and cryptic orders-that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the ε-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the ε-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.

  6. Using the Pearson Distribution for Synthesis of the Suboptimal Algorithms for Filtering Multi-Dimensional Markov Processes

    NASA Astrophysics Data System (ADS)

    Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.

    2015-08-01

    We consider the method of constructing the suboptimal filter on the basis of approximating the a posteriori probability density of the multidimensional Markov process by the Pearson distributions. The proposed method can efficiently be used for approximating asymmetric, excessive, and finite densities.

  7. Continuum Modeling and Control of Large Nonuniform Wireless Networks via Nonlinear Partial Differential Equations

    DOE PAGES

    Zhang, Yang; Chong, Edwin K. P.; Hannig, Jan; ...

    2013-01-01

    We introduce a continuum modeling method to approximate a class of large wireless networks by nonlinear partial differential equations (PDEs). This method is based on the convergence of a sequence of underlying Markov chains of the network indexed by N, the number of nodes in the network. As N goes to infinity, the sequence converges to a continuum limit, which is the solution of a certain nonlinear PDE. We first describe PDE models for networks with uniformly located nodes and then generalize to networks with nonuniformly located, and possibly mobile, nodes. Based on the PDE models, we develop a method to control the transmissions in nonuniform networks so that the continuum limit is invariant under perturbations in node locations. This enables the networks to maintain stable global characteristics in the presence of varying node locations.

  8. Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models

    NASA Astrophysics Data System (ADS)

    Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael

    2016-06-01

    We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase in the computational cost per iteration, thereby reducing the overall cost of the estimation procedure.
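
    A minimal sketch of a plain component-wise Gibbs sampler for the linear Bernoulli-Gaussian model (not the partially marginalized or Hastings-within-Gibbs samplers of the paper, and with all hyperparameters fixed rather than estimated): each sweep updates one indicator from its marginal likelihood ratio and, if the component is active, redraws the amplitude from its Gaussian conditional. All symbols and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse model y = H @ (q * x) + noise with q_k ~ Bernoulli(lam),
# x_k ~ N(0, sx2), noise ~ N(0, sn2 I); the dictionary H is known.
n, m, lam, sx2, sn2 = 100, 30, 0.1, 4.0, 0.1
H = rng.standard_normal((n, m))
q_true = rng.random(m) < lam
x_true = rng.normal(0.0, np.sqrt(sx2), m) * q_true
y = H @ x_true + rng.normal(0.0, np.sqrt(sn2), n)

q, x = np.zeros(m, dtype=bool), np.zeros(m)
for sweep in range(500):
    for k in range(m):
        # Residual with component k removed.
        r = y - H @ (q * x) + H[:, k] * (q[k] * x[k])
        hk2 = H[:, k] @ H[:, k]
        s2 = sx2 * sn2 / (sn2 + sx2 * hk2)          # posterior variance of x_k
        mu = s2 * (H[:, k] @ r) / sn2               # posterior mean of x_k
        # log p(q_k = 1 | r) / p(q_k = 0 | r), with x_k marginalized out.
        log_odds = (np.log(lam / (1 - lam))
                    + 0.5 * np.log(s2 / sx2)
                    + 0.5 * mu**2 / s2)
        p1 = np.exp(log_odds - np.logaddexp(0.0, log_odds))
        q[k] = rng.random() < p1
        x[k] = rng.normal(mu, np.sqrt(s2)) if q[k] else 0.0

print("true support:     ", np.flatnonzero(q_true))
print("estimated support:", np.flatnonzero(q))
```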

  9. A Hybrid Seismic Inversion Method for VP/VS Ratio and Its Application to Gas Identification

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Zhang, Hongbing; Han, Feilong; Xiao, Wei; Shang, Zuoping

    2018-03-01

    The ratio of compressional wave velocity to shear wave velocity (VP/VS ratio) has established itself as one of the most important parameters in identifying gas reservoirs. However, considering that the seismic inversion process is highly non-linear and the geological conditions encountered may be complex, a direct estimation of the VP/VS ratio from pre-stack seismic data remains a challenging task. In this paper, we propose a hybrid seismic inversion method to estimate the VP/VS ratio directly. In this method, post- and pre-stack inversions are combined in which the pre-stack inversion for the VP/VS ratio is driven by the post-stack inversion results (i.e., VP and density). In particular, the VP/VS ratio is considered as a model parameter and is directly inverted from the pre-stack inversion based on the exact Zoeppritz equation. Moreover, an anisotropic Markov random field is employed in order to regularise the inversion process as well as to preserve geological structure (boundary) information. Aided by the proposed hybrid inversion strategy, the directional weighting coefficients incorporated in the anisotropic Markov random field neighbourhoods are quantitatively calculated by the anisotropic diffusion method. The synthetic test demonstrates the effectiveness of the proposed inversion method. In particular, given the low quality of the pre-stack data and the high heterogeneity of the target layers in the field data, the proposed inversion method reveals a detailed model of the VP/VS ratio that can successfully identify the gas-bearing zones.

  10. Markov Processes: Exploring the Use of Dynamic Visualizations to Enhance Student Understanding

    ERIC Educational Resources Information Center

    Pfannkuch, Maxine; Budgett, Stephanie

    2016-01-01

    Finding ways to enhance introductory students' understanding of probability ideas and theory is a goal of many first-year probability courses. In this article, we explore the potential of a prototype tool for Markov processes using dynamic visualizations to develop in students a deeper understanding of the equilibrium and hitting times…

  11. A Fast Variational Approach for Learning Markov Random Field Language Models

    DTIC Science & Technology

    2015-01-01

    the same distribution as n-gram models, but utilize a non-linear neural network parameterization. NLMs have been shown to produce competitive...to either resort to local optimization methods, such as those used in neural language models, or work with heavily constrained distributions. In...embeddings learned through neural language models. Central to the language modelling problem is the challenge...

  12. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    PubMed

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
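
    The idea of a sample-based planner that "generates an action on demand" can be sketched with a generic Monte Carlo rollout planner driven by a generative simulator; only states reachable from the current state are ever visited. The toy bed-occupancy simulator, reward function, and parameter values below are hypothetical and are not the paper's model or its specific planner variant.

```python
import random

def rollout_plan(state, actions, simulate, reward, horizon=20,
                 n_samples=50, gamma=0.95, rng=random.Random(0)):
    """Choose an action on demand by Monte Carlo rollouts from `state`.
    `simulate(s, a, rng)` samples a successor from the generative model and
    `reward(s, a)` is the immediate reward; only states reachable from
    `state` are ever touched, so the state space is never enumerated."""
    def one_rollout(s, a):
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            total += discount * reward(s, a)
            s = simulate(s, a, rng)
            a = rng.choice(actions)       # random rollout policy
            discount *= gamma
        return total

    q = {a: sum(one_rollout(state, a) for _ in range(n_samples)) / n_samples
         for a in actions}
    return max(q, key=q.get)

# Toy illustration: state = occupied beds (capacity 10), action = number of
# elective patients admitted now; each occupied bed frees up w.p. 0.2 per step.
def simulate(s, a, rng):
    discharges = sum(rng.random() < 0.2 for _ in range(s))
    return min(10, s - discharges + a)

def reward(s, a):
    return a - 2.0 * max(0, s + a - 10)   # admissions good, overflow penalized

print("chosen action:", rollout_plan(7, [0, 1, 2, 3], simulate, reward))
```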

  13. Delay chemical master equation: direct and closed-form solutions

    PubMed Central

    Leier, Andre; Marquez-Lago, Tatiana T.

    2015-01-01

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived. PMID:26345616

  14. Delay chemical master equation: direct and closed-form solutions.

    PubMed

    Leier, Andre; Marquez-Lago, Tatiana T

    2015-07-08

    The stochastic simulation algorithm (SSA) describes the time evolution of a discrete nonlinear Markov process. This stochastic process has a probability density function that is the solution of a differential equation, commonly known as the chemical master equation (CME) or forward-Kolmogorov equation. In the same way that the CME gives rise to the SSA, and trajectories of the latter are exact with respect to the former, trajectories obtained from a delay SSA are exact representations of the underlying delay CME (DCME). However, in contrast to the CME, no closed-form solutions have so far been derived for any kind of DCME. In this paper, we describe for the first time direct and closed solutions of the DCME for simple reaction schemes, such as a single-delayed unimolecular reaction as well as chemical reactions for transcription and translation with delayed mRNA maturation. We also discuss the conditions that have to be met such that such solutions can be derived.
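
    A minimal sketch of a delay SSA for the simplest case mentioned above, a single delayed unimolecular conversion A -> B in which A is consumed at initiation and B appears a fixed delay tau later; scheduled completions are kept in a heap and interleaved with newly initiated reactions. The rate, delay, and initial copy number are illustrative.

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)

def delay_ssa(n_A, c, tau, t_end):
    """Delay SSA for a single consuming delayed reaction A -> B:
    A is removed when the reaction initiates, B appears a delay tau later."""
    t, A, B = 0.0, n_A, 0
    pending = []                              # scheduled completion times
    history = [(t, A, B)]
    while True:
        prop = c * A
        t_init = t + rng.exponential(1.0 / prop) if prop > 0 else np.inf
        t_done = pending[0] if pending else np.inf
        if min(t_init, t_done) > t_end:
            break
        if t_done <= t_init:                  # a delayed product matures first
            t = heapq.heappop(pending)
            B += 1
        else:                                 # a new reaction initiates
            t = t_init
            A -= 1
            heapq.heappush(pending, t + tau)
        history.append((t, A, B))
    return history

traj = delay_ssa(n_A=50, c=0.1, tau=5.0, t_end=60.0)
print("final (t, A, B):", traj[-1])
```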

  15. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods of computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to network’s partially observed trajectory treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

  16. Message survival and decision dynamics in a class of reactive complex systems subject to external fields

    NASA Astrophysics Data System (ADS)

    Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.

    2014-07-01

    In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.

  17. Operational Markov Condition for Quantum Processes

    NASA Astrophysics Data System (ADS)

    Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan

    2018-01-01

    We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.

  18. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.
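
    The occupation-time example can be checked by direct Monte Carlo: the sketch below simulates a one-dimensional velocity jump process (speed plus or minus v, switching at rate beta) and accumulates the time each path spends in x > 0 up to a horizon T. This is a brute-force estimate, not the Feynman-Kac or differential Chapman-Kolmogorov calculation of the paper, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def occupation_time(beta=1.0, v=1.0, x0=0.0, T=10.0, n_paths=20000):
    """Monte Carlo estimate of the mean time a 1D velocity jump process
    (speed +/- v, switching at rate beta) spends in x > 0 up to time T."""
    total = 0.0
    for _ in range(n_paths):
        t, x, sign, occ = 0.0, x0, rng.choice([-1.0, 1.0]), 0.0
        while t < T:
            dt = min(rng.exponential(1.0 / beta), T - t)
            x_new = x + sign * v * dt
            if x > 0 and x_new > 0:           # whole segment in x > 0
                occ += dt
            elif x <= 0 and x_new <= 0:       # whole segment in x <= 0
                pass
            else:                             # segment crosses x = 0 once
                t_cross = abs(x) / v
                occ += (dt - t_cross) if x <= 0 else t_cross
            x, t, sign = x_new, t + dt, -sign
        total += occ
    return total / n_paths

print("mean occupation time of {x > 0}:", round(occupation_time(), 3))
```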

  19. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-10-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e. , internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.

  20. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI

    PubMed Central

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature. PMID:27054199
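
    A minimal sketch of the HMM half of such a pipeline, assuming the third-party hmmlearn package and using random vectors as stand-ins for the DAE-embedded rs-fMRI features: one Gaussian HMM is fitted per class and a test subject is labelled by comparing class-conditional log-likelihoods. The deep auto-encoder stage and the reported accuracy are not reproduced.

```python
import numpy as np
from hmmlearn import hmm          # third-party package, assumed installed

rng = np.random.default_rng(5)

def train_class_hmm(sequences, n_states=3):
    """Fit one Gaussian HMM per class from a list of
    (n_timepoints, n_features) arrays."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

# Synthetic stand-ins for low-dimensional (e.g. DAE-embedded) rs-fMRI features.
mci_train = [rng.normal(0.5, 1.0, (120, 4)) for _ in range(10)]
hc_train = [rng.normal(-0.5, 1.0, (120, 4)) for _ in range(10)]

mci_hmm = train_class_hmm(mci_train)
hc_hmm = train_class_hmm(hc_train)

# Classify a test subject by comparing class-conditional log-likelihoods.
test = rng.normal(0.5, 1.0, (120, 4))
print("predicted label:",
      "MCI" if mci_hmm.score(test) > hc_hmm.score(test) else "HC")
```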

  1. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  2. Generalization of Faustmann's Formula for Stochastic Forest Growth and Prices with Markov Decision Process Models

    Treesearch

    Joseph Buongiorno

    2001-01-01

    Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...

  3. Stochastic theory of nonequilibrium steady states and its applications. Part I

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Juan; Qian, Hong; Qian, Min

    2012-01-01

    The concepts of equilibrium and nonequilibrium steady states are introduced in the present review as mathematical concepts associated with stationary Markov processes. For both discrete stochastic systems with master equations and continuous diffusion processes with Fokker-Planck equations, the nonequilibrium steady state (NESS) is characterized in terms of several key notions which are originated from nonequilibrium physics: time irreversibility, breakdown of detailed balance, free energy dissipation, and positive entropy production rate. After presenting this NESS theory in pedagogically accessible mathematical terms that require only a minimal amount of prerequisites in nonlinear differential equations and the theory of probability, it is applied, in Part I, to two widely studied problems: the stochastic resonance (also known as coherent resonance) and molecular motors (also known as Brownian ratchet). Although both areas have advanced rapidly on their own with a vast amount of literature, the theory of NESS provides them with a unifying mathematical foundation. Part II of this review contains applications of the NESS theory to processes from cellular biochemistry, ranging from enzyme catalyzed reactions, kinetic proofreading, to zeroth-order ultrasensitivity.

  4. Dynamic Bandwidth Provisioning Using Markov Chain Based on RSVP

    DTIC Science & Technology

    2013-09-01

    ...is finite or countable. A Markov process is basically a stochastic process in which the past history of the process is irrelevant if the current...

  5. Mode identification using stochastic hybrid models with applications to conflict detection and resolution

    NASA Astrophysics Data System (ADS)

    Naseri Kouzehgarani, Asal

    2009-12-01

    Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms (that is, situations that would not develop into an actual conflict, or would resolve naturally within the appropriate time horizon), thereby introducing a measure of probabilistic uncertainty into any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generate the sequence of observations. The problem of self separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as model the evolution of aircraft trajectories between communications, in the presence of probabilistic uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probability distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft are moving along streams and can perform cruise, acceleration, climb, and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.

  6. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it can not be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could impact on the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently the impact is likely to be minor. PMID:23935949

  7. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  8. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
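
    As a generic illustration of the sampling step, the sketch below runs a plain random-walk Metropolis sampler on the posterior of a toy nonlinear model; it is not the paper's parallel adaptive MCMC, extended-Kalman-filter likelihood, or Chib-Jeliazkov evidence computation, and the model, prior, and tuning constants are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy nonlinear model y = a * sin(b * t) + noise; infer (a, b) from data.
t = np.linspace(0.0, 10.0, 200)
a_true, b_true = 2.0, 1.3
y = a_true * np.sin(b_true * t) + rng.normal(0.0, 0.3, t.size)

def log_posterior(theta, sigma=0.3):
    a, b = theta
    if not (0.0 < a < 10.0 and 0.0 < b < 10.0):     # flat prior on a box
        return -np.inf
    resid = y - a * np.sin(b * t)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(logp, theta0, n_iter=30000, step=0.02):
    theta, lp = np.asarray(theta0, dtype=float), logp(theta0)
    chain = np.empty((n_iter, len(theta0)))
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, step, theta.shape)
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept / reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_posterior, [1.5, 1.2])
print("posterior mean of (a, b):", np.round(chain[15000:].mean(axis=0), 3))
```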

  9. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  10. Derivation of regularized Grad's moment system from kinetic equations: modes, ghosts and non-Markov fluxes

    NASA Astrophysics Data System (ADS)

    Karlin, Ilya

    2018-04-01

    Derivation of the dynamic correction to Grad's moment system from kinetic equations (regularized Grad's 13 moment system, or R13) is revisited. The R13 distribution function is found as a superposition of eight modes. Three primary modes, known from the previous derivation (Karlin et al. 1998 Phys. Rev. E 57, 1668-1672. (doi:10.1103/PhysRevE.57.1668)), are extended into the nonlinear parameter domain. Three essentially nonlinear modes are identified, and two ghost modes which do not contribute to the R13 fluxes are revealed. The eight-mode structure of the R13 distribution function implies partition of R13 fluxes into two types of contributions: dissipative fluxes (both linear and nonlinear) and nonlinear streamline convective fluxes. Physical interpretation of the latter non-dissipative and non-local in time effect is discussed. A non-perturbative R13-type solution is demonstrated for a simple Lorentz scattering kinetic model. The results of this study clarify the intrinsic structure of the R13 system. This article is part of the theme issue `Hilbert's sixth problem'.

  11. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi-Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
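
    A minimal sketch of the kind of Monte Carlo semi-Markov simulation discussed above: an embedded transition matrix combined with non-exponential (Weibull and lognormal) sojourn-time distributions, used here to estimate a time-to-first-failure distribution. The states, rates, and distributions are invented for illustration and are not NASA reliability data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Semi-Markov model of a repairable unit: embedded transition probabilities
# plus non-exponential (Weibull / lognormal) sojourn-time distributions.
STATES = ["operating", "degraded", "failed"]
P = np.array([[0.0, 0.8, 0.2],      # operating -> degraded or failed
              [0.6, 0.0, 0.4],      # degraded  -> repaired or failed
              [1.0, 0.0, 0.0]])     # failed    -> repaired

def sojourn(state):
    if state == 0:
        return rng.weibull(1.5) * 1000.0       # hours, aging (shape > 1)
    if state == 1:
        return rng.weibull(0.9) * 200.0
    return rng.lognormal(mean=2.0, sigma=0.5)  # repair duration

def time_to_first_failure(n_runs=10000):
    times = np.empty(n_runs)
    for r in range(n_runs):
        s, t = 0, 0.0
        while s != 2:                          # stop at the first failure
            t += sojourn(s)
            s = rng.choice(3, p=P[s])
        times[r] = t
    return times

ttf = time_to_first_failure()
print("mean time to first failure:", round(ttf.mean(), 1), "hours;",
      "5th/95th percentiles:", np.round(np.percentile(ttf, [5, 95]), 1))
```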

  12. An offline approach for output-only Bayesian identification of stochastic nonlinear systems using unscented Kalman filtering

    NASA Astrophysics Data System (ADS)

    Erazo, Kalil; Nagarajaiah, Satish

    2017-06-01

    In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.

  13. Spontaneous Polarization in Bio-organic Materials Studied by Scanning Pyroelectric Microscopy (SPEM) and Second Harmonic Generation Microscopy (SHGM)

    NASA Astrophysics Data System (ADS)

    Putzeys, T.; Wübbenhorst, M.; van der Veen, M. A.

    2015-06-01

    Bio-organic materials such as bones, teeth, and tendon generally show nonlinear optical (Masters and So in Handbook of Biomedical Nonlinear Optical Microscopy, 2008), pyro- and piezoelectric (Fukada and Yasuda in J Phys Soc Jpn 12:1158, 1957) properties, implying a permanent polarization, the presence of which can be rationalized by describing the growth of the sample and the creation of a polar axis according to Markov's theory of stochastic processes (Hulliger in Biophys J 84:3501, 2003; Batagiannis et al. in Curr Opin Solid State Mater Sci 17:107, 2010). Two proven, versatile techniques for probing spontaneous polarization distributions in solids are scanning pyroelectric microscopy (SPEM) and second harmonic generation microscopy (SHGM). The combination of pyroelectric scanning with SHG-microscopy in a single experimental setup leading to complementary pyroelectric and nonlinear optical data is demonstrated, providing us with a more complete image of the polarization in organic materials. Crystals consisting of a known polar and hyperpolarizable material, CNS (4-chloro-4-nitrostilbene) are used as a reference sample, to verify the functionality of the setup, with both SPEM and SHGM images revealing the same polarization domain information. In contrast, feline and human nails exhibit a pyroelectric response, but a second harmonic response is absent for both keratin containing materials, implying that there may be symmetry-allowed SHG, but with very inefficient second harmonophores. This new approach to polarity detection provides additional information on the polar and hyperpolar nature in a variety of (bio) materials.

  14. A Markov Environment-dependent Hurricane Intensity Model and Its Comparison with Multiple Dynamic Models

    NASA Astrophysics Data System (ADS)

    Jing, R.; Lin, N.; Emanuel, K.; Vecchi, G. A.; Knutson, T. R.

    2017-12-01

    A Markov environment-dependent hurricane intensity model (MeHiM) is developed to simulate the climatology of hurricane intensity given the surrounding large-scale environment. The model considers three unobserved discrete states representing respectively storm's slow, moderate, and rapid intensification (and deintensification). Each state is associated with a probability distribution of intensity change. The storm's movement from one state to another, regarded as a Markov chain, is described by a transition probability matrix. The initial state is estimated with a Bayesian approach. All three model components (initial intensity, state transition, and intensity change) are dependent on environmental variables including potential intensity, vertical wind shear, midlevel relative humidity, and ocean mixing characteristics. This dependent Markov model of hurricane intensity shows a significant improvement over previous statistical models (e.g., linear, nonlinear, and finite mixture models) in estimating the distributions of 6-h and 24-h intensity change, lifetime maximum intensity, and landfall intensity, etc. Here we compare MeHiM with various dynamical models, including a global climate model [High-Resolution Forecast-Oriented Low Ocean Resolution model (HiFLOR)], a regional hurricane model (Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model), and a simplified hurricane dynamic model [Coupled Hurricane Intensity Prediction System (CHIPS)] and its newly developed fast simulator. The MeHiM developed based on the reanalysis data is applied to estimate the intensity of simulated storms to compare with the dynamical-model predictions under the current climate. The dependences of hurricanes on the environment under current and future projected climates in the various models will also be compared statistically.

  15. Markovian Interpretations of Dual Retrieval Processes

    PubMed Central

    Gomes, C. F. A.; Nakamura, K.; Reyna, V. F.

    2013-01-01

    A half-century ago, at the dawn of the all-or-none learning era, Estes showed that finite Markov chains supply a tractable, comprehensive framework for discrete-change data of the sort that he envisioned for shifts in conditioning states in stimulus sampling theory. Shortly thereafter, such data rapidly accumulated in many spheres of human learning and animal conditioning, and Estes’ work stimulated vigorous development of Markov models to handle them. A key outcome was that the data of the workhorse paradigms of episodic memory, recognition and recall, proved to be one- and two-stage Markovian, respectively, to close approximations. Subsequently, Markov modeling of recognition and recall all but disappeared from the literature, but it is now reemerging in the wake of dual-process conceptions of episodic memory. In recall, in particular, Markov models are being used to measure two retrieval operations (direct access and reconstruction) and a slave familiarity operation. In the present paper, we develop this family of models and present the requisite machinery for fit evaluation and significance testing. Results are reviewed from selected experiments in which the recall models were used to understand dual memory processes. PMID:24948840

  16. Markov switching multinomial logit model: An application to accident-injury severities.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2009-07-01

    In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.

  17. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, which shows clear advantages for data sequences that are non-stationary and volatile. However, state division in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate, in an objective way, the possibility that an observed value belongs to each state, reflecting the preference degrees of the different states. In addition, background value optimization is applied in the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with the GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
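
    For reference, the grey-model backbone of such a forecaster is compact enough to sketch: the classical GM(1,1) construction below exposes the background-value weight alpha whose optimization the paper discusses; the Markov state-division and whitenization-weight steps are not reproduced, and the input series is illustrative rather than the Henan grain data.

```python
import numpy as np

def gm11(x0, n_ahead=3, alpha=0.5):
    """Basic GM(1,1) grey forecast.  `alpha` weights the background value
    z1[k] = alpha * x1[k] + (1 - alpha) * x1[k-1]; alpha = 0.5 is the classical
    choice, and tuning it is the 'background value optimization' step."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = alpha * x1[1:] + (1.0 - alpha) * x1[:-1]        # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
    return np.diff(x1_hat, prepend=0.0)                  # back to original scale

grain = [45.1, 46.7, 48.5, 51.0, 52.9, 55.4]             # illustrative data only
print("fit + 3-step forecast:", np.round(gm11(grain), 2))
```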

  18. Phasic Triplet Markov Chains.

    PubMed

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.

  19. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    PubMed

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
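
    The central quantity, the expected time spent in each state of a continuous-time Markov model over a horizon, can be sketched with a standard matrix-exponential identity (the integral of expm(Qt) is read off a block of an augmented matrix exponential); age-dependent rates, discounting, and tunnel states from the paper are not included, and the three-state generator below is illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state continuous-time Markov model (e.g. well -> ill -> dead):
# generator Q with transition rates per year, rows summing to zero.
Q = np.array([[-0.20, 0.15, 0.05],
              [0.00, -0.30, 0.30],
              [0.00, 0.00, 0.00]])   # third state is absorbing
p0 = np.array([1.0, 0.0, 0.0])       # everyone starts in the first state
T = 10.0                             # horizon in years

# Expected time in each state over [0, T] is p0 @ integral_0^T expm(Q t) dt;
# the integral is the upper-right block of expm([[Q, I], [0, 0]] * T).
n = Q.shape[0]
M = np.zeros((2 * n, 2 * n))
M[:n, :n] = Q
M[:n, n:] = np.eye(n)
integral = expm(M * T)[:n, n:]

print("expected years in each state over", T, "years:",
      np.round(p0 @ integral, 3))
```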

  20. Observation uncertainty in reversible Markov chains.

    PubMed

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+) .

  1. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  2. Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process

    NASA Astrophysics Data System (ADS)

    Migawa, Klaudiusz

    2012-12-01

    The issues presented in this research paper refer to problems connected with controlling the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process formulated as a semi-Markov decision process. The method focuses first on building the decision model of the exploitation process for technical objects (the semi-Markov model) and then on selecting the best control strategy (the optimal strategy) from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the exploitation system. In this setting, specifying the optimal strategy for availability control means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to search for the optimal control strategy. The approach is illustrated with the exploitation process of the means of transport operated in a real municipal bus transport system. The model of the exploitation process for the means of transport was built on the basis of results obtained from that real transport system, under the assumption that the process constitutes a homogeneous semi-Markov process.

  3. Limiting Distributions of Functionals of Markov Chains.

    DTIC Science & Technology

    1984-08-01

    Keywords: limiting distributions; periodic nonhomogeneous Poisson processes. ... nonhomogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable ... where we obtain the limiting distribution of the interevent times. Key Words: Markov Chains, Limiting Distributions, Periodic Nonhomogeneous Poisson Processes.

  4. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Treesearch

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...

  5. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  6. An Application of the H-Function to Curve-Fitting and Density Estimation.

    DTIC Science & Technology

    1983-12-01

    ... equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which ... to converge on a solution (10:9-10). For the simple linear model, and when general assumptions are made, the Gauss-Markov theorem states that the ... distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability ...

  7. Numerical research of the optimal control problem in the semi-Markov inventory model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorshenin, Andrey K.; Belousov, Vasily V.; Shnourkoff, Peter V.

    2015-03-10

    This paper is devoted to the numerical simulation of a stochastic inventory management system modeled by a controlled semi-Markov process. Results obtained with special-purpose software for analyzing the system and finding the optimal control are presented.

  8. A reward semi-Markov process with memory for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    The increasing interest in renewable energy leads scientific research to find better ways to recover most of the available energy. In particular, the maximum energy recoverable from wind is equal to 59.3% of the available energy (the Betz limit), attained at a specific pitch angle and when the ratio between the output and input wind speed equals 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed airfoil geometry and therefore always operate at an efficiency below 59.3%. New-generation wind turbines instead vary the pitch angle by rotating the blades, which allows them to recover the maximum energy at different wind speeds, working at the Betz limit over a range of speed ratios. A powerful pitch-angle control system allows the wind turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to support turbine design optimization and to assist the control system in predicting the wind speed so that the blades can be positioned quickly and correctly. The possibility of generating synthetic wind speed data is a powerful instrument for verifying the structures of wind turbines and for estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. A similar study, limited to the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model wind speed and direction is presented in [3], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we apply new models that generalize Markov models; more precisely, we apply semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a power source and to determine the associated storage levels required. To this end we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. The model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and on the risk of missed production due to the intrinsic variability of the wind speed process. The model is also used to generate synthetic wind speed time series by means of Monte Carlo simulations, and a backtesting procedure is used to compare the first- and second-order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
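
    A minimal sketch in the spirit of the first-order Markov chain generators cited as [1,2] (not the indexed semi-Markov model of [4]): estimate a transition matrix from a wind speed record discretised into 1 m/s classes and generate a synthetic series from it. The "observed" record below is a stand-in AR(1) signal, used purely for illustration.

    ```python
    # Sketch: first-order Markov chain generation of synthetic wind speed data.
    import numpy as np

    rng = np.random.default_rng(1)

    # stand-in "observed" record: a clipped AR(1) around 7 m/s (illustration only)
    obs = np.empty(5000)
    obs[0] = 7.0
    for t in range(1, obs.size):
        obs[t] = max(0.0, 7.0 + 0.95 * (obs[t - 1] - 7.0) + rng.normal(0.0, 0.8))

    bins = np.arange(0.0, obs.max() + 1.0, 1.0)     # 1 m/s speed classes
    states = np.digitize(obs, bins) - 1
    n = states.max() + 1

    # empirical first-order transition matrix (unvisited rows fall back to uniform)
    C = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1
    row_sums = np.maximum(C.sum(axis=1, keepdims=True), 1)
    P = np.where(C.sum(axis=1, keepdims=True) > 0, C / row_sums, 1.0 / n)

    # generate a synthetic series of the same length and map states to bin mid-points
    synth_states = [states[0]]
    for _ in range(states.size - 1):
        synth_states.append(rng.choice(n, p=P[synth_states[-1]]))
    synth = bins[np.array(synth_states)] + 0.5

    print("observed mean/std :", obs.mean().round(2), obs.std().round(2))
    print("synthetic mean/std:", synth.mean().round(2), synth.std().round(2))
    ```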

  9. A fast exact simulation method for a class of Markov jump processes.

    PubMed

    Li, Yao; Hu, Lili

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
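
    For reference, a minimal sketch of the classical direct-method SSA (one of the standard algorithms to which such new methods are compared), applied to an assumed birth-death process; it is not the Hashing-Leaping method itself.

    ```python
    # Sketch: direct-method stochastic simulation of a simple Markov jump process
    # (birth-death).  The rates are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)

    def direct_ssa_birth_death(birth=1.0, death=0.1, x0=0, t_end=100.0):
        """Simulate X(t): births at constant rate, deaths at rate death * X."""
        t, x, path = 0.0, x0, [(0.0, x0)]
        while t < t_end:
            rates = np.array([birth, death * x])
            total = rates.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)       # time to the next event
            if rng.random() < rates[0] / total:     # choose which clock fired
                x += 1
            else:
                x -= 1
            path.append((t, x))
        return path

    path = direct_ssa_birth_death()
    print("final time, final population:", path[-1])
    ```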

  10. The Non-linear Trajectory of Change in Play Profiles of Three Children in Psychodynamic Play Therapy

    PubMed Central

    Halfon, Sibel; Çavdar, Alev; Orsucci, Franco; Schiepek, Gunter K.; Andreassi, Silvia; Giuliani, Alessandro; de Felice, Giulio

    2016-01-01

    Aim: Even though there is substantial evidence that play based therapies produce significant change, the specific play processes in treatment remain unexamined. For that purpose, processes of change in long-term psychodynamic play therapy are assessed through a repeated systematic assessment of three children’s “play profiles,” which reflect patterns of organization among play variables that contribute to play activity in therapy, indicative of the children’s coping strategies, and an expression of their internal world. The main aims of the study are to investigate the kinds of play profiles expressed in treatment, and to test whether there is emergence of new and more adaptive play profiles using dynamic systems theory as a methodological framework. Methods and Procedures: Each session from the long-term psychodynamic treatment (mean number of sessions = 55) of three 6-year-old good outcome cases presenting with Separation Anxiety were recorded, transcribed and coded using items from the Children’s Play Therapy Instrument (CPTI), created to assess the play activity of children in psychotherapy, generating discrete and measurable units of play activity arranged along a continuum of four play profiles: “Adaptive,” “Inhibited,” “Impulsive,” and “Disorganized.” The play profiles were clustered through K-means Algorithm, generating seven discrete states characterizing the course of treatment and the transitions between these states were analyzed by Markov Transition Matrix, Recurrence Quantification Analysis (RQA) and odds ratios comparing the first and second halves of psychotherapy. Results: The Markov Transitions between the states scaled almost perfectly and also showed the ergodicity of the system, meaning that the child can reach any state or shift to another one in play. The RQA and odds ratios showed two trends of change, first concerning the decrease in the use of “less adaptive” strategies, second regarding the reduction of play interruptions. Conclusion: The results support that these children express different psychic states in play, which can be captured through the lens of play profiles, and begin to modify less dysfunctional profiles over the course of treatment. The methodology employed showed the productivity of treating psychodynamic play therapy as a complex system, taking advantage of non-linear methods to study psychotherapeutic play activity. PMID:27777561

  11. Stochastic Calculus and Differential Equations for Physics and Finance

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2013-02-01

    1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Selfsimilar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non-Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Kac; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.

  12. [Birth and death process of computer viruses].

    PubMed

    Segawa, Katsunori; Nakano, Tatsuya; Nakata, Kotoko; Hayashi, Yuzuru

    2006-01-01

    The daily variations in the number of computer viruses found attached to e-mails and the number of accesses to the home page of a national institute in Japan are examined. The power spectral densities (PSD) of the variation in the computer viruses show a time correlation characteristic of a Markov process, but the daily access number does not (it is identified as white noise). Like biological viruses, the variation in the computer viruses can be described by a birth-and-death model, which is a Markov process.

  13. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  14. Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.

    PubMed

    Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng

    2018-01-01

    In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovation, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model that identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.

  15. Symbolic Heuristic Search for Factored Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.

    2003-01-01

    We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.

  16. CTPPL: A Continuous Time Probabilistic Programming Language

    DTIC Science & Technology

    2009-07-01

    In recent years there has been a flurry of interest in continuous time models, mostly focused on continuous time Bayesian networks (CTBNs) [Nodelman, 2007]. ... CTBNs are built on homogeneous Markov processes. A homogeneous Markov process is a finite state, continuous time process, consisting of an initial ... Some state transitions can produce emissions. In a CTBN, each variable has a conditional intensity matrix Qu for every combination of ...

  17. Using Markov Decision Processes with Heterogeneous Queueing Systems to Examine Military MEDEVAC Dispatching Policies

    DTIC Science & Technology

    2017-03-23

    Air Force Institute of Technology, AFIT Scholar Theses and Dissertations, 23 March 2017. Thesis presented to the Faculty of the Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio. Distribution Statement A: approved for public release; distribution unlimited.

  18. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
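
    A minimal sketch of the simplest of these measures, the expected steady-state reward rate: solve πQ = 0 with Σπ = 1 and form Σ π_i r_i. The three-state availability model and its rates are assumed for illustration.

    ```python
    # Sketch: steady-state expected reward rate of a Markov reward model.
    # Assumed 3-state availability model: two up states (reward 1), one down state.
    import numpy as np

    Q = np.array([[-0.02,  0.02,  0.00],   # CTMC generator (rows sum to 0)
                  [ 0.50, -0.51,  0.01],
                  [ 0.00,  1.00, -1.00]])
    reward = np.array([1.0, 1.0, 0.0])     # reward rate 1 when up, 0 when down

    # stationary distribution: pi Q = 0 and pi 1 = 1, solved as an augmented system
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("stationary distribution:", pi.round(4))
    print("expected steady-state reward rate (availability):", float(pi @ reward))
    ```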

  19. Operations and support cost modeling using Markov chains

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support phase (OS). Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov Chain Process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov Chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov Chain process as a design-aid tool.
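
    A minimal sketch of how a Markov chain can roll up OS costs, assuming a hypothetical set of operational states, transition probabilities and per-visit costs: with an absorbing "retired" state, the fundamental matrix N = (I - Q)^{-1} gives the expected number of visits to each transient state, and N applied to a cost vector gives the expected life-cycle OS cost from each starting state.

    ```python
    # Sketch: expected O&S cost from an absorbing Markov chain (hypothetical values).
    import numpy as np

    # transient states: 0 = flight operations, 1 = scheduled maintenance, 2 = repair
    Q = np.array([[0.80, 0.15, 0.04],      # probabilities among transient states;
                  [0.90, 0.00, 0.08],      # the remaining mass in each row goes to
                  [0.85, 0.05, 0.05]])     # the absorbing "retired" state (not shown)
    cost = np.array([1.0, 3.0, 12.0])      # cost per state visit (arbitrary units)

    N = np.linalg.inv(np.eye(3) - Q)       # expected visits to j starting from i
    expected_cost = N @ cost
    print("expected O&S cost from each starting state:", expected_cost.round(1))
    print("expected state visits before retirement   :", N[0].sum().round(1))
    ```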

  20. Markov chain Monte Carlo techniques and spatial-temporal modelling for medical EIT.

    PubMed

    West, Robert M; Aykroyd, Robert G; Meng, Sha; Williams, Richard A

    2004-02-01

    Many imaging problems such as imaging with electrical impedance tomography (EIT) can be shown to be inverse problems: that is either there is no unique solution or the solution does not depend continuously on the data. As a consequence solution of inverse problems based on measured data alone is unstable, particularly if the mapping between the solution distribution and the measurements is also nonlinear as in EIT. To deliver a practical stable solution, it is necessary to make considerable use of prior information or regularization techniques. The role of a Bayesian approach is therefore of fundamental importance, especially when coupled with Markov chain Monte Carlo (MCMC) sampling to provide information about solution behaviour. Spatial smoothing is a commonly used approach to regularization. In the human thorax EIT example considered here nonlinearity increases the difficulty of imaging, using only boundary data, leading to reconstructions which are often rather too smooth. In particular, in medical imaging the resistivity distribution usually contains substantial jumps at the boundaries of different anatomical regions. With spatial smoothing these boundaries can be masked by blurring. This paper focuses on the medical application of EIT to monitor lung and cardiac function and uses explicit geometric information regarding anatomical structure and incorporates temporal correlation. Some simple properties are assumed known, or at least reliably estimated from separate studies, whereas others are estimated from the voltage measurements. This structural formulation will also allow direct estimation of clinically important quantities, such as ejection fraction and residual capacity, along with assessment of precision.

  1. Influence of credit scoring on the dynamics of Markov chain

    NASA Astrophysics Data System (ADS)

    Galina, Timofeeva

    2015-11-01

    Markov processes are widely used to model the dynamics of a credit portfolio and to forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares be described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management aimed at improving the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control affecting the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.

  2. A compositional framework for reaction networks

    NASA Astrophysics Data System (ADS)

    Baez, John C.; Pollard, Blake S.

    Reaction networks, or equivalently Petri nets, are a general framework for describing processes in which entities of various kinds interact and turn into other entities. In chemistry, where the reactions are assigned ‘rate constants’, any reaction network gives rise to a nonlinear dynamical system called its ‘rate equation’. Here we generalize these ideas to ‘open’ reaction networks, which allow entities to flow in and out at certain designated inputs and outputs. We treat open reaction networks as morphisms in a category. Composing two such morphisms connects the outputs of the first to the inputs of the second. We construct a functor sending any open reaction network to its corresponding ‘open dynamical system’. This provides a compositional framework for studying the dynamics of reaction networks. We then turn to statics: that is, steady state solutions of open dynamical systems. We construct a ‘black-boxing’ functor that sends any open dynamical system to the relation that it imposes between input and output variables in steady states. This extends our earlier work on black-boxing for Markov processes.

  3. Dynamic neutron scattering from conformational dynamics. I. Theory and Markov models

    NASA Astrophysics Data System (ADS)

    Lindner, Benjamin; Yi, Zheng; Prinz, Jan-Hendrik; Smith, Jeremy C.; Noé, Frank

    2013-11-01

    The dynamics of complex molecules can be directly probed by inelastic neutron scattering experiments. However, many of the underlying dynamical processes may exist on similar timescales, which makes it difficult to assign processes seen experimentally to specific structural rearrangements. Here, we show how Markov models can be used to connect structural changes observed in molecular dynamics simulation directly to the relaxation processes probed by scattering experiments. For this, a conformational dynamics theory of dynamical neutron and X-ray scattering is developed, following our previous approach for computing dynamical fingerprints of time-correlation functions [F. Noé, S. Doose, I. Daidone, M. Löllmann, J. Chodera, M. Sauer, and J. Smith, Proc. Natl. Acad. Sci. U.S.A. 108, 4822 (2011)]. Markov modeling is used to approximate the relaxation processes and timescales of the molecule via the eigenvectors and eigenvalues of a transition matrix between conformational substates. This procedure allows the establishment of a complete set of exponential decay functions and a full decomposition into the individual contributions, i.e., the contribution of every atom and dynamical process to each experimental relaxation process.
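
    A minimal sketch of the Markov-model step described here: the relaxation timescales entering the scattering expressions follow from the eigenvalues of the conformational transition matrix as t_k = -τ / ln λ_k. The three-state transition matrix and lag time are illustrative assumptions.

    ```python
    # Sketch: implied relaxation timescales from a Markov model transition matrix.
    import numpy as np

    tau = 1.0                              # lag time of the Markov model (ns, assumed)
    T = np.array([[0.90, 0.07, 0.03],      # row-stochastic, symmetric (hence
                  [0.07, 0.88, 0.05],      # reversible) transition matrix between
                  [0.03, 0.05, 0.92]])     # conformational substates

    eigvals = np.linalg.eigvalsh(T)        # real eigenvalues, ascending; last one is 1
    timescales = -tau / np.log(eigvals[:-1])   # drop the stationary eigenvalue
    print("implied relaxation timescales:", timescales.round(2))
    ```

    Each retained eigenvalue contributes one exponential decay to the computed correlation functions, which is the decomposition the abstract refers to.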

  4. Markov switching of the electricity supply curve and power prices dynamics

    NASA Astrophysics Data System (ADS)

    Mari, Carlo; Cananà, Lucianna

    2012-02-01

    Regime-switching models seem to well capture the main features of power prices behavior in deregulated markets. In a recent paper, we have proposed an equilibrium methodology to derive electricity prices dynamics from the interplay between supply and demand in a stochastic environment. In particular, assuming that the supply function is described by a power law where the exponent is a two-state strictly positive Markov process, we derived a regime switching dynamics of power prices in which regime switches are induced by transitions between Markov states. In this paper, we provide a dynamical model to describe the random behavior of power prices where the only non-Brownian component of the motion is endogenously introduced by Markov transitions in the exponent of the electricity supply curve. In this context, the stochastic process driving the switching mechanism becomes observable, and we will show that the non-Brownian component of the dynamics induced by transitions from Markov states is responsible for jumps and spikes of very high magnitude. The empirical analysis performed on three Australian markets confirms that the proposed approach seems quite flexible and capable of incorporating the main features of power prices time-series, thus reproducing the first four moments of log-returns empirical distributions in a satisfactory way.
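
    A stylised sketch, not the authors' equilibrium derivation: a two-state Markov chain switches the exponent of a power-law supply curve, and the price is computed from an exogenous mean-reverting demand, so that transitions into the high-exponent state generate spikes. All parameter values are assumptions.

    ```python
    # Sketch: Markov switching of a supply-curve exponent driving price spikes.
    import numpy as np

    rng = np.random.default_rng(3)
    alpha = np.array([2.0, 8.0])           # "normal" and "spiky" supply-curve exponents
    P_switch = np.array([[0.995, 0.005],   # daily regime transition probabilities
                         [0.100, 0.900]])
    c, n_days = 1.0, 1000

    s, d, prices, regimes = 0, 1.0, [], []
    for t in range(n_days):
        s = rng.choice(2, p=P_switch[s])                   # Markov regime switch
        d = 1.0 + 0.9 * (d - 1.0) + 0.05 * rng.normal()    # mean-reverting demand
        prices.append(c * max(d, 0.05) ** alpha[s])
        regimes.append(s)

    prices = np.array(prices)
    log_ret = np.diff(np.log(prices))
    kurt = float(((log_ret - log_ret.mean()) ** 4).mean() / log_ret.var() ** 2)
    print("fraction of days in the spike regime:", np.mean(regimes))
    print("kurtosis of log-returns             :", round(kurt, 2))
    ```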

  5. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  6. A fast exact simulation method for a class of Markov jump processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.

  7. Exact solution of the hidden Markov processes.

    PubMed

    Saakian, David B

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M-1.
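
    A numerical counterpoint to the exact solution, assuming illustrative transition and emission matrices: the entropy rate of a binary hidden Markov process can be estimated as -log P(x_1..n)/n on a long simulated sequence, with the likelihood computed by the standard scaled forward algorithm.

    ```python
    # Sketch: Monte Carlo estimate of the entropy rate of a binary hidden Markov
    # process via the forward algorithm (not the paper's functional-equation solution).
    import numpy as np

    rng = np.random.default_rng(4)
    A = np.array([[0.9, 0.1],        # hidden-state transition matrix (L = 2 states)
                  [0.2, 0.8]])
    B = np.array([[0.95, 0.05],      # emission probabilities (M = 2 observables)
                  [0.30, 0.70]])
    n = 200_000

    # simulate the hidden chain and its observations (start in state 0, assumed)
    h = np.empty(n, dtype=int)
    x = np.empty(n, dtype=int)
    h[0] = 0
    for t in range(n):
        if t > 0:
            h[t] = rng.choice(2, p=A[h[t - 1]])
        x[t] = rng.choice(2, p=B[h[t]])

    # scaled forward algorithm; the scalings accumulate log P(x_1..n)
    alpha = np.array([0.5, 0.5]) * B[:, x[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, n):
        alpha = (alpha @ A) * B[:, x[t]]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()

    print("numerical entropy rate estimate (nats/symbol):", -loglik / n)
    ```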

  8. Exact solution of the hidden Markov processes

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2017-11-01

    We write a master equation for the distributions related to hidden Markov processes (HMPs) and solve it using a functional equation. Thus the solution of HMPs is mapped exactly to the solution of the functional equation. For a general case the latter can be solved only numerically. We derive an exact expression for the entropy of HMPs. Our expression for the entropy is an alternative to the ones given before by the solution of integral equations. The exact solution is possible because actually the model can be considered as a generalized random walk on a one-dimensional strip. While we give the solution for the two second-order matrices, our solution can be easily generalized for the L values of the Markov process and M values of observables: We should be able to solve a system of L functional equations in the space of dimension M -1 .

  9. OPTIMIZING OBSERVER EFFORT FOR FIELD DETECTION OF REPRODUCTIVE EFFECTS IN BIRDS

    EPA Science Inventory

    Avian nest survival is best viewed as a Markov process with two absorbing states, death and fledging. We present a column-stochastic Markov chain from which all major Mayfield formulations of daily nest-survival can be derived contingent upon the degree of observer knowledge of e...
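
    A minimal sketch of the column-stochastic chain described here, with assumed (not EPA) values for the daily survival probability and the length of the nesting period: the Mayfield-style fledging probability equals the probability of remaining in the active state for the whole period.

    ```python
    # Sketch: nest survival as a Markov chain with absorbing states (death, fledging).
    import numpy as np

    s, n_days = 0.96, 25                   # assumed daily survival, days to fledging

    # Mayfield-style result: survive every day of the nesting period
    print("fledging probability (direct)      :", round(s ** n_days, 3))

    # same result from a column-stochastic chain with states (active, dead, fledged);
    # a nest still active after the full period is counted as fledged
    state = np.array([1.0, 0.0, 0.0])
    M = np.array([[s,       0.0, 0.0],     # columns give the daily fate of each state
                  [1.0 - s, 1.0, 0.0],
                  [0.0,     0.0, 1.0]])
    for day in range(n_days):
        state = M @ state
    print("fledging probability (chain-based) :", round(state[0], 3))
    ```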

  10. Dynamic rain fade compensation techniques for the advanced communications technology satellite

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1992-01-01

    The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.

  11. Stability of uncertain systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Blankenship, G. L.

    1971-01-01

    The asymptotic properties of feedback systems containing uncertain parameters and subjected to stochastic perturbations are discussed. The approach is functional analytic in flavor and thereby avoids the use of Markov techniques and auxiliary Lyapunov functionals characteristic of the existing work in this area. The results are given for the probability distributions of the accessible signals in the system and are proved using the Prohorov theory of the convergence of measures. For general nonlinear systems, a result similar to the small loop-gain theorem of deterministic stability theory is given. Boundedness is a property of the induced distributions of the signals and not the usual notion of boundedness in norm. For the special class of feedback systems formed by the cascade of a white noise, a sector nonlinearity, and a convolution operator, conditions are given to ensure the total boundedness of the overall feedback system.

  12. Markov decision processes in natural resources management: observability and uncertainty

    USGS Publications Warehouse

    Williams, Byron K.

    2015-01-01

    The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
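
    A minimal sketch of the basic computation behind fully observable Markov decision processes, value iteration, for a hypothetical three-state, two-action habitat management problem; all transition probabilities and rewards are invented for illustration.

    ```python
    # Sketch: value iteration for a small, fully observable MDP (hypothetical values).
    import numpy as np

    # 3 habitat states x 2 actions ("do nothing", "restore")
    P = np.array([                         # P[a, s, s'] transition probabilities
        [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.2, 0.8]],   # do nothing
        [[0.9, 0.1, 0.0], [0.5, 0.5, 0.0], [0.1, 0.6, 0.3]],   # restore
    ])
    R = np.array([                         # R[a, s] immediate reward (restoration costs)
        [10.0, 5.0, 0.0],
        [ 8.0, 3.0, -2.0],
    ])
    gamma = 0.95

    V = np.zeros(3)
    for _ in range(1000):
        Q = R + gamma * (P @ V)            # Q[a, s] action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    print("optimal values :", V.round(2))
    print("optimal policy :", Q.argmax(axis=0))   # best action index per state
    ```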

  13. Space system operations and support cost analysis using Markov chains

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.

    1990-01-01

    This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.

  14. Approximations and Implementations of Nonlinear Filtering Schemes.

    DTIC Science & Technology

    1988-02-01

    17) 0 0 3) P(fn) - (pf)n 4) Pf v0 - (Po <-> dp - (p0 dm is invariant under f (i.e. for all measurable A: (f’l(A)) - p(A) Remark: The Perron - Frobenius ...invariant density of the map f is then nothing else than the fixed point of the Perron - Frobenius operator. The following theorem by Lasota and Yorke [8...transition matrix R is defined. With this construct, the Perron - Frobenius operator is effectively 39 A A . w7 approximated (exact for Markov Maps)by

  15. Application of Markov Models for Analysis of Development of Psychological Characteristics

    ERIC Educational Resources Information Center

    Kuravsky, Lev S.; Malykh, Sergey B.

    2004-01-01

    A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…

  16. Modelling Faculty Replacement Strategies Using a Time-Dependent Finite Markov-Chain Process.

    ERIC Educational Resources Information Center

    Hackett, E. Raymond; Magg, Alexander A.; Carrigan, Sarah D.

    1999-01-01

    Describes the use of a time-dependent Markov-chain model to develop faculty-replacement strategies within a college at a research university. The study suggests that a stochastic modelling approach can provide valuable insight when planning for personnel needs in the immediate (five-to-ten year) future. (MSE)

  17. Cascade heterogeneous face sketch-photo synthesis via dual-scale Markov Network

    NASA Astrophysics Data System (ADS)

    Yao, Saisai; Chen, Zhenxue; Jia, Yunyi; Liu, Chengyun

    2018-03-01

    Heterogeneous face sketch-photo synthesis is an important and challenging task in computer vision, which has been widely applied in law enforcement and digital entertainment. Since synthesis results differ depending on the scale used, this paper proposes a cascade sketch-photo synthesis method based on a dual-scale Markov Network. Firstly, a Markov Network at the larger scale is used to synthesise the initial sketches, and the local vertical and horizontal neighbour search (LVHNS) method is used to find the neighbour patches of test patches in the training set. Then, the initial sketches and test photos are jointly entered into the smaller-scale Markov Network. Finally, the fine sketches are obtained after the cascade synthesis process. Extensive experimental results on various databases demonstrate the superiority of the proposed method compared with several state-of-the-art methods.

  18. Adiabatic reduction of a model of stochastic gene expression with jump Markov process.

    PubMed

    Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C

    2014-04-01

    This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.

  19. Modeling the coupled return-spread high frequency dynamics of large tick assets

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and bid-ask spread is almost always equal to one tick, display a dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models based on Nasdaq stocks and we show that this model reproduces remarkably well the statistical properties of real data.

  20. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It's difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building the analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). This classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, the dynamic observation vector in the measurement space is processed by DTW to obtain an error vector containing the fault features of the system under test. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of feature extraction from dynamic process vectors of complex systems such as aero-engines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.

  1. Developing a statistically powerful measure for quartet tree inference using phylogenetic identities and Markov invariants.

    PubMed

    Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D

    2017-12-01

    Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as, in this case only, the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for models with more than two states (for example, DNA sequence alignments with four-state models) we find that methods which rely on phylogenetic invariants are incapable of satisfying all three of the stated statistical properties. This is because in these cases the relevant Markov invariants belong to a class of polynomials independent from the phylogenetic invariants.

  2. An Ensemble-Based Smoother with Retrospectively Updated Weights for Highly Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Chin, T. M.; Turmon, M. J.; Jewell, J. B.; Ghil, M.

    2006-01-01

    Monte Carlo computational methods have been introduced into data assimilation for nonlinear systems in order to alleviate the computational burden of updating and propagating the full probability distribution. By propagating an ensemble of representative states, algorithms like the ensemble Kalman filter (EnKF) and the resampled particle filter (RPF) rely on the existing modeling infrastructure to approximate the distribution based on the evolution of this ensemble. This work presents an ensemble-based smoother that is applicable to the Monte Carlo filtering schemes like EnKF and RPF. At the minor cost of retrospectively updating a set of weights for ensemble members, this smoother has demonstrated superior capabilities in state tracking for two highly nonlinear problems: the double-well potential and trivariate Lorenz systems. The algorithm does not require retrospective adaptation of the ensemble members themselves, and it is thus suited to a streaming operational mode. The accuracy of the proposed backward-update scheme in estimating non-Gaussian distributions is evaluated by comparison to the more accurate estimates provided by a Markov chain Monte Carlo algorithm.

  3. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
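
    A minimal sketch of the kind of analysis described: estimate a sleep-stage transition matrix from a hypnogram (one label per 30 s epoch) and compare a stage's empirical mean dwell time with the geometric mean 1/(1 - p_ii) implied by a first-order Markov model. The hypnogram is simulated from an assumed matrix, standing in for real scored data.

    ```python
    # Sketch: transition matrix and dwell-time check for a (simulated) hypnogram.
    import numpy as np

    rng = np.random.default_rng(5)
    stages = ["wake", "light", "deep", "REM"]
    P_true = np.array([[0.90, 0.08, 0.01, 0.01],   # assumed, stands in for real data
                       [0.05, 0.85, 0.06, 0.04],
                       [0.02, 0.10, 0.86, 0.02],
                       [0.04, 0.06, 0.01, 0.89]])

    # simulate a night-long hypnogram (960 epochs of 30 s = 8 h)
    hyp = [0]
    for _ in range(959):
        hyp.append(rng.choice(4, p=P_true[hyp[-1]]))
    hyp = np.array(hyp)

    # maximum-likelihood transition matrix from epoch-to-epoch transitions
    C = np.zeros((4, 4))
    for a, b in zip(hyp[:-1], hyp[1:]):
        C[a, b] += 1
    P_hat = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
    print("estimated transition matrix:\n", P_hat.round(2))

    # run lengths of constant stages, and the stage each run belongs to
    runs = np.diff(np.flatnonzero(np.diff(np.r_[-1, hyp, -1]) != 0))
    run_stage = hyp[np.cumsum(runs) - 1]
    wake_runs = runs[run_stage == 0]
    print("geometric mean wake dwell (epochs):", 1 / (1 - P_hat[0, 0]))
    print("empirical mean wake dwell (epochs):", wake_runs.mean())
    ```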

  4. Students' Progress throughout Examination Process as a Markov Chain

    ERIC Educational Resources Information Center

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical methods in economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  5. Markov Chains for Investigating and Predicting Migration: A Case from Southwestern China

    NASA Astrophysics Data System (ADS)

    Qin, Bo; Wang, Yiyu; Xu, Haoming

    2018-03-01

    In order to accurately predict the population’s happiness, this paper conducted two demographic surveys on a new district of a city in western China, and carried out a dynamic analysis using related mathematical methods. This paper argues that the migration of migrants in the city will change the pattern of spatial distribution of human resources in the city and thus affect the social and economic development in all districts. The migration status of the population will change randomly with the passage of time, so it can be predicted and analyzed through the Markov process. The Markov process provides the local government and decision-making bureau a valid basis for the dynamic analysis of the mobility of migrants in the city as well as the ways for promoting happiness of local people’s lives.

  6. The Communication Link and Error ANalysis (CLEAN) simulator

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.; Crowe, Shane

    1993-01-01

    During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
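
    A minimal sketch of Markov chain channel modelling in the spirit of item (9): a two-state Gilbert-Elliott model in which "good" and "bad" states carry different bit-error probabilities, producing the bursty, memory-bearing errors mentioned above. Transition and error probabilities are assumptions, not TDRSS values.

    ```python
    # Sketch: two-state Gilbert-Elliott channel model with memory (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(6)
    p_gb, p_bg = 0.001, 0.05         # good->bad and bad->good transition probabilities
    e_good, e_bad = 1e-5, 0.05       # bit-error probability in each state
    n_bits = 200_000

    state = 0                        # 0 = good, 1 = bad
    errors = np.zeros(n_bits, dtype=bool)
    for i in range(n_bits):
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
        errors[i] = rng.random() < (e_bad if state else e_good)

    # stationary fraction of time in the bad state is p_gb / (p_gb + p_bg)
    print("simulated bit error rate  :", errors.mean())
    print("theoretical average BER   :", (p_gb * e_bad + p_bg * e_good) / (p_gb + p_bg))
    ```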

  7. An introduction of Markov chain Monte Carlo method to geochemical inverse problems: Reading melting parameters from REE abundances in abyssal peridotites

    NASA Astrophysics Data System (ADS)

    Liu, Boda; Liang, Yan

    2017-04-01

    Markov chain Monte Carlo (MCMC) simulation is a powerful statistical method in solving inverse problems that arise from a wide range of applications. In Earth sciences applications of MCMC simulations are primarily in the field of geophysics. The purpose of this study is to introduce MCMC methods to geochemical inverse problems related to trace element fractionation during mantle melting. MCMC methods have several advantages over least squares methods in deciphering melting processes from trace element abundances in basalts and mantle rocks. Here we use an MCMC method to invert for extent of melting, fraction of melt present during melting, and extent of chemical disequilibrium between the melt and residual solid from REE abundances in clinopyroxene in abyssal peridotites from Mid-Atlantic Ridge, Central Indian Ridge, Southwest Indian Ridge, Lena Trough, and American-Antarctic Ridge. We consider two melting models: one with exact analytical solution and the other without. We solve the latter numerically in a chain of melting models according to the Metropolis-Hastings algorithm. The probability distribution of inverted melting parameters depends on assumptions of the physical model, knowledge of mantle source composition, and constraints from the REE data. Results from MCMC inversion are consistent with and provide more reliable uncertainty estimates than results based on nonlinear least squares inversion. We show that chemical disequilibrium is likely to play an important role in fractionating LREE in residual peridotites during partial melting beneath mid-ocean ridge spreading centers. MCMC simulation is well suited for more complicated but physically more realistic melting problems that do not have analytical solutions.
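
    A minimal sketch of the Metropolis-Hastings algorithm on a toy inverse problem, inferring a degree of melting F from noisy synthetic concentrations generated with the standard batch-melting relation c = c0 / (D + F(1 - D)); the forward model, prior and noise level are illustrative, and this is not the article's melting model.

    ```python
    # Sketch: Metropolis-Hastings MCMC for a toy trace-element melting inversion.
    import numpy as np

    rng = np.random.default_rng(7)
    D = np.array([0.01, 0.05, 0.20])       # partition coefficients of three toy elements
    c0 = np.array([1.0, 1.0, 1.0])         # source concentrations (normalised)
    F_true, sigma = 0.08, 0.05

    def forward(F):
        return c0 / (D + F * (1.0 - D))    # batch melting equation

    obs = forward(F_true) * (1.0 + sigma * rng.normal(size=3))   # synthetic data

    def log_post(F):
        if not 0.0 < F < 0.3:              # uniform prior on (0, 0.3)
            return -np.inf
        resid = (obs - forward(F)) / (sigma * obs)
        return -0.5 * np.sum(resid ** 2)

    F, chain = 0.15, []
    for it in range(50_000):
        F_prop = F + 0.01 * rng.normal()   # symmetric random-walk proposal
        if np.log(rng.random()) < log_post(F_prop) - log_post(F):
            F = F_prop
        chain.append(F)

    chain = np.array(chain[5000:])         # discard burn-in
    print("posterior mean and std of F:", chain.mean().round(3), chain.std().round(3))
    ```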

  8. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    NASA Astrophysics Data System (ADS)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process, and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.

  9. On the Mathematical Consequences of Binning Spike Trains.

    PubMed

    Cessac, Bruno; Le Ny, Arnaud; Löcherbach, Eva

    2017-01-01

    We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, that is, to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future, and we discuss how binning may affect our conclusions on this ability. We finally comment on the possible consequences of binning in the detection of spurious phase transitions or in the detection of incorrect evidence of criticality.

  10. Estimation in a semi-Markov transformation model

    PubMed Central

    Dabrowska, Dorota M.

    2012-01-01

    Multi-state models provide a common tool for analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe evolution of a disease and assume that a patient may move among a finite number of states representing different phases in the disease progression. Several authors have developed extensions of the proportional hazard model for analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583

  11. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  12. Kalman-Filter-Based Orientation Determination Using Inertial/Magnetic Sensors: Observability Analysis and Performance Evaluation

    PubMed Central

    Sabatini, Angelo Maria

    2011-01-01

    In this paper we present a quaternion-based Extended Kalman Filter (EKF) for estimating the three-dimensional orientation of a rigid body. The EKF exploits the measurements from an Inertial Measurement Unit (IMU) that is integrated with a tri-axial magnetic sensor. Magnetic disturbances and gyro bias errors are modeled and compensated by including them in the filter state vector. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system that describes the process of motion tracking by the IMU is observable, namely it may provide sufficient information for performing the estimation task with bounded estimation errors. The observability conditions are that the magnetic field, perturbed by first-order Gauss-Markov magnetic variations, and the gravity vector are not collinear and that the IMU is subject to some angular motions. Computer simulations and experimental testing are presented to evaluate the algorithm performance, including when the observability conditions are critical. PMID:22163689

  13. Quantifying parameter uncertainty in stochastic models using the Box Cox transformation

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Kuczera, George; Wang, Q. J.

    2002-08-01

    The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies on the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
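
    For reference, a two-line illustration of the Box-Cox transformation itself on synthetic skewed data (here scipy chooses the transformation parameter by maximum likelihood, whereas the study treats it as an unknown within the Bayesian analysis):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      rain = rng.gamma(shape=2.0, scale=300.0, size=120)   # skewed, positive "annual rainfall"

      z, lam = stats.boxcox(rain)     # z = (rain**lam - 1)/lam, lambda chosen by MLE
      print(f"estimated lambda = {lam:.2f}")
      print(f"skewness before = {stats.skew(rain):.2f}, after = {stats.skew(z):.2f}")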

  14. Optimal satisfaction degree in energy harvesting cognitive radio networks

    NASA Astrophysics Data System (ADS)

    Li, Zan; Liu, Bo-Yang; Si, Jiang-Bo; Zhou, Fu-Hui

    2015-12-01

    A cognitive radio (CR) network with energy harvesting (EH) is considered to improve both spectrum efficiency and energy efficiency. A hidden Markov model (HMM) is used to characterize the imperfect spectrum sensing process. In order to maximize the whole satisfaction degree (WSD) of the cognitive radio network, a tradeoff between the average throughput of the secondary user (SU) and the interference to the primary user (PU) is analyzed. We formulate the satisfaction degree optimization problem as a mixed integer nonlinear programming (MINLP) problem. The satisfaction degree optimization problem is solved by using a differential evolution (DE) algorithm. The proposed optimization problem allows the network to adaptively achieve the optimal solution based on its required quality of service (QoS). Numerical results are given to verify our analysis. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the Doctorial Programs Foundation of the Ministry of Education of China (Grant No. 20110203110011), and the 111 Project (Grant No. B08038).

  15. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.

  16. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled through ordinary differential equations, which are piece-wise continuous, and switch between different states according to a Markov jump process known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ɛ^(-1). That is, we show that, for a constant C, the probability that the time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ɛ).

  17. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density.

  18. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov Chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
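
    A schematic of the parametric bootstrap loop described above (a toy linear, regularized inversion stands in for the nonlinear ERT inversion; the operator, noise level, and regularization are invented for illustration): resample the data with the estimated noise, re-run the inversion, and summarize the spread of the results.

      import numpy as np

      rng = np.random.default_rng(4)
      G = rng.standard_normal((40, 5))          # toy forward operator
      m_true = np.array([0.1, 0.3, 0.2, 0.05, 0.25])
      sigma = 0.02                              # estimated observational error
      d_obs = G @ m_true + sigma * rng.standard_normal(40)

      def invert(d):
          # regularized least squares, in place of the full nonlinear inversion
          return np.linalg.solve(G.T @ G + 1e-3 * np.eye(5), G.T @ d)

      boot = np.array([invert(d_obs + sigma * rng.standard_normal(40)) for _ in range(500)])
      print("bootstrap mean :", np.round(boot.mean(axis=0), 3))
      print("bootstrap std  :", np.round(boot.std(axis=0), 3))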

  19. Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process.

    PubMed

    Jäger, Mark; Kiel, Alexander; Herten, Dirk-Peter; Hamprecht, Fred A

    2009-10-05

    We present a photon-by-photon analysis framework for the evaluation of data from single-molecule fluorescence spectroscopy (SMFS) experiments using a Markov-modulated Poisson process (MMPP). A MMPP combines a discrete (and hidden) Markov process with an additional Poisson process reflecting the observation of individual photons. The algorithmic framework is used to automatically analyze the dynamics of the complex formation and dissociation of Cu2+ ions with the bidentate ligand 2,2'-bipyridine-4,4'-dicarboxylic acid (dcbpy) in aqueous media. The process of association and dissociation of Cu2+ ions is monitored with SMFS. The dcbpy-DNA conjugate can exist in two or more distinct states which influence the photon emission rates. The advantage of a photon-by-photon analysis is that no information is lost in preprocessing steps. Different model complexities are investigated in order to best describe the recorded data and to determine transition rates on a photon-by-photon basis. The main strength of the method is that it allows the detection of intermittent phenomena that are masked by binning and are difficult to find using correlation techniques when they are short-lived.
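
    A minimal simulation of a two-state MMPP of the kind described (the dwell times and photon rates are invented): a hidden chain alternates between a bright and a dark state, and photon arrivals within each sojourn follow a Poisson process at that state's rate.

      import numpy as np

      rng = np.random.default_rng(5)
      mean_dwell = np.array([0.5, 2.0])   # mean sojourn times (s) of the hidden states
      rate = np.array([1e4, 1e3])         # photon emission rates (counts/s)

      t, state, photons = 0.0, 0, []
      while t < 10.0:                     # simulate ten seconds of data
          dwell = rng.exponential(mean_dwell[state])
          n = rng.poisson(rate[state] * dwell)                 # photons in this sojourn
          photons.extend(np.sort(t + dwell * rng.uniform(size=n)))
          t += dwell
          state = 1 - state               # two-state chain: each state switches to the other

      photons = np.asarray(photons)
      print(f"{photons.size} photons in {t:.1f} s; mean rate {photons.size / t:.0f} counts/s")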

  20. Markov Chains For Testing Redundant Software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.

  1. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  2. Copula-based prediction of economic movements

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Hirsh, I. D.

    2016-06-01

    In this paper we model the discretized returns of two paired time series, the BM&FBOVESPA Dividend Index and the BM&FBOVESPA Public Utilities Index, using multivariate Markov models. The discretization uses three categories: high losses, high profits, and the complementary periods of the series. In technical terms, the maximal memory that can be considered for a Markov model can be derived from the sizes of the alphabet and the dataset. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain, and in this case the database is not large enough for consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the two marginal processes and the partition corresponding to the multivariate Markov chain. To estimate the transition probabilities, all the partitions are linked using a copula. In our application this strategy provides a significant improvement in the movement predictions.

  3. Stochastic-shielding approximation of Markov chains and its application to efficiently simulate random ion-channel gating.

    PubMed

    Schmandt, Nicolaus T; Galán, Roberto F

    2012-09-14

    Markov chains provide realistic models of numerous stochastic processes in nature. We demonstrate that in any Markov chain, the change in occupation number in state A is correlated to the change in occupation number in state B if and only if A and B are directly connected. This implies that if we are only interested in state A, fluctuations in B may be replaced with their mean if state B is not directly connected to A, which shortens computing time considerably. We show the accuracy and efficacy of our approximation theoretically and in simulations of stochastic ion-channel gating in neurons.

  4. Combination of Markov state models and kinetic networks for the analysis of molecular dynamics simulations of peptide folding.

    PubMed

    Radford, Isolde H; Fersht, Alan R; Settanni, Giovanni

    2011-06-09

    Atomistic molecular dynamics simulations of the TZ1 beta-hairpin peptide have been carried out using an implicit model for the solvent. The trajectories have been analyzed using a Markov state model defined on the projections along two significant observables and a kinetic network approach. The Markov state model allowed for an unbiased identification of the metastable states of the system, and provided the basis for commitment probability calculations performed on the kinetic network. The kinetic network analysis served to extract the main transition state for folding of the peptide and to validate the results from the Markov state analysis. The combination of the two techniques allowed for a consistent and concise characterization of the dynamics of the peptide. The slowest relaxation process identified is the exchange between variably folded and denatured species, and the second slowest process is the exchange between two different subsets of the denatured state which could not be otherwise identified by simple inspection of the projected trajectory. The third slowest process is the exchange between a fully native and a partially folded intermediate state characterized by a native turn with a proximal backbone H-bond, and frayed side-chain packing and termini. The transition state for the main folding reaction is similar to the intermediate state, although a more native like side-chain packing is observed.

  5. Non-Linear Dynamics of Saturn's Rings

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.

    2016-12-01

    Non-linear processes can explain why Saturn's rings are so active and dynamic. Ring systems differ from simple linear systems in two significant ways: (1) they are systems of granular material in which particle-to-particle collisions dominate, so a kinetic rather than a fluid description is needed; stresses are strikingly inhomogeneous and fluctuations are large compared to equilibrium. (2) They are strongly forced by resonances, which drive a non-linear response that pushes the system across thresholds leading to persistent states. Some of this non-linearity is captured in a simple predator-prey model: periodic forcing from the moon causes streamline crowding, which damps the relative velocity; about a quarter phase later, the aggregates stir the system back to higher relative velocity, and the limit cycle repeats each orbit, with relative velocity ranging from nearly zero to a multiple of the orbit average. Summary of halo results: a predator-prey model for ring dynamics produces transient structures like 'straw' that can explain the halo morphology and spectroscopy. Cyclic velocity changes cause perturbed regions to reach higher collision speeds at some orbital phases, which preferentially removes small regolith particles; surrounding particles diffuse back too slowly to erase the effect, giving the halo morphology. This requires energetic collisions (v ≈ 10 m/s, with throw distances of about 200 km, implying objects of scale R ≈ 20 km). Transform to Duffing equation: with the coordinate transformation z = M^(2/3), the predator-prey equations can be combined into a single second-order differential equation with harmonic resonance forcing. Ring dynamics and history implications: moon-triggered clumping explains both small and large particles at resonances. We calculate the stationary size distribution using a cell-to-cell mapping procedure that converts the phase-plane trajectories to a Markov chain. Approximating it as an asymmetric random walk with reflecting boundaries determines the power-law index, using results of numerical simulations in the tidal environment. Aggregates can explain many dynamic aspects of the rings and can renew rings by shielding and recycling the material within them, depending on how long the mass is sequestered. We can ask: are Saturn's rings a chaotic non-linear driven system?

  6. Markov Tracking for Agent Coordination

    NASA Technical Reports Server (NTRS)

    Washington, Richard; Lau, Sonie (Technical Monitor)

    1998-01-01

    Partially observable Markov decision processes (POMDPs) are an attractive representation for agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that restricts the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.

  7. Stochastic Dynamics through Hierarchically Embedded Markov Chains

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.

    2017-02-01

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  8. Constructing 1/ω^α noise from reversible Markov chains.

    PubMed

    Erland, Sveinung; Greenwood, Priscilla E

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR(1) processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.
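
    A rough numerical check in the spirit of the aggregation argument cited above (all rates are invented, and this is not the paper's construction): superposing reversible two-state chains whose switching rates are spread over several decades produces an approximately 1/ω spectrum over the corresponding frequency band.

      import numpy as np

      rng = np.random.default_rng(6)
      T = 2 ** 16
      x = np.zeros(T)
      for rate in np.logspace(-4, -0.5, 12):          # per-step flip probabilities
          flips = rng.uniform(size=T) < rate
          x += 1.0 - 2.0 * (np.cumsum(flips) % 2)     # +/-1 telegraph signal

      freq = np.fft.rfftfreq(T)[1:]
      power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
      band = (freq > 1e-3) & (freq < 1e-1)
      slope = np.polyfit(np.log(freq[band]), np.log(power[band]), 1)[0]
      print(f"fitted spectral slope = {slope:.2f}  (alpha ~ {-slope:.2f})")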

  9. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    PubMed

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects-such as mutations in evolutionary dynamics and a random exploration of choices in social systems-including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  10. Reduced equations of motion for quantum systems driven by diffusive Markov processes.

    PubMed

    Sarovar, Mohan; Grace, Matthew D

    2012-09-28

    The expansion of a stochastic Liouville equation for the coupled evolution of a quantum system and an Ornstein-Uhlenbeck process into a hierarchy of coupled differential equations is a useful technique that simplifies the simulation of stochastically driven quantum systems. We expand the applicability of this technique by completely characterizing the class of diffusive Markov processes for which a useful hierarchy of equations can be derived. The expansion of this technique enables the examination of quantum systems driven by non-Gaussian stochastic processes with bounded range. We present an application of this extended technique by simulating Stark-tuned Förster resonance transfer in Rydberg atoms with nonperturbative position fluctuations.

  11. A Langevin equation for the rates of currency exchange based on the Markov analysis

    NASA Astrophysics Data System (ADS)

    Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2007-11-01

    We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process, and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D^(1) and D^(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic time series that represent the daily rates of exchange of various currencies versus the U.S. dollar.
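
    A schematic of estimating drift and diffusion coefficients directly from a time series via conditional moments, in the spirit of the abstract (the data here are a synthetic Ornstein-Uhlenbeck series standing in for currency returns, with known coefficients to check against):

      import numpy as np

      rng = np.random.default_rng(7)
      dt, n = 1.0, 200_000
      x = np.zeros(n)
      for t in range(n - 1):                          # OU process: drift -0.1*x, diffusion 0.04
          x[t + 1] = x[t] - 0.1 * x[t] * dt + np.sqrt(2 * 0.04 * dt) * rng.standard_normal()

      dx = np.diff(x)
      bins = np.linspace(-0.75, 0.75, 7)              # bin the state space
      idx = np.digitize(x[:-1], bins)
      for b in range(1, 7):
          sel = idx == b
          xc = 0.5 * (bins[b - 1] + bins[b])
          D1 = dx[sel].mean() / dt                    # first conditional moment -> drift
          D2 = (dx[sel] ** 2).mean() / (2 * dt)       # second conditional moment -> diffusion
          # D2 carries a small positive bias of order (D1*dt)^2/2 at finite sampling interval
          print(f"x={xc:+.3f}  D1={D1:+.4f} (true {-0.1 * xc:+.4f})  D2={D2:.4f} (true 0.0400)")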

  12. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  13. Mathematical model of the loan portfolio dynamics in the form of Markov chain considering the process of new customers attraction

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana

    2017-12-01

    A mathematical model of loan portfolio structure change in the form of a Markov chain is explored. The model treats in one scheme the process of customer attraction, customer selection based on credit score, and loan repayment. It describes the dynamics of the structure and volume of the loan portfolio, which allows medium-term forecasts of profitability and risk. Within the model, corrective actions by bank management to increase lending volumes or to reduce risk are formalized.

  14. A Bayesian model for visual space perception

    NASA Technical Reports Server (NTRS)

    Curry, R. E.

    1972-01-01

    A model for visual space perception is proposed that contains desirable features from the theories of Gibson and Brunswik. This model is a Bayesian processor of proximal stimuli which contains three important elements: an internal model of the Markov process describing the knowledge of the distal world, the a priori distribution of the state of the Markov process, and an internal model relating state to proximal stimuli. The universality of the model is discussed and it is compared with signal detection theory models. Experimental results of Kinchla are used as a special case.

  15. On spatial mutation-selection models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kondratiev, Yuri; Kutoviy, Oleksandr

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation for stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system including the limiting behavior is studied for two types of mutation-selection models.

  16. Saccade selection when reward probability is dynamically manipulated using Markov chains

    PubMed Central

    Lovejoy, Lee P.; Krauzlis, Richard J.

    2012-01-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200–600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection. PMID:18330552
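
    A toy simulation of the task structure (not the experimental code; the transition matrix is invented) comparing the reward rates of an idealized probability-matching chooser and an idealized maximizer that always selects the most likely next rewarded stimulus:

      import numpy as np

      rng = np.random.default_rng(8)
      M = np.array([[0.1, 0.6, 0.2, 0.1],      # P(next rewarded stimulus | current one)
                    [0.1, 0.1, 0.7, 0.1],
                    [0.6, 0.1, 0.1, 0.2],
                    [0.2, 0.2, 0.2, 0.4]])

      rewarded, match_hits, max_hits = 0, 0, 0
      n_trials = 10_000
      for _ in range(n_trials):
          nxt = rng.choice(4, p=M[rewarded])                       # chain assigns the reward
          match_hits += int(rng.choice(4, p=M[rewarded]) == nxt)   # choose ~ true probabilities
          max_hits += int(np.argmax(M[rewarded]) == nxt)           # always pick the modal transition
          rewarded = nxt

      print(f"probability-matching reward rate: {match_hits / n_trials:.2f}")
      print(f"maximizing reward rate:           {max_hits / n_trials:.2f}")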

  17. Saccade selection when reward probability is dynamically manipulated using Markov chains.

    PubMed

    Nummela, Samuel U; Lovejoy, Lee P; Krauzlis, Richard J

    2008-05-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200-600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection.

  18. Indexed semi-Markov process for wind speed modeling.

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    The increasing interest in renewable energy leads scientific research toward better ways of recovering as much of the available energy as possible. In particular, the maximum energy recoverable from wind equals 59.3% of the available energy (the Betz limit), reached at a specific pitch angle and when the ratio between output and input wind speed equals 1/3. The pitch angle is the angle between the airfoil of the turbine blade and the wind direction. Older turbines, and many currently on the market, have a fixed airfoil geometry, so they operate at an efficiency below 59.3%. New-generation wind turbines instead vary the pitch angle by rotating the blades, which lets them recover the maximum energy at different wind speeds, operating at the Betz limit over a range of speed ratios. A powerful pitch-angle control system also allows the turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to optimize turbine design and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. Synthetic wind-speed data are a powerful tool for verifying turbine structures and for estimating the energy recoverable at a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. A similar study, restricted to the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind-speed data. An attempt to model wind speed and direction jointly is presented in [3], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All of these models use Markov chains to generate synthetic wind-speed time series, but the search for a better model remains open. Addressing this issue, we apply new models that generalize Markov models; more precisely, we apply semi-Markov models to generate synthetic wind-speed time series. In previous work we proposed several semi-Markov models and showed their ability to reproduce the autocorrelation structure of wind-speed data; the autocorrelation was higher than that of the Markov model but still too small compared with the empirical one. To overcome this problem of low autocorrelation, in this paper we propose an indexed semi-Markov model: we assume that wind speed is described by a discrete-time homogeneous semi-Markov process and introduce a memory index that accounts for periods of different wind activity. With this model the statistical characteristics of wind speed are faithfully reproduced. Wind is a very unstable phenomenon characterized by a sequence of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the predictive semi-Markov model, the persistence of the synthetic winds was calculated and averaged.
The model is used to generate synthetic wind-speed time series by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare the statistical properties of the proposed model with those of real data and with a time series generated by a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802.
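
    A minimal generic semi-Markov sketch in this spirit (the states, embedded transition matrix, and Weibull sojourn parameters are invented; the indexed model adds a memory index on top of a scheme like this): the embedded chain picks the next speed state, and a state-dependent waiting-time distribution decides how long the speed is held.

      import numpy as np

      rng = np.random.default_rng(9)
      speeds = np.array([2.0, 5.0, 9.0, 14.0])            # representative speeds (m/s) per state
      P = np.array([[0.0, 0.7, 0.2, 0.1],                 # embedded chain (no self-transitions)
                    [0.4, 0.0, 0.5, 0.1],
                    [0.1, 0.5, 0.0, 0.4],
                    [0.1, 0.2, 0.7, 0.0]])
      shape = 1.5
      scale = np.array([8.0, 5.0, 4.0, 3.0])              # Weibull sojourn scales (in 10-min steps)

      state, series = 1, []
      while len(series) < 5000:
          sojourn = max(1, int(round(scale[state] * rng.weibull(shape))))
          series.extend([speeds[state]] * sojourn)        # hold the speed for the sojourn
          state = rng.choice(4, p=P[state])
      series = np.asarray(series[:5000])

      lag = 6
      ac = np.corrcoef(series[:-lag], series[lag:])[0, 1]
      print(f"lag-{lag} autocorrelation of the synthetic wind speed: {ac:.2f}")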

  19. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
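
    A small sketch of the linear Bellman solution for a discrete LMDP (the ring-world passive dynamics and state costs below are invented): the desirability function z = exp(-v) is the leading eigenvector of diag(exp(-q))·P, and the optimal controlled transitions reweight the passive ones by z.

      import numpy as np

      n = 20
      P = np.zeros((n, n))                                 # passive dynamics: random walk on a ring
      for i in range(n):
          P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5
      q = 0.5 * (np.arange(n) - n // 2) ** 2 / n           # state cost, cheapest at state n//2

      G = np.diag(np.exp(-q)) @ P                          # linear Bellman operator
      w, V = np.linalg.eig(G)
      k = np.argmax(w.real)                                # principal (Perron) eigenpair
      z = np.abs(V[:, k].real)                             # desirability z = exp(-v), up to scale
      v = -np.log(z / z.max())                             # value function up to an additive constant

      U = P * z[None, :]                                   # optimal policy: u*(x'|x) ∝ p(x'|x) z(x')
      U /= U.sum(axis=1, keepdims=True)
      print(f"lowest-value state: {int(np.argmin(v))} (state cost is minimal at {n // 2})")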

  20. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

    Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.

  1. An abstract language for specifying Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1986-01-01

    Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.

  2. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    ERIC Educational Resources Information Center

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  3. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes.

  4. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization.

    PubMed

    Stifter, Cynthia A; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at two and six months of age, used hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad is becoming more organized around the soothing interaction. Using hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.

  5. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  6. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing-times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  7. Modeling dyadic processes using Hidden Markov Models: A time series approach to mother-infant interactions during infant immunization

    PubMed Central

    Stifter, Cynthia A.; Rovine, Michael

    2016-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at two and six months of age, used hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high intensity crying to no crying with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that with maturation and experience, the mother-infant dyad is becoming more organized around the soothing interaction. Using hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed. PMID:27284272

  8. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  9. Computer modeling of lung cancer diagnosis-to-treatment process

    PubMed Central

    Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick

    2015-01-01

    We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging, and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model for the lung cancer diagnosis process leading up to surgical treatment are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their applications in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure for deriving closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
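
    For illustration, the kind of closed-form quantity an analytical (Markov chain) model of a care pathway yields, on an invented four-state diagnosis-to-treatment chain: the expected number of steps to reach the absorbing treatment state, from the fundamental matrix of the absorbing chain.

      import numpy as np

      # states: 0 referral, 1 imaging, 2 biopsy/staging, 3 treatment (absorbing); probabilities invented
      P = np.array([[0.1, 0.8, 0.1, 0.0],
                    [0.0, 0.2, 0.7, 0.1],
                    [0.0, 0.0, 0.3, 0.7],
                    [0.0, 0.0, 0.0, 1.0]])
      Q = P[:3, :3]                                   # transient-to-transient block
      N = np.linalg.inv(np.eye(3) - Q)                # fundamental matrix
      steps = N.sum(axis=1)                           # expected visits = expected steps to absorption
      print("expected steps to treatment from each stage:", np.round(steps, 2))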

  10. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature.
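
    A very small analogue of the inversion idea (not the paper's PDE-based method): tune the rates of a two-state open/closed channel so that model-predicted summary statistics match target values by minimizing a cost function; the statistics and rates below are invented.

      import numpy as np
      from scipy.optimize import minimize

      def open_stats(k_open, k_close):
          # two-state channel: stationary open probability and mean open time
          return k_open / (k_open + k_close), 1.0 / k_close

      target = np.array(open_stats(2.0, 5.0))          # pretend these came from data

      def cost(log_rates):
          k_open, k_close = np.exp(log_rates)          # optimize in log space to keep rates positive
          model = np.array(open_stats(k_open, k_close))
          return np.sum((model - target) ** 2)

      res = minimize(cost, x0=np.log([1.0, 1.0]))
      print("recovered rates:", np.round(np.exp(res.x), 3), "(true: [2. 5.])")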

  11. First and second order semi-Markov chains for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Prattico, F.; Petroni, F.; D'Amico, G.

    2012-04-01

    The increasing interest in renewable energy leads scientific research to find a better way to recover most of the available energy. Particularly, the maximum energy recoverable from wind is equal to 59.3% of that available (Betz law) at a specific pitch angle and when the ratio between the wind speed in output and in input is equal to 1/3. The pitch angle is the angle formed between the airfoil of the blade of the wind turbine and the wind direction. Old turbine and a lot of that actually marketed, in fact, have always the same invariant geometry of the airfoil. This causes that wind turbines will work with an efficiency that is lower than 59.3%. New generation wind turbines, instead, have a system to variate the pitch angle by rotating the blades. This system able the wind turbines to recover, at different wind speed, always the maximum energy, working in Betz limit at different speed ratios. A powerful system control of the pitch angle allows the wind turbine to recover better the energy in transient regime. A good stochastic model for wind speed is then needed to help both the optimization of turbine design and to assist the system control to predict the value of the wind speed to positioning the blades quickly and correctly. The possibility to have synthetic data of wind speed is a powerful instrument to assist designer to verify the structures of the wind turbines or to estimate the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular in [3] is presented a comparison between a first-order Markov chain and a second-order Markov chain. A similar work, but only for the first-order Markov chain, is conduced by [2], presenting the probability transition matrix and comparing the energy spectral density and autocorrelation of real and synthetic wind speed data. A tentative to modeling and to join speed and direction of wind is presented in [1], by using two models, first-order Markov chain with different number of states, and Weibull distribution. All this model use Markov chains to generate synthetic wind speed time series but the search for a better model is still open. Approaching this issue, we applied new models which are generalization of Markov models. More precisely we applied semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMP) are a wide class of stochastic processes which generalize at the same time both Markov chains and renewal processes. Their main advantage is that of using whatever type of waiting time distribution for modeling the time to have a transition from one state to another one. This major flexibility has a price to pay: availability of data to estimate the parameters of the model which are more numerous. Data availability is not an issue in wind speed studies, therefore, semi-Markov models can be used in a statistical efficient way. 
In this work we present three different semi-Markov chain models: the first is a first-order SMP in which the transition probabilities between two speed states (at times Tn-1 and Tn) depend on the initial state (the state at Tn-1), the final state (the state at Tn) and the waiting time t = Tn - Tn-1; the second is a second-order SMP in which the transition probabilities also depend on the state the wind speed occupied before the initial state (the state at Tn-2); and the last is again a second-order SMP in which the transition probabilities depend on the three states at Tn-2, Tn-1 and Tn and on the waiting times t_1 = Tn-1 - Tn-2 and t_2 = Tn - Tn-1. The three models are used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare the statistical properties of the proposed models with those of real data and also with a time series generated through a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [2] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [3] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418.
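
    A minimal sketch of the first model (a first-order SMP with an explicit waiting-time distribution) is given below. The speed states, embedded transition matrix, and Weibull waiting-time parameters are invented for illustration and are not the estimates reported by the authors.

```python
# Illustrative sketch (not the authors' code): generate a synthetic wind speed
# series from a first-order semi-Markov chain with Weibull waiting times.
# The transition matrix, state speeds and Weibull parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
speeds = np.array([2.0, 5.0, 8.0, 12.0])          # m/s, one value per state
P = np.array([[0.6, 0.3, 0.1, 0.0],               # embedded transition matrix
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.4, 0.2],
              [0.0, 0.2, 0.3, 0.5]])
shape, scale = 1.5, 3.0                           # Weibull waiting-time parameters

def simulate(n_hours, state=0):
    t, series = 0, []
    while t < n_hours:
        wait = max(1, int(round(scale * rng.weibull(shape))))   # hours in state
        series.extend([speeds[state]] * min(wait, n_hours - t))
        t += wait
        state = rng.choice(len(speeds), p=P[state])
    return np.array(series)

synthetic = simulate(24 * 365)
# Time-lagged autocorrelation, the statistic used in the paper for comparison:
lag = 24
acf = np.corrcoef(synthetic[:-lag], synthetic[lag:])[0, 1]
print("lag-24 autocorrelation:", acf)
```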

  12. On the Limiting Markov Process of Energy Exchanges in a Rarely Interacting Ball-Piston Gas

    NASA Astrophysics Data System (ADS)

    Bálint, Péter; Gilbert, Thomas; Nándori, Péter; Szász, Domokos; Tóth, Imre Péter

    2017-02-01

    We analyse the process of energy exchanges generated by the elastic collisions between a point-particle, confined to a two-dimensional cell with convex boundaries, and a `piston', i.e. a line-segment, which moves back and forth along a one-dimensional interval partially intersecting the cell. This model can be considered as the elementary building block of a spatially extended high-dimensional billiard modeling heat transport in a class of hybrid materials exhibiting the kinetics of gases and spatial structure of solids. Using heuristic arguments and numerical analysis, we argue that, in a regime of rare interactions, the billiard process converges to a Markov jump process for the energy exchanges and obtain the expression of its generator.

  13. Collective effect of personal behavior induced preventive measures and differential rate of transmission on spread of epidemics

    NASA Astrophysics Data System (ADS)

    Sagar, Vikram; Zhao, Yi

    2017-02-01

    In the present work, the effect of personal-behavior-induced preventive measures on the spread of epidemics is studied over scale-free networks characterized by differential rates of disease transmission. The role of these preventive measures is parameterized in terms of a variable λ, which modulates the number of concurrent contacts a node makes with the fraction of its neighboring nodes. The dynamics of the disease is described by a non-linear Susceptible Infected Susceptible model based upon the discrete time Markov Chain method. The network mean field approach is generalized to account for the effect of non-linear coupling between the aforementioned factors on the collective dynamics of nodes. The upper bound estimates of the disease outbreak threshold obtained from the mean field theory are found to be in good agreement with the corresponding non-linear stochastic model. From the results of the parametric study, it is shown that the epidemic size depends inversely on the preventive measures (λ). It has also been shown that an increase in the average degree of the nodes lowers the time of spread and enhances the size of epidemics.
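
    As a rough sketch of a discrete-time Markov chain SIS update with a contact-modulation parameter, the following mean-field iteration scales the effective transmission along each edge by λ. The random network, the rates, and the exact form of the coupling are illustrative assumptions rather than the authors' model.

```python
# Minimal sketch (assumptions, not the authors' exact model): discrete-time
# mean-field SIS update on a network, where lambda_ scales the fraction of
# neighbours a node actually contacts per time step.
import numpy as np

rng = np.random.default_rng(1)
N = 200
A = (rng.random((N, N)) < 0.05).astype(float)      # random contact network
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

beta, mu, lambda_ = 0.08, 0.2, 0.5                 # transmission, recovery, contact modulation
p = np.full(N, 0.05)                               # initial infection probabilities

for _ in range(200):
    # probability of NOT being infected by any contacted infected neighbour
    q = np.prod(1.0 - beta * lambda_ * A * p, axis=1)
    p = (1.0 - q) * (1.0 - p) + (1.0 - mu) * p     # new infections + non-recoveries
print("epidemic size (mean infection probability):", p.mean())
```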

  14. Theory and Applications of Weakly Interacting Markov Processes

    DTIC Science & Technology

    2018-02-03

    Moderate deviation principles for stochastic dynamical systems. Boston University, Math Colloquium, March 27, 2015. • Moderate Deviation Principles for...Markov chain approximation method. Submitted. [8] E. Bayraktar and M. Ludkovski. Optimal trade execution in illiquid markets. Math. Finance, 21(4):681–701, 2011. [9] E. Bayraktar and M. Ludkovski. Liquidation in limit order books with controlled intensity. Math. Finance, 24(4):627–650, 2014. [10] P.D

  15. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.

  16. Semi-Markov Models for Degradation-Based Reliability

    DTIC Science & Technology

    2010-01-01

    standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter...We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and...provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros

  17. Semi-Markov Approach to the Shipping Safety Modelling

    NASA Astrophysics Data System (ADS)

    Guze, Sambor; Smolarek, Leszek

    2012-02-01

    In the paper the navigational safety model of a ship on the open area has been studied under conditions of incomplete information. Moreover the structure of semi-Markov processes is used to analyse the stochastic ship safety according to the subjective acceptance of risk by the navigator. In addition, the navigator’s behaviour can be analysed by using the numerical simulation to estimate the probability of collision in the safety model.

  18. Markov Decision Process Measurement Model.

    PubMed

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.

  19. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    NASA Technical Reports Server (NTRS)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
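
    A minimal discrete-time sketch of the idea is shown below: a coupled pair of exponentially correlated (Gauss-Markov) processes, one feeding the other, whose covariance stays bounded over long propagation intervals, unlike a random walk. The correlation times and noise levels are placeholders, not the flight clock parameters.

```python
# Rough sketch of the idea (parameters are illustrative, not the flight model):
# a first-order Gauss-Markov bias driven by a second, exponentially correlated
# drift process stays bounded in variance even over long data outages.
import numpy as np

rng = np.random.default_rng(2)
dt, tau1, tau2 = 1.0, 500.0, 5000.0        # step [s] and correlation times
sigma_b, sigma_d = 1e-9, 1e-12             # steady-state std of bias and drift
phi1, phi2 = np.exp(-dt / tau1), np.exp(-dt / tau2)

n = 10000
bias = np.zeros(n)
drift = np.zeros(n)
for k in range(1, n):
    # drift process feeds the bias process, giving second-order behaviour
    drift[k] = phi2 * drift[k - 1] + sigma_d * np.sqrt(1 - phi2**2) * rng.standard_normal()
    bias[k] = phi1 * bias[k - 1] + dt * drift[k - 1] + \
              sigma_b * np.sqrt(1 - phi1**2) * rng.standard_normal()
print("bias std over run:", bias.std())    # remains bounded, unlike a random walk
```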

  20. Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.

    PubMed

    Grossi, Giuliano

    2009-08-01

    Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly effecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, so avoiding oscillatory behaviors or asymptotically instable convergence. Presence of stochastic dynamics potentially prevents the network to fall into shallow local minima of the energy function, i.e., quite far from global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating such process. The model uses pseudo-Boolean functions both to express problem constraints and cost function; a combination of these two functions is then interpreted as energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function. That is, functions easily derived by closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model characterizing its state space distribution at thermal equilibrium in terms of Markov chain and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from the computational graph theory.

  1. BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs

    PubMed Central

    Eklund, Anders; Dufort, Paul; Villani, Mattias; LaConte, Stephen

    2014-01-01

    Analysis of functional magnetic resonance imaging (fMRI) data is becoming ever more computationally demanding as temporal and spatial resolutions improve, and large, publicly available data sets proliferate. Moreover, methodological improvements in the neuroimaging pipeline, such as non-linear spatial normalization, non-parametric permutation tests and Bayesian Markov Chain Monte Carlo approaches, can dramatically increase the computational burden. Despite these challenges, there do not yet exist any fMRI software packages which leverage inexpensive and powerful graphics processing units (GPUs) to perform these analyses. Here, we therefore present BROCCOLI, a free software package written in OpenCL (Open Computing Language) that can be used for parallel analysis of fMRI data on a large variety of hardware configurations. BROCCOLI has, for example, been tested with an Intel CPU, an Nvidia GPU, and an AMD GPU. These tests show that parallel processing of fMRI data can lead to significantly faster analysis pipelines. This speedup can be achieved on relatively standard hardware, but further, dramatic speed improvements require only a modest investment in GPU hardware. BROCCOLI (running on a GPU) can perform non-linear spatial normalization to a 1 mm3 brain template in 4–6 s, and run a second level permutation test with 10,000 permutations in about a minute. These non-parametric tests are generally more robust than their parametric counterparts, and can also enable more sophisticated analyses by estimating complicated null distributions. Additionally, BROCCOLI includes support for Bayesian first-level fMRI analysis using a Gibbs sampler. The new software is freely available under GNU GPL3 and can be downloaded from github (https://github.com/wanderine/BROCCOLI/). PMID:24672471

  2. Constructing 1/ω^α noise from reversible Markov chains

    NASA Astrophysics Data System (ADS)

    Erland, Sveinung; Greenwood, Priscilla E.

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.
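
    The covariance/spectral-density representation mentioned here can be sketched on a toy reversible chain: with a symmetric, doubly stochastic transition matrix, the eigendecomposition gives the lag-k covariance of an observable as a mixture of geometric terms, and its spectrum as a superposition of AR(1)-type spectra. The chain and observable below are illustrative and are not tuned to produce 1/ω^α scaling.

```python
# Sketch of the covariance/spectral-density representation for a reversible
# chain (toy reflecting random walk; not the constructions of the paper).
import numpy as np

K = 50
P = np.zeros((K, K))
for i in range(K):                       # reflecting random walk on K states
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, K - 1)] += 0.5
pi = np.full(K, 1.0 / K)                 # uniform stationary distribution
f = np.arange(K, dtype=float)            # observable assigned to the states
fc = f - f @ pi                          # centred observable

evals, evecs = np.linalg.eigh(P)         # P is symmetric here, so eigh applies
weights = (evecs.T @ fc) ** 2 / K        # spectral weights of the observable

def spectral_density(omega):
    """S(omega) = sum_j w_j (1 - lam_j^2) / (1 - 2 lam_j cos(omega) + lam_j^2)."""
    lam = evals
    return np.sum(weights * (1 - lam**2) /
                  (1 - 2 * lam * np.cos(omega) + lam**2))

for omega in [0.01, 0.03, 0.1, 0.3, 1.0]:
    print(f"omega={omega:5.2f}  S={spectral_density(omega):10.3f}")
```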

  3. Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions

    NASA Astrophysics Data System (ADS)

    Güler, Marifi

    2017-10-01

    Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.

  4. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk

    PubMed Central

    Wei, Shaoceng; Kryscio, Richard J.

    2015-01-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient, cognitive states and death as a competing risk (Figure 1). Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a Semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001

  5. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    PubMed

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.

  6. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
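
    To make the approach concrete, here is a stripped-down, hypothetical version: a single-state ODE with a Hill-type production term is discretized with a classical Runge-Kutta step, and a random-walk Metropolis sampler (a simple MCMC variant) draws the two Hill parameters from their posterior given noisy synthetic data. The model, noise level, and flat priors are illustrative, not the LV/MI pathway model of the paper.

```python
# Stripped-down illustration (not the authors' pathway model): Metropolis
# sampling of two Hill-equation parameters (Vmax, Km) in
# dx/dt = Vmax*u^n/(Km^n + u^n) - d*x, discretized with an RK4 step.
import numpy as np

rng = np.random.default_rng(3)
n_hill, d, dt, u = 2.0, 0.5, 0.1, 1.0

def rk4_step(x, vmax, km):
    f = lambda x: vmax * u**n_hill / (km**n_hill + u**n_hill) - d * x
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    return x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

def simulate(vmax, km, n=100):
    xs, x = [], 0.0
    for _ in range(n):
        x = rk4_step(x, vmax, km)
        xs.append(x)
    return np.array(xs)

true_vmax, true_km, sigma = 2.0, 0.8, 0.05
data = simulate(true_vmax, true_km) + sigma * rng.standard_normal(100)

def log_post(theta):
    vmax, km = theta
    if vmax <= 0 or km <= 0:
        return -np.inf                               # flat prior on (0, inf)
    resid = data - simulate(vmax, km)
    return -0.5 * np.sum(resid**2) / sigma**2

theta, lp = np.array([1.0, 1.0]), None
lp = log_post(theta)
samples = []
for _ in range(5000):                                # random-walk Metropolis
    prop = theta + 0.05 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
print("posterior means (Vmax, Km):", np.mean(samples[2500:], axis=0))
```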

  7. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

    In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each the matching random walk takes values in the finite set of states ; notice that is a subset of , where is the unitary circle. The infinitesimal generator of such a chain is denoted by . The stationary probability for such a process converges to the uniform distribution on the circle, when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We will disturb this process by a potential and study for each the perturbed stationary measures of this new process when . We disturb the system considering a fixed potential and we will denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize such a semigroup, and then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential , see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such a Markov Chain is denoted by . We assume that the maximum of is attained at a unique point of , and from this it will follow that . Thus, here, our main goal is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such a family of Markov Chains, when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov Chains we analyze.

  8. Discrete time Markov chains (DTMC) susceptible infected susceptible (SIS) epidemic model with two pathogens in two patches

    NASA Astrophysics Data System (ADS)

    Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami

    2017-06-01

    The SIS epidemic model describes a pattern of disease spread in which recovered individuals can be infected more than once. The numbers of susceptible and infected individuals at each time step follow a discrete time Markov process, which can be represented by a discrete time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented in terms of transition probabilities. Application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.
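
    The transition-probability formulation can be sketched for the simplest case, one pathogen in one patch, where the numbers of new infections and recoveries per step are binomial draws; the full model adds analogous transitions for each pathogen in each patch plus coupling between patches. The population size and rates below are made up.

```python
# Minimal sketch of a DTMC SIS chain for a single pathogen in a single patch;
# the two-pathogen, two-patch model adds analogous binomial transitions for
# each pathogen/patch combination. All rates here are illustrative.
import numpy as np

rng = np.random.default_rng(4)
N, beta, gamma = 500, 0.3, 0.1       # population, infection and recovery scales
I = 5                                # initially infected

trajectory = [I]
for _ in range(200):
    p_inf = 1.0 - np.exp(-beta * I / N)             # per-susceptible infection prob.
    new_inf = rng.binomial(N - I, p_inf)            # binomial transition (DTMC)
    new_rec = rng.binomial(I, gamma)                # recoveries back to susceptible
    I = I + new_inf - new_rec
    trajectory.append(I)
print("final infected count:", trajectory[-1])
```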

  9. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented that extends the Markov chain method by considering a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  10. A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca

    2014-02-15

    Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.

  11. Revisiting Temporal Markov Chains for Continuum modeling of Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Delgoshaie, A. H.; Jenny, P.; Tchelepi, H.

    2017-12-01

    The transport of fluids in porous media is dominated by flow-field heterogeneity resulting from the underlying permeability field. Due to the high uncertainty in the permeability field, many realizations of the reference geological model are used to describe the statistics of the transport phenomena in a Monte Carlo (MC) framework. There has been strong interest in working with stochastic formulations of the transport that are different from the standard MC approach. Several stochastic models based on a velocity process for tracer particle trajectories have been proposed. Previous studies have shown that for high variances of the log-conductivity, the stochastic models need to account for correlations between consecutive velocity transitions to predict dispersion accurately. The correlated velocity models proposed in the literature can be divided into two general classes of temporal and spatial Markov models. Temporal Markov models have been applied successfully to tracer transport in both the longitudinal and transverse directions. These temporal models are Stochastic Differential Equations (SDEs) with very specific drift and diffusion terms tailored for a specific permeability correlation structure. The drift and diffusion functions devised for a certain setup would not necessarily be suitable for a different scenario (e.g., a different permeability correlation structure). The spatial Markov models are simple discrete Markov chains that do not require case specific assumptions. However, transverse spreading of contaminant plumes has not been successfully modeled with the available correlated spatial models. Here, we propose a temporal discrete Markov chain to model both the longitudinal and transverse dispersion in a two-dimensional domain. We demonstrate that these temporal Markov models are valid for different correlation structures without modification. Similar to the temporal SDEs, the proposed model respects the limited asymptotic transverse spreading of the plume in two-dimensional problems.

  12. Exploring the WTI crude oil price bubble process using the Markov regime switching model

    NASA Astrophysics Data System (ADS)

    Zhang, Yue-Jun; Wang, Jing

    2015-03-01

    The sharp volatility of the West Texas Intermediate (WTI) crude oil price in the past decade prompts us to investigate the price bubbles and their evolving process. Empirical results indicate that the fundamental price of WTI crude oil appears relatively more stable than the market-trading price, which verifies the existence of oil price bubbles during the sample period. Besides, by allowing the WTI crude oil price bubble process to switch between two states (regimes) according to a first-order Markov chain, we are able to statistically discriminate upheaval from stable states in the crude oil price bubble process; and most of the time, the stable state dominates the WTI crude oil price bubbles, while the upheaval state usually proves short-lived and accompanies unexpected market events.
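
    The regime-switching mechanism can be illustrated with a toy simulation: a two-state first-order Markov chain selects between a low-volatility "stable" regime and a short-lived, high-volatility "upheaval" regime. The transition probabilities and volatilities are invented for illustration and are not the paper's estimates.

```python
# Illustrative simulation (not the fitted model): a two-regime Markov switching
# process where a "stable" state has low volatility and a short-lived
# "upheaval" state has high volatility.
import numpy as np

rng = np.random.default_rng(5)
P = np.array([[0.98, 0.02],     # stable -> stable / upheaval
              [0.30, 0.70]])    # upheaval tends to be short-lived
sigma = np.array([0.5, 3.0])    # regime volatilities

state, series = 0, []
for _ in range(1000):
    series.append(sigma[state] * rng.standard_normal())
    state = rng.choice(2, p=P[state])
series = np.array(series)

# Long-run fraction of time in the upheaval regime (stationary probability):
print("stationary upheaval probability:", P[0, 1] / (P[0, 1] + P[1, 0]))
```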

  13. Nonequilibrium thermodynamic potentials for continuous-time Markov chains.

    PubMed

    Verley, Gatien

    2016-01-01

    We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.

  14. Inferring soil salinity in a drip irrigation system from multi-configuration EMI measurements using adaptive Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zaib Jadoon, Khan; Umer Altaf, Muhammad; McCabe, Matthew Francis; Hoteit, Ibrahim; Muhammad, Nisar; Moghadas, Davood; Weihermüller, Lutz

    2017-10-01

    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying the optimal model parameters and the uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In MCMC the posterior distribution is computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD Mini-Explorer. Uncertainty in the parameters of the three-layered earth model is investigated by using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters, as compared to the layer electrical conductivities, are not very informative and are therefore difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the parameters of the model can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.

  15. Patchwork sampling of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked to cover the whole state space. We extend the concept of truncated Markov chains which is originally formulated for processes which obey detailed balance to processes not fulfilling detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains.

  16. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    PubMed

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.

  17. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  18. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.

    PubMed

    Hauskrecht, M; Fraser, H

    1998-01-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.

  19. Neyman, Markov processes and survival analysis.

    PubMed

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.

  20. The cutoff phenomenon in finite Markov chains.

    PubMed Central

    Diaconis, P

    1996-01-01

    Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633

  1. Covariate adjustment of event histories estimated from Markov chains: the additive approach.

    PubMed

    Aalen, O O; Borgan, O; Fekjaer, H

    2001-12-01

    Markov chain models are frequently used for studying event histories that include transitions between several states. An empirical transition matrix for nonhomogeneous Markov chains has previously been developed, including a detailed statistical theory based on counting processes and martingales. In this article, we show how to estimate transition probabilities dependent on covariates. This technique may, e.g., be used for making estimates of individual prognosis in epidemiological or clinical studies. The covariates are included through nonparametric additive models on the transition intensities of the Markov chain. The additive model allows for estimation of covariate-dependent transition intensities, and again a detailed theory exists based on counting processes. The martingale setting now allows for a very natural combination of the empirical transition matrix and the additive model, resulting in estimates that can be expressed as stochastic integrals, and hence their properties are easily evaluated. Two medical examples will be given. In the first example, we study how the lung cancer mortality of uranium miners depends on smoking and radon exposure. In the second example, we study how the probability of being in response depends on patient group and prophylactic treatment for leukemia patients who have had a bone marrow transplantation. A program in R and S-PLUS that can carry out the analyses described here has been developed and is freely available on the Internet.

  2. Long-range memory and non-Markov statistical effects in human sensorimotor coordination

    NASA Astrophysics Data System (ADS)

    M. Yulmetyev, Renat; Emelyanova, Natalya; Hänggi, Peter; Gafarov, Fail; Prokhorov, Alexander

    2002-12-01

    In this paper, the non-Markov statistical processes and long-range memory effects in human sensorimotor coordination are investigated. The theoretical basis of this study is the statistical theory of non-stationary discrete non-Markov processes in complex systems (Phys. Rev. E 62, 6178 (2000)). The human sensorimotor coordination was experimentally studied by means of a standard dynamical tapping test on a group of 32 young people with tap numbers up to 400. This test was carried out separately for the right and the left hand according to the degree of domination of each brain hemisphere. The numerical analysis of the experimental results was made with the help of power spectra of the initial time correlation function, the memory functions of low orders and the first three points of the statistical spectrum of the non-Markovity parameter. Our observations demonstrate that, with regard to the results of the standard dynamic tapping test, it is possible to divide all examinees into five different dynamic types. We have introduced a conflict coefficient to quantitatively estimate the order-disorder effects underlying living systems; it reflects an imbalance between nervous and motor coordination in humans. The suggested classification of neurophysiological activity represents a dynamic generalization of the well-known neuropsychological types and provides a new approach in modern neuropsychology.

  3. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
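
    As a small illustration of this workflow (with invented failure and repair rates rather than the plant data), the sketch below builds the generator of a two-unit repairable system and integrates the Chapman-Kolmogorov equations dp/dt = pQ with a fourth-order Runge-Kutta step to obtain the long-run availability.

```python
# Minimal sketch (illustrative rates, not the plant's): a two-unit repairable
# system modelled as a Markov birth-death process. The Chapman-Kolmogorov
# equations dp/dt = p Q are integrated with a classical RK4 step.
import numpy as np

lam, mu = 0.01, 0.5                      # failure and repair rates (exponential)
# States: 0 = both units up, 1 = one failed, 2 = both failed (system down)
Q = np.array([[-2*lam,    2*lam,    0.0],
              [    mu, -(mu+lam),   lam],
              [   0.0,       mu,   -mu]])

def rk4(p, dt):
    f = lambda p: p @ Q
    k1 = f(p); k2 = f(p + 0.5*dt*k1); k3 = f(p + 0.5*dt*k2); k4 = f(p + dt*k3)
    return p + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

p, dt = np.array([1.0, 0.0, 0.0]), 0.1
for _ in range(20000):                   # integrate to (near) steady state
    p = rk4(p, dt)
print("long-run availability (states 0 and 1):", p[0] + p[1])
```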

  4. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
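
    A minimal version of the order-selection idea is sketched below: transition counts for each length-k history get a Dirichlet prior, the marginal likelihood (evidence) then has a closed form, and comparing it across k recovers the order of a simulated first-order chain. The prior strength and the data-generating chain are illustrative choices.

```python
# Minimal sketch: infer the order of a Markov chain by comparing the
# Dirichlet-multinomial evidence (log marginal likelihood) across k.
import numpy as np
from math import lgamma
from collections import defaultdict

rng = np.random.default_rng(6)
# Generate data from a true first-order binary chain.
P = np.array([[0.9, 0.1], [0.4, 0.6]])
x = [0]
for _ in range(2000):
    x.append(rng.choice(2, p=P[x[-1]]))

def log_evidence(seq, k, n_symbols=2, alpha=1.0):
    """Log marginal likelihood of a k-th order chain with Dirichlet(alpha) priors."""
    counts = defaultdict(lambda: np.zeros(n_symbols))
    for t in range(k, len(seq)):
        counts[tuple(seq[t-k:t])][seq[t]] += 1
    le = 0.0
    for c in counts.values():
        le += lgamma(n_symbols * alpha) - n_symbols * lgamma(alpha)
        le += sum(lgamma(alpha + ci) for ci in c) - lgamma(n_symbols * alpha + c.sum())
    return le

for k in range(4):
    print("order", k, "log evidence:", round(log_evidence(x, k), 1))
# The evidence is typically maximized at the true order (k = 1 here).
```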

  5. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    PubMed

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
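
    The MMPP structure itself is straightforward to simulate: a hidden two-state continuous-time chain switches the Poisson tip rate between a dry and a wet regime. The rates below are placeholders, not the Beaufort Park estimates, and no covariates are included.

```python
# Quick sketch (illustrative rates): simulate a two-state Markov modulated
# Poisson process, where a hidden continuous-time chain switches the Poisson
# arrival rate of bucket tips between a "dry" and a "wet" regime.
import numpy as np

rng = np.random.default_rng(7)
q = np.array([0.1, 0.5])        # leave rates of the hidden states (per hour)
lam = np.array([0.05, 4.0])     # tip rates in each hidden state (per hour)

t, state, T_end, arrivals = 0.0, 0, 500.0, []
while t < T_end:
    dwell = rng.exponential(1.0 / q[state])          # time spent in this regime
    n = rng.poisson(lam[state] * dwell)              # tips during the dwell time
    arrivals.extend(np.sort(t + dwell * rng.random(n)))
    t += dwell
    state = 1 - state                                # alternate hidden state
print("number of simulated bucket tips:", len(arrivals))
```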

  6. Inferring phenomenological models of Markov processes from data

    NASA Astrophysics Data System (ADS)

    Rivera, Catalina; Nemenman, Ilya

    Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the Kinetic Proofreading process (KPR), a common mechanism used by cells for increasing specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description has been noticed previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.

  7. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  8. Markov Processes in Image Processing

    NASA Astrophysics Data System (ADS)

    Petrov, E. P.; Kharina, N. L.

    2018-05-01

    Digital images are used as an information carrier in different sciences and technologies. There is a tendency to increase the number of bits per image pixel in order to obtain more information. In this paper, some methods for compression and contour detection based on two-dimensional Markov chains are offered. Increasing the number of bits per pixel allows fine object details to be resolved more precisely, but it significantly complicates image processing. The proposed image processing methods do not concede anything in efficiency to well-known analogues, and they surpass them in processing speed. An image is separated into binary images that are processed in parallel, so processing does not slow down as the number of bits per pixel increases. A further advantage of the methods is their low consumption of energy resources: only logical procedures are used, with no arithmetic operations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.

  9. Hidden Markov models and other machine learning approaches in computational molecular biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed. They are: Hidden Markov models; artificial Neural Networks; Belief Networks; and Stochastic Grammars. When dealing with DNA and protein primary sequences, hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov Models, and how to apply them to problems in molecular biology.

  10. Decentralized control of Markovian decision processes: Existence of Sigma-admissible policies

    NASA Technical Reports Server (NTRS)

    Greenland, A.

    1980-01-01

    The problem of formulating and analyzing Markov decision models having decentralized information and decision patterns is examined. Included are basic examples as well as the mathematical preliminaries needed to understand Markov decision models and, further, to superimpose decentralized decision structures on them. The notion of a variance admissible policy for the model is introduced and it is proved that there exist (possibly nondeterministic) optimal policies from the class of variance admissible policies. Directions for further research are explored.

  11. Noise in Nonlinear Dynamical Systems 3 Volume Paperback Set

    NASA Astrophysics Data System (ADS)

    Moss, Frank; McClintock, P. V. E.

    2011-11-01

    Volume 1: List of contributors; Preface; Introduction to volume one; 1. Noise-activated escape from metastable states: an historical view Rolf Landauer; 2. Some Markov methods in the theory of stochastic processes in non-linear dynamical systems R. L. Stratonovich; 3. Langevin equations with coloured noise J. M. Sancho and M. San Miguel; 4. First passage time problems for non-Markovian processes Katja Lindenberg, Bruce J. West and Jaume Masoliver; 5. The projection approach to the Fokker-Planck equation: applications to phenomenological stochastic equations with coloured noises Paolo Grigolini; 6. Methods for solving Fokker-Planck equations with applications to bistable and periodic potentials H. Risken and H. D. Vollmer; 7. Macroscopic potentials, bifurcations and noise in dissipative systems Robert Graham; 8. Transition phenomena in multidimensional systems - models of evolution W. Ebeling and L. Schimansky-Geier; 9. Coloured noise in continuous dynamical systems: a functional calculus approach Peter Hanggi; Appendix. On the statistical treatment of dynamical systems L. Pontryagin, A. Andronov and A. Vitt; Index. Volume 2: List of contributors; Preface; Introduction to volume two; 1. Stochastic processes in quantum mechanical settings Ronald F. Fox; 2. Self-diffusion in non-Markovian condensed-matter systems Toyonori Munakata; 3. Escape from the underdamped potential well M. Buttiker; 4. Effect of noise on discrete dynamical systems with multiple attractors Edgar Knobloch and Jeffrey B. Weiss; 5. Discrete dynamics perturbed by weak noise Peter Talkner and Peter Hanggi; 6. Bifurcation behaviour under modulated control parameters M. Lucke; 7. Period doubling bifurcations: what good are they? Kurt Wiesenfeld; 8. Noise-induced transitions Werner Horsthemke and Rene Lefever; 9. Mechanisms for noise-induced transitions in chemical systems Raymond Kapral and Edward Celarier; 10. State selection dynamics in symmetry-breaking transitions Dilip K. Kondepudi; 11. Noise in a ring-laser gyroscope K. Vogel, H. Risken and W. Schleich; 12. Control of noise and applications to optical systems L. A. Lugiato, G. Broggi, M. Merri and M. A. Pernigo; 13. Transition probabilities and spectral density of fluctuations of noise driven bistable systems M. I. Dykman, M. A. Krivoglaz and S. M. Soskin; Index. Volume 3: List of contributors; Preface; Introduction to volume three; 1. The effects of coloured quadratic noise on a turbulent transition in liquid He II J. T. Tough; 2. Electrohydrodynamic instability of nematic liquid crystals: growth process and influence of noise S. Kai; 3. Suppression of electrohydrodynamic instabilities by external noise Helmut R. Brand; 4. Coloured noise in dye laser fluctuations R. Roy, A. W. Yu and S. Zhu; 5. Noisy dynamics in optically bistable systems E. Arimondo, D. Hennequin and P. Glorieux; 6. Use of an electronic model as a guideline in experiments on transient optical bistability W. Lange; 7. Computer experiments in nonlinear stochastic physics Riccardo Mannella; 8. Analogue simulations of stochastic processes by means of minimum component electronic devices Leone Fronzoni; 9. Analogue techniques for the study of problems in stochastic nonlinear dynamics P. V. E. McClintock and Frank Moss; Index.

  12. The application of Markov decision process in restaurant delivery robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into the aisles and customers coming and going, so traditional path planning algorithms are far from ideal. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. First, MDR is used to plan a global path, and the robot navigates along it. When the sensor detects no obstruction ahead, the immediate reward of the corresponding state is increased; when an obstacle is detected, a new global path avoiding the obstacle is planned from the current position and the immediate reward of that state is reduced. This continues until the target is reached. After the robot has learned for a period of time, it can avoid places where obstacles are often present when planning its path. Simulation experiments show that the algorithm achieves good results for global path planning in a dynamic environment.
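
    A hedged sketch of this replanning loop is given below, using plain value iteration over per-cell immediate rewards on a small grid: when an obstacle is sensed on the planned path, the reward of that cell is lowered and a new global path is planned from the current position. The grid, rewards, and discount are illustrative, and this is not the authors' exact MDR formulation.

```python
# Illustrative replanning sketch: value iteration over per-cell immediate
# rewards on a grid, with the reward of a sensed obstacle cell reduced before
# replanning from the robot's current position.
import numpy as np

H, W, goal = 8, 8, (7, 7)
reward = -np.ones((H, W)); reward[goal] = 100.0      # step cost and goal reward
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def plan(reward, start, gamma=0.95, iters=200):
    V = np.zeros((H, W))
    for _ in range(iters):                           # value iteration
        for i in range(H):
            for j in range(W):
                if (i, j) == goal:
                    V[i, j] = reward[goal]; continue
                nbrs = [(i+di, j+dj) for di, dj in moves
                        if 0 <= i+di < H and 0 <= j+dj < W]
                V[i, j] = reward[i, j] + gamma * max(V[a, b] for a, b in nbrs)
    path, pos = [start], start                       # greedy walk along V
    while pos != goal and len(path) < H * W:
        nbrs = [(pos[0]+di, pos[1]+dj) for di, dj in moves
                if 0 <= pos[0]+di < H and 0 <= pos[1]+dj < W]
        pos = max(nbrs, key=lambda c: V[c])
        path.append(pos)
    return path

path = plan(reward, (0, 0))
blocked = path[3]                    # obstacle sensed at a cell on the path
reward[blocked] = -50.0              # reduce that state's immediate reward
path = plan(reward, path[2])         # replan from the current position
print("replanned path avoids", blocked, ":", blocked not in path)
```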

  13. Radiative transfer calculated from a Markov chain formalism

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
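
    The linear-system viewpoint can be sketched with a toy absorbing-chain calculation: if Q holds the probabilities of scattering from one layer state to another, the expected number of scatterings over all orders solves (I - Qᵀ)v = s for a first-scattering source s, and an escape weighting applied to v gives a diffuse intensity without iterating over scattering orders. All numbers below are invented for a thin homogeneous slab and are not the paper's formulation of the phase function or geometry.

```python
# Toy illustration of the linear-system viewpoint: photon interactions form an
# absorbing Markov chain over layer states, and solving (I - Q^T) v = s sums
# the multiple-scattering series in a single linear solve.
import numpy as np

n_layers, omega0 = 10, 0.9              # layer states and single-scattering albedo
# Q[i, j]: probability that a photon last scattered in layer i scatters next in
# layer j (toy nearest-layer kernel, scaled so each row sums to omega0 < 1).
dist = np.abs(np.subtract.outer(np.arange(n_layers), np.arange(n_layers)))
Q = np.exp(-dist.astype(float))
Q *= omega0 / Q.sum(axis=1, keepdims=True)

source = np.exp(-np.arange(n_layers, dtype=float))   # first-scattering source
source /= source.sum()
# Expected number of scatterings in each layer, summed over all orders:
visits = np.linalg.solve(np.eye(n_layers) - Q.T, source)

# Toy escape weighting toward the top of the slab gives a diffuse reflection value.
escape_top = 0.5 * (1 - omega0) * np.exp(-0.3 * np.arange(n_layers))
print("diffuse reflection (toy):", visits @ escape_top)
```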

  14. Markov Chain Monte Carlo in the Analysis of Single-Molecule Experimental Data

    NASA Astrophysics Data System (ADS)

    Kou, S. C.; Xie, X. Sunney; Liu, Jun S.

    2003-11-01

    This article provides a Bayesian analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single DNA hairpin molecule. The DNA hairpin's conformational change is initially modeled as a two-state Markov chain, which is not observable and has to be indirectly inferred. The Brownian diffusion of the single molecule, in addition to the hidden Markov structure, further complicates the matter. We show that the analytical form of the likelihood function can be obtained in the simplest case and a Metropolis-Hastings algorithm can be designed to sample from the posterior distribution of the parameters of interest and to compute the desired estimates. To cope with the molecular diffusion process and the potentially oscillating energy barrier between the two states of the DNA hairpin, we introduce a data augmentation technique to handle both the Brownian diffusion and the hidden Ornstein-Uhlenbeck process associated with the fluctuating energy barrier, and design a more sophisticated Metropolis-type algorithm. Our method not only increases the estimation resolution severalfold but also proves to be successful for model discrimination.
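
    As a generic illustration of the Metropolis-Hastings machinery the analysis relies on (this is not the authors' data-augmentation sampler; the two-state dwell-time likelihood, priors, proposal scale and simulated data are assumptions), one can sample the posterior of the switching rates of a two-state molecule from simulated exponential dwell times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dwell times of a two-state (open/closed) molecule; "true" rates are assumed.
k_true = np.array([2.0, 0.5])
dwell = [rng.exponential(1.0 / k_true[s], size=200) for s in (0, 1)]

def log_post(log_k):
    """Log-posterior: exponential dwell-time likelihood plus a vague normal prior on log-rates."""
    k = np.exp(log_k)
    ll = sum(len(d) * np.log(k[s]) - k[s] * d.sum() for s, d in enumerate(dwell))
    return ll - 0.5 * np.sum(log_k ** 2 / 10.0)

# Random-walk Metropolis-Hastings on the log-rates.
chain, cur = [], np.zeros(2)
cur_lp = log_post(cur)
for _ in range(5000):
    prop = cur + rng.normal(scale=0.1, size=2)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:   # accept/reject step
        cur, cur_lp = prop, prop_lp
    chain.append(np.exp(cur))

chain = np.array(chain[1000:])                     # discard burn-in
print("posterior mean rates:", chain.mean(axis=0), "true:", k_true)
```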

  15. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems

    NASA Astrophysics Data System (ADS)

    Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming

    2018-06-01

    Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep the coarse-level corrections accurate. Accordingly, its fast convergence rate is well guaranteed, but a large proportion of the time is often consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be used to transform the probability equation of that level into a block linear system. We then propose a block-Jacobi relaxation that operates on the block system of each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective in aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
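
    A minimal sketch of a damped block-Jacobi sweep applied to the singular probability equation (I - P)x = 0 of a small Markov chain; the random chain, the two-block partition standing in for the aggregates, and the damping factor are assumptions, and this is not the paper's aggregation code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random column-stochastic matrix P (a small Markov chain / PageRank-like example).
n = 8
P = rng.random((n, n))
P /= P.sum(axis=0, keepdims=True)

A = np.eye(n) - P                              # stationary vector solves A x = 0, sum(x) = 1
blocks = [np.arange(0, 4), np.arange(4, 8)]    # assumed block partition (the "aggregates")

def block_jacobi_sweep(A, x, blocks, omega=0.9):
    """One damped block-Jacobi sweep for A x = 0, keeping x a probability vector."""
    x_new = x.copy()
    for idx in blocks:
        comp = np.setdiff1d(np.arange(len(x)), idx)
        A_bb = A[np.ix_(idx, idx)]
        # residual restricted to the block, with the off-block entries of x held fixed
        r = -A[np.ix_(idx, comp)] @ x[comp]
        x_new[idx] = (1 - omega) * x[idx] + omega * np.linalg.solve(A_bb, r)
    x_new = np.clip(x_new, 0, None)
    return x_new / x_new.sum()

x = np.full(n, 1.0 / n)
for _ in range(50):
    x = block_jacobi_sweep(A, x, blocks)
print("residual norm after 50 sweeps:", np.linalg.norm(A @ x))
```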

  16. Planning treatment of ischemic heart disease with partially observable Markov decision processes.

    PubMed

    Hauskrecht, M; Fraser, H

    2000-03-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.

  17. The exit-time problem for a Markov jump process

    NASA Astrophysics Data System (ADS)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  18. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.

  19. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  20. Sparse covariance estimation in heterogeneous samples*

    PubMed Central

    Rodríguez, Abel; Lenkoski, Alex; Dobra, Adrian

    2015-01-01

    Standard Gaussian graphical models implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where such an assumption is not satisfied, leading in turn to nonlinear relationships among variables. To address such situations we explore mixtures of Gaussian graphical models; in particular, we consider both infinite mixtures and infinite hidden Markov models where the emission distributions correspond to Gaussian graphical models. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. As an illustration, we study the trends in foreign exchange rate fluctuations in the pre-Euro era. PMID:26925189

  1. ASSIST user manual

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.; Boerschlein, David P.

    1995-01-01

    Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all the states and transitions in a complex system model can be devastatingly tedious and error prone. The Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST) computer program allows the user to describe the semi-Markov model in a high-level language. Instead of listing the individual model states, the user specifies the rules governing the behavior of the system, and these are used to generate the model automatically. A few statements in the abstract language can describe a very large, complex model. Because no assumptions are made about the system being modeled, ASSIST can be used to generate models describing the behavior of any system. The ASSIST program and its input language are described and illustrated by examples.

  2. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    PubMed

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011)PLEEE81539-375510.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011)JMBLAJ0303-681210.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
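
    The node-level recursion that MMCA-type approaches build on can be written down compactly. The sketch below iterates the basic discrete-time SIS recursion of Gómez et al. to a fixed point rather than the authors' effective-degree extension, and the Erdős-Rényi test network and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed test network: Erdős-Rényi adjacency matrix.
n, p_edge = 200, 0.05
A = (rng.random((n, n)) < p_edge).astype(float)
A = np.triu(A, 1)
A = A + A.T

beta, mu = 0.08, 0.2          # infection and recovery probabilities (assumed)
p = np.full(n, 0.1)           # initial infection probabilities

# Node-level microscopic Markov-chain recursion for discrete-time SIS:
#   q_i(t)   = prod_j (1 - beta * A_ij * p_j(t))   # prob. node i is not infected by any neighbour
#   p_i(t+1) = (1 - q_i)(1 - p_i) + (1 - mu) p_i + mu (1 - q_i) p_i
for _ in range(500):
    q = np.prod(1.0 - beta * A * p[None, :], axis=1)
    p = (1.0 - q) * (1.0 - p) + (1.0 - mu) * p + mu * (1.0 - q) * p

print("predicted stationary prevalence:", p.mean())
```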

  3. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
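
    To make the jump-time construction concrete, here is a hedged toy sketch: the next switching time is where the integrated intensity H(t) = ∫ λ(V(s)) ds first exceeds an independent exponential threshold, with V(t) following its ODE between jumps. The scalar voltage equation, the single two-state channel and the intensity function are assumptions; the paper's algorithm additionally fits the log-survival function with a piecewise linear approximation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy piecewise deterministic Markov process: a scalar "voltage" V(t) whose drift depends on a
# two-state channel, while the channel's switching intensity depends on V.  All functional
# forms and constants are illustrative assumptions.
def drift(V, channel):
    return (-V + 2.0) if channel == 1 else (-V - 1.0)

def intensity(V, channel):
    # voltage-dependent switching rate of the channel
    return 0.5 + 1.0 / (1.0 + np.exp(-V if channel == 0 else V))

dt, T = 1e-3, 50.0
V, channel, t = 0.0, 0, 0.0
H, threshold = 0.0, rng.exponential()   # H(t): integrated intensity, i.e. -log survival
jump_times = []

while t < T:
    V += drift(V, channel) * dt         # deterministic flow between jumps
    H += intensity(V, channel) * dt     # accumulate the integrated jump intensity
    if H >= threshold:                  # the exponential clock fires: the channel switches
        channel = 1 - channel
        jump_times.append(t)
        H, threshold = 0.0, rng.exponential()
    t += dt

print("number of channel switches:", len(jump_times))
if len(jump_times) > 1:
    print("mean dwell time:", np.mean(np.diff(jump_times)))
```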

  4. The Influence of Hydroxylation on Maintaining CpG Methylation Patterns: A Hidden Markov Model Approach.

    PubMed

    Giehr, Pascal; Kyriakopoulos, Charalampos; Ficz, Gabriella; Wolf, Verena; Walter, Jörn

    2016-05-01

    DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance.

  5. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
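
    As an illustration of the core surrogate idea (not the authors' collocation code; the one-dimensional "model", the polynomial degree and the sample sizes are made-up assumptions), one can fit coefficients of probabilists' Hermite polynomials in a standard normal input by least squares and then use the cheap polynomial in place of the model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(4)

# Hypothetical "expensive" model response as a function of a standard normal input xi.
def model(xi):
    return np.exp(0.5 * xi) + 0.3 * np.sin(2.0 * xi)

# Collocation points: samples of the standard normal random variable.
xi_train = rng.standard_normal(200)
y_train = model(xi_train)

# Fit a degree-6 expansion in probabilists' Hermite polynomials He_k(xi) by least squares.
deg = 6
Phi = He.hermevander(xi_train, deg)        # design matrix [He_0(xi), ..., He_6(xi)]
coef, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# The surrogate is now a cheap polynomial; compare it with the model on fresh samples.
xi_test = rng.standard_normal(10000)
y_surrogate = He.hermevander(xi_test, deg) @ coef
print("surrogate mean/std:", y_surrogate.mean(), y_surrogate.std())
print("model     mean/std:", model(xi_test).mean(), model(xi_test).std())
```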

  6. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications - one fact is undeniable - in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state) whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.

  7. Cluster-based adaptive power control protocol using Hidden Markov Model for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Vinutha, C. B.; Nalini, N.; Nagaraja, M.

    2017-06-01

    This paper presents strategies for an efficient and dynamic transmission power control technique to reduce packet drops, and hence the energy consumption of power-hungry sensor nodes operating in the highly non-linear channel conditions of Wireless Sensor Networks. We also aim to prolong network lifetime and improve scalability by designing a cluster-based network structure. Specifically, we consider a weight-based clustering approach wherein the Cluster Head (CH) is selected using a weight computed from the factors of distance, remaining residual battery power and received signal strength (RSS). Transmission power control schemes that adapt to dynamic channel conditions are then implemented using a Hidden Markov Model (HMM), whose probability transition matrix is formulated from the observed RSS measurements. Typically, the CH estimates the initial transmission power of its cluster members (CMs) from the RSS using the HMM and broadcasts this value to its CMs for initialising their power levels. Further, if the CH finds that the link quality and RSS of the CMs vary, it re-computes and optimises the transmission power levels of the nodes using the HMM to avoid packet loss due to noise interference. Our simulation results demonstrate that the technique efficiently controls the power levels of the sensing nodes and saves a significant amount of energy for networks of different sizes.

  8. Stochastic analysis and modeling of abnormally large waves

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Konstantin; Shamin, Roman; Yudin, Aleksandr

    2016-04-01

    In this work, the statistics of wave amplitude characteristics during freak-wave formation were estimated. The amplitude characteristics of freak waves were also modeled with a Markov model developed on the basis of in-situ and numerical experiments. Simulation using the Markov model showed close agreement between in-situ wave measurements [1], direct calculations of the Euler equations [2] and the stochastic modeling data. This work is supported by a grant of the Russian Foundation for Basic Research (RFBR) n°16-35-00526. 1. K. I. Kuznetsov, A. A. Kurkin, E. N. Pelinovsky and P. D. Kovalev, Features of Wind Waves at the Southeastern Coast of Sakhalin according to Bottom Pressure Measurements, Izvestiya, Atmospheric and Oceanic Physics, 2014, Vol. 50, No. 2, pp. 213-220. DOI: 10.1134/S0001433814020066. 2. R. V. Shamin, V. E. Zakharov, A. I. Dyachenko, How probability for freak wave formation can be found, The European Physical Journal - Special Topics, Volume 185, Number 1, pp. 113-124. DOI: 10.1140/epjst/e2010-01242-y. 3. E. N. Pelinovsky, K. I. Kuznetsov, J. Touboul, A. A. Kurkin, Bottom pressure caused by passage of a solitary wave within the strongly nonlinear Green-Naghdi model, Doklady Physics, April 2015, Volume 60, Issue 4, pp. 171-174. DOI: 10.1134/S1028335815040035

  9. Recursive recovery of Markov transition probabilities from boundary value data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patch, Sarah Kathyrn

    1994-04-01

    In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.

  10. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448

  11. On Markov parameters in system identification

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
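
    A small sketch of the basic relation from standard discrete-time theory (the state-space matrices are illustrative, and the observer/Kalman-filter extension discussed in the record is not shown): the Markov parameters are Y_0 = D and Y_k = C A^(k-1) B, and they coincide with the system's unit-pulse response.

```python
import numpy as np

# A small discrete-time state-space model (illustrative numbers).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def markov_parameters(A, B, C, D, n):
    """Return Y_0 = D and Y_k = C A^(k-1) B for k = 1..n-1."""
    params, Ak = [D], np.eye(A.shape[0])
    for _ in range(n - 1):
        params.append(C @ Ak @ B)
        Ak = Ak @ A
    return np.array(params).squeeze()

# The Markov parameters coincide with the unit-pulse response of the system.
n = 8
x, pulse_response = np.zeros((2, 1)), []
u = [1.0] + [0.0] * (n - 1)
for k in range(n):
    pulse_response.append((C @ x + D * u[k]).item())
    x = A @ x + B * u[k]

print("Markov parameters:", markov_parameters(A, B, C, D, n))
print("pulse response   :", np.array(pulse_response))
```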

  12. The exit-time problem for a Markov jump process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, N.; D'Elia, Marta; Lehoucq, Richard B.

    2014-12-15

    The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  13. RANDOM EVOLUTIONS, MARKOV CHAINS, AND SYSTEMS OF PARTIAL DIFFERENTIAL EQUATIONS

    PubMed Central

    Griego, R. J.; Hersh, R.

    1969-01-01

    Several authors have considered Markov processes defined by the motion of a particle on a fixed line with a random velocity [1, 6, 8, 10] or a random diffusivity [5, 12]. A “random evolution” is a natural but apparently new generalization of this notion. In this note we hope to show that this concept leads to simple and powerful applications of probabilistic tools to initial-value problems of both parabolic and hyperbolic type. We obtain existence theorems, representation theorems, and asymptotic formulas, both old and new. PMID:16578690

  14. ASSIST: User's manual

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1986-01-01

    Semi-Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. The ASSIST program allows the user to describe the semi-Markov model in a high-level language. Instead of specifying the individual states of the model, the user specifies the rules governing the behavior of the system and these are used by ASSIST to automatically generate the model. The ASSIST program is described and illustrated by examples.

  15. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.

  16. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.

  17. A Network Analysis of Countries’ Export Flows: Firm Grounds for the Building Blocks of the Economy

    PubMed Central

    Caldarelli, Guido; Cristelli, Matthieu; Gabrielli, Andrea; Pietronero, Luciano; Scala, Antonio; Tacchella, Andrea

    2012-01-01

    In this paper we analyze the bipartite network of countries and products from UN data on country production. We define the country-country and product-product projected networks and introduce a novel method of filtering information based on elements’ similarity. As a result we find that country clustering reveals unexpected socio-geographic links among the most competing countries. On the same footing, the product clustering can be efficiently used for a bottom-up classification of produced goods. Furthermore we mathematically reformulate the “reflections method” introduced by Hidalgo and Hausmann as a fixpoint problem; such a formulation highlights some conceptual weaknesses of the approach. To overcome this issue, we introduce an alternative methodology (based on biased Markov chains) that allows countries to be ranked in a conceptually consistent way. Our analysis uncovers a strong non-linear interaction between the diversification of a country and the ubiquity of its products, thus suggesting the possible need of moving towards more efficient and direct non-linear fixpoint algorithms to rank countries and products in the global market. PMID:23094044
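
    A hedged sketch of ranking via a biased Markov chain on a bipartite country-product network. The toy export matrix and the particular bias (weighting products by inverse ubiquity) are assumptions made for illustration and are not the authors' exact construction; the point is only that the ranking is read off the stationary distribution of a country-to-country chain obtained by composing the two bipartite steps.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy binary country-product export matrix (1 if a country exports a product); values assumed.
M = (rng.random((8, 15)) < 0.35).astype(float)
M = M[M.sum(axis=1) > 0][:, M.sum(axis=0) > 0]   # drop empty rows/columns

# Biased two-step random walk country -> product -> country.  The bias below (rarer products
# carry more weight) is an illustrative assumption, not the specific bias used by the authors.
ubiquity = M.sum(axis=0)                          # number of countries exporting each product
W = M / ubiquity
P_cp = W / W.sum(axis=1, keepdims=True)           # country -> product step (biased)
P_pc = M.T / M.T.sum(axis=1, keepdims=True)       # product -> country step
T = P_cp @ P_pc                                   # composite country -> country chain

# Rank countries by the stationary distribution of T (power iteration).
r = np.full(T.shape[0], 1.0 / T.shape[0])
for _ in range(1000):
    r = r @ T
print("country ranking (best first):", np.argsort(-r))
```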

  18. Structured Spatial Modeling and Mapping of Domestic Violence Against Women of Reproductive Age in Rwanda.

    PubMed

    Habyarimana, Faustin; Zewotir, Temesgen; Ramroop, Shaun

    2018-03-01

    The main objective of this study was to assess the risk factors and spatial correlates of domestic violence against women of reproductive age in Rwanda. A structured spatial approach was used to account for the nonlinear nature of some covariates and the spatial variability on domestic violence. The nonlinear effect was modeled through second-order random walk, and the structured spatial effect was modeled through Gaussian Markov Random Fields specified as an intrinsic conditional autoregressive model. The data from the Rwanda Demographic and Health Survey 2014/2015 were used as an application. The findings of this study revealed that the risk factors of domestic violence against women are the wealth quintile of the household, the size of the household, the husband or partner's age, the husband or partner's level of education, ownership of the house, polygamy, the alcohol consumption status of the husband or partner, the woman's perception of wife-beating attitude, and the use of contraceptive methods. The study also highlighted the significant spatial variation of domestic violence against women at district level.

  19. Pitch angle scattering of relativistic electrons from stationary magnetic waves: Continuous Markov process and quasilinear theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemons, Don S.

    2012-01-15

    We develop a Markov process theory of charged particle scattering from stationary, transverse, magnetic waves. We examine approximations that lead to quasilinear theory, in particular the resonant diffusion approximation. We find that, when appropriate, the resonant diffusion approximation simplifies the result of the weak turbulence approximation without significantly further restricting the regime of applicability. We also explore a theory generated by expanding drift and diffusion rates in terms of a presumed small correlation time. This small correlation time expansion leads to results valid for relatively small pitch angle and large wave energy density - a regime that may govern pitch angle scattering of high-energy electrons into the geomagnetic loss cone.

  20. Resolvent-Techniques for Multiple Exercise Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de; Lempa, Jukka, E-mail: jukka.lempa@hioa.no

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is using the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  1. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  2. Generalized species sampling priors with latent Beta reinforcements

    PubMed Central

    Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele

    2014-01-01

    Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet Process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, unlike existing work. We then propose the use of such a process as the prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov Chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet Process mixtures and Hidden Markov Models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462

  3. Zipf exponent of trajectory distribution in the hidden Markov model

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu

    2014-03-01

    This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.

  4. Noise can speed convergence in Markov chains.

    PubMed

    Franzke, Brandon; Kosko, Bart

    2011-10-01

    A new theorem shows that noise can speed convergence to equilibrium in discrete finite-state Markov chains. The noise applies to the state density and helps the Markov chain explore improbable regions of the state space. The theorem ensures that a stochastic-resonance noise benefit exists for states that obey a vector-norm inequality. Such noise leads to faster convergence because the noise reduces the norm components. A corollary shows that a noise benefit still occurs if the system states obey an alternate norm inequality. This leads to a noise-benefit algorithm that requires knowledge of the steady state. An alternative blind algorithm uses only past state information to achieve a weaker noise benefit. Simulations illustrate the predicted noise benefits in three well-known Markov models. The first model is a two-parameter Ehrenfest diffusion model that shows how noise benefits can occur in the class of birth-death processes. The second model is a Wright-Fisher model of genotype drift in population genetics. The third model is a chemical reaction network of zeolite crystallization. A fourth simulation shows a convergence rate increase of 64% for states that satisfy the theorem and an increase of 53% for states that satisfy the corollary. A final simulation shows that even suboptimal noise can speed convergence if the noise applies over successive time cycles. Noise benefits tend to be sharpest in Markov models that do not converge quickly and that do not have strong absorbing states.

  5. The algebra of the general Markov model on phylogenetic trees and networks.

    PubMed

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  6. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the probabilities obtained with the absorbing set are computed from the differential equations and verified. Through forward inference, the reliability of the control unit is determined for the different repair and maintenance modes, and weak nodes in the control unit are identified.
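
    To make the Markov part concrete, a hedged sketch of a three-state element (good, degraded, failed) with assumed degradation and repair rates, not the paper's DBN: integrating the Kolmogorov forward equations dp/dt = pQ of the generator, with the failed state kept absorbing, gives the state probabilities and hence the reliability with and without repair.

```python
import numpy as np

# Three-state element: 0 = good, 1 = degraded, 2 = failed (absorbing).
# lam1, lam2: degradation/failure rates; mu: repair rate (mu = 0 means no repair).
# All rate values are illustrative assumptions.
def generator(lam1=0.02, lam2=0.05, mu=0.0):
    return np.array([[-lam1,          lam1,  0.0],
                     [   mu, -(lam2 + mu), lam2],
                     [  0.0,           0.0,  0.0]])   # failed state is absorbing

def state_probabilities(Q, T=100.0, dt=0.01):
    """Euler integration of the Kolmogorov forward equations dp/dt = p Q."""
    p = np.array([1.0, 0.0, 0.0])
    for _ in range(int(T / dt)):
        p = p + dt * (p @ Q)
    return p

for mu in (0.0, 0.1):
    p = state_probabilities(generator(mu=mu))
    print(f"mu = {mu:.1f}: P(good, degraded, failed) at t = 100 ->",
          np.round(p, 3), " reliability:", round(1.0 - p[2], 3))
```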

  7. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates for inverse Bayesian modeling to reduce repetitive evaluations of the original model. In this way, computational cost is expected to be saved. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate the approximation error for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov Chain Monte Carlo, MCMC) may lead to biased estimates when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner. However, the computational cost is still high since a relatively large number of original model simulations are required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate because its approximation error is convenient to evaluate. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.

  8. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when dealing with maneuvering models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843

  9. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-06-13

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when dealing with maneuvering models compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).

  10. Study of selected phenotype switching strategies in time varying environment

    NASA Astrophysics Data System (ADS)

    Horvath, Denis; Brutovsky, Branislav

    2016-03-01

    Population heterogeneity plays an important role in many research problems, as well as in real-world ones. Population heterogeneity relates to the ability of a population to cope with an environmental change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as is exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causes of population heterogeneity are therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is equipped with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while the deterministic patterns become relevant as the environmental variations are less frequent. Statistical characterization of the steady state regimes of the populations is done using the Hellinger and Kullback-Leibler functional distances and the Hamming distance.

  11. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    NASA Astrophysics Data System (ADS)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, given the high number of random dimensions in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or non-smooth relation between the input parameters and the output responses, we introduce an additional transform to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.

  12. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  13. Pavement maintenance optimization model using Markov Decision Processes

    NASA Astrophysics Data System (ADS)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Several characteristics of the MDP developed in this paper distinguish it from similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. The paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model and recommends steps for computing the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
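
    A hedged sketch of the average-cost, dual linear-programming formulation the abstract refers to. The three-state, two-action maintenance MDP and all costs and transition probabilities are invented for illustration, and budget constraints are omitted; the policy is read off the optimal state-action frequencies.

```python
import numpy as np
from scipy.optimize import linprog

# Toy pavement-maintenance MDP: 3 condition states (good, fair, poor), 2 actions
# (0 = do nothing, 1 = maintain).  All numbers below are illustrative assumptions.
P = np.array([
    [[0.70, 0.25, 0.05],   # do nothing, from good / fair / poor
     [0.00, 0.60, 0.40],
     [0.00, 0.00, 1.00]],
    [[0.95, 0.05, 0.00],   # maintain
     [0.70, 0.30, 0.00],
     [0.60, 0.30, 0.10]],
])
cost = np.array([
    [0.0, 2.0, 10.0],      # user cost per period in each state, doing nothing
    [3.0, 5.0, 12.0],      # user cost plus maintenance cost
])
nA, nS = cost.shape

# Average-cost MDP as a linear programme over state-action frequencies x(s, a):
#   minimise sum cost(s,a) x(s,a)
#   s.t. sum_a x(s',a) - sum_{s,a} P(s'|s,a) x(s,a) = 0 for all s',  sum x = 1,  x >= 0.
c = cost.T.reshape(-1)                    # column order: (s0,a0), (s0,a1), (s1,a0), ...
A_eq = np.zeros((nS + 1, nS * nA))
for s in range(nS):
    for a in range(nA):
        col = s * nA + a
        A_eq[s, col] += 1.0               # outflow from state s
        for s2 in range(nS):
            A_eq[s2, col] -= P[a, s, s2]  # inflow into state s2
A_eq[nS, :] = 1.0                         # normalisation
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA), method="highs")
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)                 # states with zero frequency default to action 0
print("long-run average cost:", res.fun)
print("optimal action per state (0 = do nothing, 1 = maintain):", policy)
```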

  14. Asymptotic inference in system identification for the atom maser.

    PubMed

    Catana, Catalin; van Horssen, Merlijn; Guta, Madalin

    2012-11-28

    System identification is closely related to control theory and plays an increasing role in quantum engineering. In the quantum set-up, system identification is usually equated to process tomography, i.e. estimating a channel by probing it repeatedly with different input states. However, for quantum dynamical systems such as quantum Markov processes, it is more natural to consider the estimation based on continuous measurements of the output, with a given input that may be stationary. We address this problem using asymptotic statistics tools, for the specific example of estimating the Rabi frequency of an atom maser. We compute the Fisher information of different measurement processes as well as the quantum Fisher information of the atom maser, and establish the local asymptotic normality of these statistical models. The statistical notions can be expressed in terms of spectral properties of certain deformed Markov generators, and the connection to large deviations is briefly discussed.

  15. Measuring the impact of final demand on global production system based on Markov process

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Guan, Jun; Wu, Shan

    2018-07-01

    The input-output table is comprehensive and detailed in describing national economic systems, consisting of supply and demand information among various industrial sectors. Complex network theory, a framework for measuring the structure of complex systems, can depict the structural properties of social and economic systems and reveal the complicated relationships between their inner hierarchies and their external macroeconomic functions. This paper measures the globalization degree of industrial sectors on the global value chain. First, it constructs inter-country input-output network models to reproduce the topological structure of the global economic system. Second, it regards the propagation of intermediate goods on the global value chain as a Markov process and introduces counting first-passage betweenness to quantify the added processing amount when global final demand stimulates this production system. Third, it analyzes the features of globalization at both the global and the country-sector level.

  16. Sorting processes with energy-constrained comparisons*

    NASA Astrophysics Data System (ADS)

    Geissmann, Barbara; Penna, Paolo

    2018-05-01

    We study very simple sorting algorithms based on a probabilistic comparator model. In this model, errors in comparing two elements are due to (1) the energy or effort put into the comparison and (2) the difference between the compared elements. Such algorithms repeatedly compare and swap pairs of randomly chosen elements, and they correspond to natural Markovian processes. The study of these Markov chains reveals an interesting phenomenon. Namely, in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons: in the long run, the former algorithm produces sequences that are "better sorted". The analysis of the underlying Markov chain poses interesting questions as the latter algorithm yields a nonreversible chain, and therefore its stationary distribution seems difficult to calculate explicitly. We nevertheless provide bounds on the stationary distributions and on the mixing time of these processes under several restrictions.
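
    A hedged simulation sketch in the spirit of the abstract (the logistic error model, in which the error probability decreases with both the energy and the difference of the compared elements, as well as the sequence length and parameter values, are assumptions, not the authors' exact comparator): repeatedly compare randomly chosen adjacent pairs with a noisy comparator, swap when the pair is judged out of order, and track the number of inversions.

```python
import numpy as np

rng = np.random.default_rng(6)

def noisy_less(a, b, energy):
    """Compare a < b with an error probability that decays with the energy spent
    and with the difference |a - b| (an assumed logistic error model)."""
    p_error = 1.0 / (1.0 + np.exp(energy * abs(a - b)))
    truth = a < b
    return truth if rng.random() > p_error else not truth

def inversions(x):
    """Count out-of-order pairs; 0 means perfectly sorted."""
    return sum(x[i] > x[j] for i in range(len(x)) for j in range(i + 1, len(x)))

n, steps, energy = 20, 20000, 1.0
x = list(rng.permutation(n))
history = []
for t in range(steps):
    i = rng.integers(n - 1)                 # pick an adjacent pair (i, i+1)
    if not noisy_less(x[i], x[i + 1], energy):
        x[i], x[i + 1] = x[i + 1], x[i]     # swap if (noisily) judged out of order
    if t % 1000 == 0:
        history.append(inversions(x))

print("inversions over time:", history)
```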

  17. Renormalization group theory for percolation in time-varying networks.

    PubMed

    Karschau, Jens; Zimmerling, Marco; Friedrich, Benjamin M

    2018-05-22

    Motivated by multi-hop communication in unreliable wireless networks, we present a percolation theory for time-varying networks. We develop a renormalization group theory for a prototypical network on a regular grid, where individual links switch stochastically between active and inactive states. The question whether a given source node can communicate with a destination node along paths of active links is equivalent to a percolation problem. Our theory maps the temporal existence of multi-hop paths on an effective two-state Markov process. We show analytically how this Markov process converges towards a memoryless Bernoulli process as the hop distance between source and destination node increases. Our work extends classical percolation theory to the dynamic case and elucidates temporal correlations of message losses. Quantification of temporal correlations has implications for the design of wireless communication and control protocols, e.g. in cyber-physical systems such as self-organized swarms of drones or smart traffic networks.
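
    A quick Monte Carlo check of the memory-loss effect described above can be written in a few lines: each link flips between active and inactive as an independent two-state Markov chain, a multi-hop path is available only when all of its links are active, and the lag-1 autocorrelation of path availability shrinks as the hop count grows. All switching probabilities below are illustrative choices, not values from the paper.

      import numpy as np

      def path_series(n_links, p_stay_up=0.9, p_recover=0.3, T=20_000, seed=0):
          rng = np.random.default_rng(seed)
          state = np.ones(n_links, dtype=bool)      # all links start active
          out = np.empty(T, dtype=bool)
          for t in range(T):
              u = rng.random(n_links)
              # Active links stay active w.p. p_stay_up; inactive ones recover w.p. p_recover.
              state = np.where(state, u < p_stay_up, u < p_recover)
              out[t] = state.all()                  # path usable only if every hop is active
          return out

      def lag1_autocorr(x):
          x = x.astype(float)
          return np.corrcoef(x[:-1], x[1:])[0, 1]

      for hops in (1, 2, 4, 8):
          print(f"{hops} hop(s): lag-1 autocorrelation = {lag1_autocorr(path_series(hops)):.3f}")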

  18. Analyzing Single-Molecule Protein Transportation Experiments via Hierarchical Hidden Markov Models

    PubMed Central

    Chen, Yang; Shen, Kuang

    2017-01-01

    To maintain proper cellular functions, over 50% of proteins encoded in the genome need to be transported to cellular membranes. The molecular mechanism behind such a process, often referred to as protein targeting, is not well understood. Single-molecule experiments are designed to unveil the detailed mechanisms and reveal the functions of different molecular machineries involved in the process. The experimental data consist of hundreds of stochastic time traces from the fluorescence recordings of the experimental system. We introduce a Bayesian hierarchical model on top of hidden Markov models (HMMs) to analyze these data and use the statistical results to answer the biological questions. In addition to resolving the biological puzzles and delineating the regulating roles of different molecular complexes, our statistical results enable us to propose a more detailed mechanism for the late stages of the protein targeting process. PMID:28943680

  19. Computation of entropy and Lyapunov exponent by a shift transform.

    PubMed

    Matsuoka, Chihiro; Hiraide, Koichi

    2015-10-01

    We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.

  20. Computation of entropy and Lyapunov exponent by a shift transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuoka, Chihiro, E-mail: matsuoka.chihiro.mm@ehime-u.ac.jp; Hiraide, Koichi

    2015-10-15

    We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.

  1. tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.

    PubMed

    M Sultan, Mohammad; Pande, Vijay S

    2017-06-13

    Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selected collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in the Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for Metadynamics. We show that linear and nonlinear tICA-Metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes, and can drive transitions along those modes even when no such transitions are observed in unbiased simulations.

  2. Simple stochastic model for El Niño with westerly wind bursts

    PubMed Central

    Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.

    2016-01-01

    Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
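
    The core of the parameterization described above, a diffusion whose noise amplitude is modulated by a two-state Markov jump process, can be sketched in a few lines of Python. The damping, switching rates and noise levels below are illustrative toy values, not the calibration used in the paper; the point is the intermittency generated by the hidden switching state.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, T = 0.01, 50_000
      leave_rate = {0: 0.05, 1: 0.20}   # rate of leaving the quiet (0) / active (1) state
      sigma = {0: 0.1, 1: 1.0}          # wind-burst noise amplitude in each state
      damping = 0.5

      a, state = 0.0, 0
      traj = np.empty(T)
      for t in range(T):
          if rng.random() < leave_rate[state] * dt:    # Markov switching of the hidden state
              state = 1 - state
          # Damped amplitude driven by state-dependent white noise (Euler-Maruyama step).
          a += -damping * a * dt + sigma[state] * np.sqrt(dt) * rng.standard_normal()
          traj[t] = a

      excess_kurtosis = ((traj - traj.mean()) ** 4).mean() / traj.var() ** 2 - 3
      print(f"std = {traj.std():.3f}, excess kurtosis = {excess_kurtosis:.2f} (intermittency if > 0)")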

  3. Transition-Independent Decentralized Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Becker, Raphen; Silberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)

    2003-01-01

    There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.

  4. AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT

    NASA Astrophysics Data System (ADS)

    Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi

    In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life cycle cost risks, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. The optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans. A methodology is also presented to revise the optimal rehabilitation/maintenance plans based upon monitoring data using Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.

  5. Accelerated decomposition techniques for large discounted Markov decision processes

    NASA Astrophysics Data System (ADS)

    Larach, Abdelhadi; Chafik, S.; Daoui, C.

    2017-12-01

    Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels to which they belong. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions in discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
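
    The decomposition step can be made concrete with a short sketch: Tarjan's algorithm finds the strongly connected components of the state-transition graph, and a pass over the condensation DAG assigns each SCC the level at which its restricted MDP would be solved. The graph below is a toy example, and the level convention (sink SCCs at level 1) is one plausible choice rather than the paper's exact definition.

      from collections import defaultdict

      def tarjan_scc(graph):
          """Strongly connected components of {node: [successors]} via Tarjan's algorithm."""
          index, low, on_stack, stack, sccs = {}, {}, set(), [], []
          counter = [0]

          def strongconnect(v):
              index[v] = low[v] = counter[0]
              counter[0] += 1
              stack.append(v)
              on_stack.add(v)
              for w in graph.get(v, ()):
                  if w not in index:
                      strongconnect(w)
                      low[v] = min(low[v], low[w])
                  elif w in on_stack:
                      low[v] = min(low[v], index[w])
              if low[v] == index[v]:               # v is the root of an SCC
                  comp = []
                  while True:
                      w = stack.pop()
                      on_stack.discard(w)
                      comp.append(w)
                      if w == v:
                          break
                  sccs.append(comp)

          for v in list(graph):
              if v not in index:
                  strongconnect(v)
          return sccs

      def scc_levels(graph, sccs):
          """Level of each SCC in the condensation DAG (sink SCCs get level 1)."""
          comp_of = {v: i for i, comp in enumerate(sccs) for v in comp}
          succs = defaultdict(set)
          for v, ws in graph.items():
              for w in ws:
                  if comp_of[v] != comp_of[w]:
                      succs[comp_of[v]].add(comp_of[w])
          level = {}

          def depth(c):
              if c not in level:
                  level[c] = 1 + max((depth(d) for d in succs[c]), default=0)
              return level[c]

          return {tuple(sccs[c]): depth(c) for c in range(len(sccs))}

      # Edges s -> s' reachable under some action of a small toy MDP.
      graph = {0: [1], 1: [0, 2], 2: [3], 3: [2, 4], 4: [4]}
      sccs = tarjan_scc(graph)
      print("SCCs:", sccs)
      print("levels:", scc_levels(graph, sccs))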

  6. Bayesian experimental design for models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; Pettitt, Anthony N

    2013-12-01

    In this paper we present a methodology for designing experiments for efficiently estimating the parameters of models with computationally intractable likelihoods. The approach combines a commonly used methodology for robust experimental design, based on Markov chain Monte Carlo sampling, with approximate Bayesian computation (ABC) to ensure that no likelihood evaluations are required. The utility function considered for precise parameter estimation is based upon the precision of the ABC posterior distribution, which we form efficiently via the ABC rejection algorithm based on pre-computed model simulations. Our focus is on stochastic models and, in particular, we investigate the methodology for Markov process models of epidemics and macroparasite population evolution. The macroparasite example involves a multivariate process and we assess the loss of information from not observing all variables. © 2013, The International Biometric Society.
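
    The ABC rejection step described above can be illustrated with a stochastic SIS epidemic, a simple Markov jump process with an intractable likelihood: parameter values are drawn from the prior, the epidemic is simulated with the Gillespie algorithm, and draws whose simulated summary lies close to the observed one are retained. The model, prior, summary statistic and tolerance are all illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_sis(beta, gamma=1.0, N=100, I0=5, t_end=5.0):
          """Gillespie simulation of an SIS epidemic; returns the infected count at t_end."""
          t, I = 0.0, I0
          while t < t_end and 0 < I < N:
              rate_inf = beta * I * (N - I) / N
              rate_rec = gamma * I
              total = rate_inf + rate_rec
              t += rng.exponential(1.0 / total)
              I += 1 if rng.random() < rate_inf / total else -1
          return I

      beta_true = 2.0
      observed = simulate_sis(beta_true)            # pretend this is the experimental summary

      accepted, eps, n_draws = [], 5, 5_000
      for _ in range(n_draws):
          beta = rng.uniform(0.0, 5.0)              # prior on the infection rate
          if abs(simulate_sis(beta) - observed) <= eps:
              accepted.append(beta)

      print(f"observed summary = {observed}, accepted {len(accepted)} of {n_draws} draws")
      if accepted:
          print("ABC posterior mean of beta:", round(float(np.mean(accepted)), 2))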

  7. Markov stochasticity coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  8. ModFossa: A library for modeling ion channels using Python.

    PubMed

    Ferneyhough, Gareth B; Thibealut, Corey M; Dascalu, Sergiu M; Harris, Frederick C

    2016-06-01

    The creation and simulation of ion channel models using continuous-time Markov processes is a powerful and well-used tool in the field of electrophysiology and ion channel research. While several software packages exist for the purpose of ion channel modeling, most are GUI based, and none are available as a Python library. In an attempt to provide an easy-to-use, yet powerful Markov model-based ion channel simulator, we have developed ModFossa, a Python library supporting easy model creation and stimulus definition, complete with a fast numerical solver, and attractive vector graphics plotting.

  9. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
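
    For comparison, the same mean first-passage times can be obtained for a small chain by a direct linear solve on the non-target states; the graph-transformation approach computes these quantities by eliminating states rather than solving a linear system. The three-state transition matrix below is an arbitrary illustrative example.

      import numpy as np

      P = np.array([[0.5, 0.4, 0.1],
                    [0.3, 0.5, 0.2],
                    [0.1, 0.3, 0.6]])               # row-stochastic transition matrix

      target = 2                                    # state (or group) we want to reach
      rest = [s for s in range(P.shape[0]) if s != target]
      Q = P[np.ix_(rest, rest)]                     # transitions among non-target states

      # Mean first-passage times satisfy t = 1 + Q t, i.e. (I - Q) t = 1.
      t = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
      for s, ts in zip(rest, t):
          print(f"mean first-passage time {s} -> {target}: {ts:.3f}")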

  10. Dynamic Noise and its Role in Understanding Epidemiological Processes

    NASA Astrophysics Data System (ADS)

    Stollenwerk, Nico; Aguiar, Maíra

    2010-09-01

    We investigate the role of dynamic noise in understanding epidemiological systems, such as influenza or dengue fever, by deriving stochastic ordinary differential equations from Markov processes for discrete populations. This approach allows for an easy analysis of dynamical noise transitions between co-existing attractors.

  11. Analyzing a stochastic time series obeying a second-order differential equation.

    PubMed

    Lehle, B; Peinke, J

    2015-06-01

    The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Also processes that obey a stochastically forced second-order differential equation can be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we will analyze these errors and we will propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed to a given time series.

  12. Using Bayesian Nonparametric Hidden Semi-Markov Models to Disentangle Affect Processes during Marital Interaction

    PubMed Central

    Griffin, William A.; Li, Xun

    2016-01-01

    Sequential affect dynamics generated during the interaction of intimate dyads, such as married couples, are associated with a cascade of effects—some good and some bad—on each partner, close family members, and other social contacts. Although the effects are well documented, the probabilistic structures associated with micro-social processes connected to the varied outcomes remain enigmatic. Using extant data we developed a method of classifying and subsequently generating couple dynamics using a Hierarchical Dirichlet Process Hidden semi-Markov Model (HDP-HSMM). Our findings indicate that several key aspects of existing models of marital interaction are inadequate: affect state emissions and their durations, along with the expected variability differences between distressed and nondistressed couples, are present but highly nuanced; and most surprisingly, heterogeneity among highly satisfied couples necessitates that they be divided into subgroups. We review how this unsupervised learning technique generates plausible dyadic sequences that are sensitive to relationship quality and provides a natural mechanism for computational models of behavioral and affective micro-social processes. PMID:27187319

  13. Markov chain decision model for urinary incontinence procedures.

    PubMed

    Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha

    2017-03-13

    Purpose: Urinary incontinence (UI) is a common chronic health condition and a problem particularly among elderly women that negatively impacts quality of life. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training and numerous surgical procedures, including the Burch colposuspension and the Sling, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach: This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality-adjusted life years (QALY) and cost per QALY. Treeage Pro Healthcare software was employed in the Markov decision analysis. Findings: Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than the value at which both procedures are equally effective is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value: This paper demonstrates the efficacy of a Markov chain decision modeling approach to the comparative effectiveness analysis of available treatments for patients with UI, an important public health issue that is widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
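
    A stylized Markov cohort sketch of this kind of cost-utility comparison is shown below: each strategy has its own transition matrix over health states, and discounted costs and QALYs are accumulated cycle by cycle. Every probability, cost and utility in the sketch is an invented placeholder, not the trial data or decision model used in the paper.

      import numpy as np

      states = ["continent", "persistent incontinence", "dead"]
      utility = np.array([0.95, 0.70, 0.0])        # QALY weight per state per cycle (hypothetical)

      def run_cohort(P, upfront_cost, annual_cost, cycles=20, disc=0.03):
          dist = np.array([1.0, 0.0, 0.0])         # whole cohort starts post-surgery, continent
          cost, qaly = float(upfront_cost), 0.0
          for k in range(cycles):
              dist = dist @ P                       # one annual Markov cycle
              df = 1.0 / (1.0 + disc) ** (k + 1)    # discount factor
              qaly += df * float(dist @ utility)
              cost += df * annual_cost * (1.0 - dist[2])   # follow-up cost for survivors
          return cost, qaly

      P_sling = np.array([[0.93, 0.05, 0.02], [0.10, 0.88, 0.02], [0.0, 0.0, 1.0]])
      P_burch = np.array([[0.90, 0.08, 0.02], [0.08, 0.90, 0.02], [0.0, 0.0, 1.0]])

      for name, P, c0 in [("Sling", P_sling, 9000.0), ("Burch", P_burch, 11000.0)]:
          cost, qaly = run_cohort(P, c0, annual_cost=300.0)
          print(f"{name}: cost = {cost:.0f}, QALYs = {qaly:.2f}, cost/QALY = {cost / qaly:.0f}")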

  14. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    PubMed

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.
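
    The conjugate calculation that makes the partition comparison analytical can be sketched directly: with independent Dirichlet priors on the rows of a first-order transition matrix over the DNA alphabet, the marginal likelihood of a cluster is a ratio of Dirichlet normalizing constants evaluated at the transition counts. The sequences and prior strength below are toy values, and the initial-state term is omitted for brevity.

      import numpy as np
      from scipy.special import gammaln

      ALPHABET = "ACGT"
      IDX = {c: i for i, c in enumerate(ALPHABET)}

      def transition_counts(seqs):
          n = np.zeros((4, 4))
          for s in seqs:
              for a, b in zip(s, s[1:]):
                  n[IDX[a], IDX[b]] += 1
          return n

      def log_marginal(seqs, alpha=1.0):
          """log p(seqs) for a first-order chain with Dirichlet(alpha) rows (initial state ignored)."""
          n = transition_counts(seqs)
          a = np.full((4, 4), alpha)
          return float(np.sum(gammaln(a + n)) - np.sum(gammaln(a))
                       + np.sum(gammaln(a.sum(axis=1))) - np.sum(gammaln((a + n).sum(axis=1))))

      cluster1 = ["ACGTACGTAC", "ACGTTACGTA"]      # toy reads from one "organism"
      cluster2 = ["GGGGCCCCGG", "GGCCGGCCGG"]      # toy reads from another
      separate = log_marginal(cluster1) + log_marginal(cluster2)
      merged = log_marginal(cluster1 + cluster2)
      print("log Bayes factor, separate vs merged clusters:", round(separate - merged, 2))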

  15. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
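
    The one-dimensional core of the Gaussianization idea can be demonstrated with a Box-Cox fit on a skewed Monte Carlo sample; after the transformation, low-order moments describe the density well. The sample below is synthetic, and the paper's multivariate generalizations and evidence computation are beyond this sketch.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      sample = rng.gamma(shape=2.0, scale=1.5, size=5000)   # skewed stand-in for a posterior sample

      transformed, lam = stats.boxcox(sample)               # maximum-likelihood Box-Cox parameter
      print(f"Box-Cox lambda = {lam:.3f}")
      print(f"skewness before = {stats.skew(sample):.3f}, after = {stats.skew(transformed):.3f}")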

  16. Sequence2Vec: a novel embedding approach for modeling transcription factor binding affinity landscape.

    PubMed

    Dai, Hanjun; Umarov, Ramzan; Kuwahara, Hiroyuki; Li, Yu; Song, Le; Gao, Xin

    2017-11-15

    An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these hidden Markov models into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA datasets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods. Our program is freely available at https://github.com/ramzan1990/sequence2vec. Contact: xin.gao@kaust.edu.sa or lsong@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  17. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.

  18. A systematic review of Markov models evaluating multicomponent disease management programs in diabetes.

    PubMed

    Kirsch, Florian

    2015-01-01

    Diabetes is the most expensive chronic disease; therefore, disease management programs (DMPs) were introduced. The aim of this review is to determine whether Markov models are adequate to evaluate the cost-effectiveness of complex interventions such as DMPs. Additionally, the quality of the models was evaluated using the Philips and Caro quality appraisals. The five reviewed models incorporated the DMP into the model differently: two models integrated effectiveness rates derived from one clinical trial/meta-analysis and three models combined interventions from different sources into a DMP. The results range from cost savings and a QALY gain to costs of US$85,087 per QALY. Spearman's rank coefficient indicates no correlation between the quality appraisals. With restrictions on the data selection process, Markov models are adequate to determine the cost-effectiveness of DMPs; however, to allow prioritization of medical services, more flexibility in the models is necessary to enable the evaluation of single additional interventions.

  19. Monitoring volcano activity through Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.

    2013-12-01

    During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, Etna volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since the RMS time series behavior is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using Hidden Markov Models (HMMs). HMMs are a powerful tool for modeling time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
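
    The SAX step that turns the continuous RMS record into HMM emissions can be sketched as follows: z-normalize the series, average it over fixed segments (PAA), and map each segment to a letter using equiprobable Gaussian breakpoints. The synthetic RMS trace and alphabet size below are illustrative; the resulting symbol stream is what a hidden Markov model would then decode into volcano states.

      import numpy as np
      from scipy.stats import norm

      def sax(series, word_len=20, alphabet=4):
          """Symbolic Aggregate approXimation of a 1-D series into `word_len` letters."""
          x = (series - series.mean()) / series.std()
          paa = x[: len(x) // word_len * word_len].reshape(word_len, -1).mean(axis=1)
          breakpoints = norm.ppf(np.linspace(0.0, 1.0, alphabet + 1)[1:-1])   # equiprobable bins
          letters = "abcdefghij"[:alphabet]
          return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)

      rng = np.random.default_rng(5)
      rms = np.concatenate([rng.normal(1.0, 0.1, 300),    # quiet
                            rng.normal(3.0, 0.5, 100),    # pre-fountain ramp-up
                            rng.normal(8.0, 1.0, 50),     # lava fountain
                            rng.normal(2.0, 0.3, 150)])   # post-fountain decay
      print(sax(rms))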

  20. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the probabilities obtained with the absorbing set are computed from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under different kinds of modes. Finally, weak nodes in the control unit are identified. PMID:29765629

  1. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088

  2. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this approach can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. The framework can thus capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.

  3. Equivalent Markov-Renewal Processes.

    DTIC Science & Technology

    1979-12-01

    By the Perron-Frobenius theorem we must have y = w. Thus π is the only initial distribution that yields a renewal process. Example 2.4.2. Burke's... Perron-Frobenius Theorem [3] there is a unique largest eigenvalue of Q(·) which is positive, and that eigenvalue has an associated left and right

  4. An Application of Markov Chains and a Monte-Carlo Simulation to Decision-Making Behavior of an Educational Administrator

    ERIC Educational Resources Information Center

    Yoda, Koji

    1973-01-01

    Develops models to systematically forecast the tendency of an educational administrator in charge of personnel selection processes to shift from one decision strategy to another under generally stable environmental conditions. Urges further research on these processes by educational planners. (JF)

  5. Impact of meteorological factors on the spatiotemporal patterns of dengue fever incidence.

    PubMed

    Chien, Lung-Chang; Yu, Hwa-Lung

    2014-12-01

    Dengue fever is one of the most widespread vector-borne diseases and has caused more than 50 million infections annually over the world. For the purposes of disease prevention and climate change health impact assessment, it is crucial to understand the weather-disease associations for dengue fever. This study investigated the nonlinear delayed impact of meteorological conditions on the spatiotemporal variations of dengue fever in southern Taiwan during 1998-2011. We present a novel integration of a distributed lag nonlinear model and Markov random fields to assess the nonlinear lagged effects of weather variables on temporal dynamics of dengue fever and to account for the geographical heterogeneity. This study identified the most significant meteorological measures to dengue fever variations, i.e., weekly minimum temperature, and the weekly maximum 24-hour rainfall, by obtaining the relative risk (RR) with respect to disease counts and a continuous 20-week lagged time. Results show that RR increased as minimum temperature increased, especially for the lagged period 5-18 weeks, and also suggest that the time to high disease risks can be decreased. Once the occurrence of maximum 24-hour rainfall is >50 mm, an associated increased RR lasted for up to 15 weeks. A temporary one-month decrease in the RR of dengue fever is noted following the extreme rain. In addition, the elevated incidence risk is identified in highly populated areas. Our results highlight the high nonlinearity of temporal lagged effects and magnitudes of temperature and rainfall on dengue fever epidemics. The results can be a practical reference for the early warning of dengue fever. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  7. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  8. Job-mix modeling and system analysis of an aerospace multiprocessor.

    NASA Technical Reports Server (NTRS)

    Mallach, E. G.

    1972-01-01

    An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.

  9. A flowgraph model for bladder carcinoma

    PubMed Central

    2014-01-01

    Background Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease still remains not well understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process. However, in order to more completely model the evolution of the disease, this approach is insufficient. The semi-Markov framework allows a more realistic approach, but calculations become frequently intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results We have built a successful model for a simple but representative case. Conclusion The flowgraph approach is suitable for modeling of superficial bladder cancer. PMID:25080066

  10. Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.

    PubMed

    Wang, Xueying; Walton, Jay R; Parshad, Rana D

    2016-01-01

    The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability for wild-type females going extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.

  11. Sieve estimation in a Markov illness-death process under dual censoring.

    PubMed

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Bayesian structural inference for hidden processes.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ε-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ε-machines, irrespective of estimated transition probabilities. Properties of ε-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  13. Bayesian structural inference for hidden processes

    NASA Astrophysics Data System (ADS)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  14. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaption Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.

  15. Markov model of the loan portfolio dynamics considering influence of management and external economic factors

    NASA Astrophysics Data System (ADS)

    Bozhalkina, Yana; Timofeeva, Galina

    2016-12-01

    A mathematical model of a loan portfolio in the form of a controlled Markov chain with discrete time is considered. It is assumed that the coefficients of the migration matrix depend on corrective actions and external factors. Corrective actions include the process of receiving applications and interaction with existing solvent and insolvent clients. External factors are macroeconomic indicators, such as inflation and unemployment rates, exchange rates, consumer price indices, etc. Changes in corrective actions adjust the intensity of transitions in the migration matrix. A mathematical model for forecasting the credit portfolio structure that takes into account the cumulative impact of internal and external changes is obtained.
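
    A minimal forecasting sketch in this spirit propagates the portfolio structure with a migration matrix whose entries are nudged by an external macroeconomic shock before each step and then re-normalized. The matrix, the adjustment rule and the shock path are all hypothetical placeholders.

      import numpy as np

      states = ["current", "30 dpd", "90 dpd", "default"]
      base = np.array([[0.92, 0.06, 0.01, 0.01],
                       [0.40, 0.45, 0.10, 0.05],
                       [0.10, 0.20, 0.45, 0.25],
                       [0.00, 0.00, 0.00, 1.00]])   # monthly migration matrix (illustrative)

      def adjusted(P, unemployment_shock):
          # Hypothetical rule: a rise in unemployment shifts mass towards worse states.
          Q = P.copy()
          Q[:-1, 1:] *= (1.0 + unemployment_shock)
          return Q / Q.sum(axis=1, keepdims=True)   # re-normalize rows

      portfolio = np.array([0.85, 0.10, 0.04, 0.01])
      for month, shock in enumerate([0.00, 0.02, 0.05, 0.05, 0.03], start=1):
          portfolio = portfolio @ adjusted(base, shock)
          print(f"month {month}: " + ", ".join(f"{s}={p:.3f}" for s, p in zip(states, portfolio)))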

  16. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, their predictions often do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth-factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.

  17. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
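
    For reference, the quantity the library accelerates is the forward log-likelihood of an HMM; a plain log-space implementation for a discrete-emission model is sketched below. This is a generic textbook version, not zipHMM's API or its compressed-input algorithm.

      import numpy as np
      from scipy.special import logsumexp

      def forward_loglik(pi, A, B, obs):
          """pi: (K,) initial probs, A: (K, K) transition matrix, B: (K, M) emission matrix."""
          log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
          alpha = log_pi + log_B[:, obs[0]]
          for o in obs[1:]:
              # alpha_j(t) = log sum_i exp(alpha_i(t-1) + log A[i, j]) + log B[j, o_t]
              alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, o]
          return float(logsumexp(alpha))

      pi = np.array([0.6, 0.4])
      A = np.array([[0.9, 0.1], [0.2, 0.8]])
      B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
      obs = np.array([0, 1, 2, 2, 1, 0, 0, 2])
      print("log-likelihood:", forward_loglik(pi, A, B, obs))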

  18. The Markov blankets of life: autonomy, active inference and the free energy principle

    PubMed Central

    Palacios, Ensor; Friston, Karl; Kiverstein, Julian

    2018-01-01

    This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment. PMID:29343629

  19. Entanglement revival can occur only when the system-environment state is not a Markov state

    NASA Astrophysics Data System (ADS)

    Sargolzahi, Iman

    2018-06-01

    Markov states have been defined for tripartite quantum systems. In this paper, we generalize the definition of the Markov states to arbitrary multipartite case and find the general structure of an important subset of them, which we will call strong Markov states. In addition, we focus on an important property of the Markov states: If the initial state of the whole system-environment is a Markov state, then each localized dynamics of the whole system-environment reduces to a localized subdynamics of the system. This provides us a necessary condition for entanglement revival in an open quantum system: Entanglement revival can occur only when the system-environment state is not a Markov state. To illustrate (a part of) our results, we consider the case that the environment is modeled as classical. In this case, though the correlation between the system and the environment remains classical during the evolution, the change of the state of the system-environment, from its initial Markov state to a state which is not a Markov one, leads to the entanglement revival in the system. This shows that the non-Markovianity of a state is not equivalent to the existence of non-classical correlation in it, in general.

  20. Markov Chain Estimation of Avian Seasonal Fecundity, Presentation

    EPA Science Inventory

    Avian seasonal fecundity is of interest from evolutionary, ecological, and conservation perspectives. However, direct estimation of seasonal fecundity is difficult, especially with multibrooded birds, and models representing the renesting and quitting processes are usually requi...

  1. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

    This dissertation explores the efficacy of statistical post-processing methods downstream of these dynamical model components, using a hierarchical multivariate Bayesian approach to ensemble model output statistics. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  2. Markov vs. Hurst-Kolmogorov behaviour identification in hydroclimatic processes

    NASA Astrophysics Data System (ADS)

    Dimitriadis, Panayiotis; Gournari, Naya; Koutsoyiannis, Demetris

    2016-04-01

    Hydroclimatic processes are usually modelled either by an exponential decay of the autocovariance function, i.e., Markovian behaviour, or by a power-type decay, i.e., long-term persistence (also known as Hurst-Kolmogorov behaviour). For the identification and quantification of such behaviours, several graphical stochastic tools can be used, such as the climacogram (i.e., the plot of the variance of the averaged process vs. scale), the autocovariance, the variogram, the power spectrum, etc., with the former usually exhibiting smaller statistical uncertainty compared to the others. However, most methodologies based on these tools rely on the expected value of the process. In this analysis, we explore a methodology that combines the practical use of a graphical representation of the internal structure of the process with the statistical robustness of maximum-likelihood estimation. For validation and illustration purposes, we apply this methodology to fundamental stochastic processes, such as Markov and Hurst-Kolmogorov type processes. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
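
    An empirical climacogram is easy to compute and already separates the two behaviours discussed above: for a Markov-type AR(1) series its log-log slope approaches -1 at large scales, whereas Hurst-Kolmogorov behaviour gives the flatter slope 2H - 2. The synthetic AR(1) series and the chosen scales below are illustrative, not part of the course material or the paper.

      import numpy as np

      def climacogram(x, scales):
          """Variance of the scale-k averaged process, for each k in `scales`."""
          out = []
          for k in scales:
              m = len(x) // k
              out.append(np.var(x[: m * k].reshape(m, k).mean(axis=1)))
          return np.array(out)

      rng = np.random.default_rng(6)
      n, phi = 100_000, 0.7
      eps = rng.standard_normal(n)
      ar1 = np.empty(n)
      ar1[0] = eps[0]
      for t in range(1, n):                 # short-memory (Markov-type) benchmark process
          ar1[t] = phi * ar1[t - 1] + eps[t]

      scales = np.array([1, 2, 4, 8, 16, 32, 64])
      gamma = climacogram(ar1, scales)
      slopes = np.diff(np.log(gamma)) / np.diff(np.log(scales))
      print("log-log climacogram slopes between successive scales:", np.round(slopes, 2))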

  3. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.

  4. STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning

    PubMed Central

    Kappel, David; Nessler, Bernhard; Maass, Wolfgang

    2014-01-01

    In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787

  5. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    NASA Astrophysics Data System (ADS)

    Gan, Tingyue; Cameron, Maria

    2017-06-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  6. Animal vocal sequences: not the Markov chains we thought they were.

    PubMed

    Kershenbaum, Arik; Bowles, Ann E; Freeberg, Todd M; Jin, Dezhe Z; Lameira, Adriano R; Bohn, Kirsten

    2014-10-07

    Many animals produce vocal sequences that appear complex. Most researchers assume that these sequences are well characterized as Markov chains (i.e. that the probability of a particular vocal element can be calculated from the history of only a finite number of preceding elements). However, this assumption has never been explicitly tested. Furthermore, it is unclear how language could evolve in a single step from a Markovian origin, as is frequently assumed, as no intermediate forms have been found between animal communication and human language. Here, we assess whether animal taxa produce vocal sequences that are better described by Markov chains, or by non-Markovian dynamics such as the 'renewal process' (RP), characterized by a strong tendency to repeat elements. We examined vocal sequences of seven taxa: Bengalese finches Lonchura striata domestica, Carolina chickadees Poecile carolinensis, free-tailed bats Tadarida brasiliensis, rock hyraxes Procavia capensis, pilot whales Globicephala macrorhynchus, killer whales Orcinus orca and orangutans Pongo spp. The vocal systems of most of these species are more consistent with a non-Markovian RP than with the Markovian models traditionally assumed. Our data suggest that non-Markovian vocal sequences may be more common than Markov sequences, which must be taken into account when evaluating alternative hypotheses for the evolution of signalling complexity, and perhaps human language origins. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Which Came First, the MCnest or the MCnugget? A Survey of Markov Chain Applications in Avian Ecology, Population Biology and Chemical Risk Assessment.

    EPA Science Inventory

    Stochastic processes, such as survival and reproductive success, govern the trajectories of animal populations. Models of such processes have become increasingly important in understanding the effects of environmental change and anthropogenic disturbance on the ability of popula...

  8. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  9. Theory of multinonlinear media and its application to the soliton processes in ferrite–ferroelectric structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherkasskii, M. A., E-mail: macherkasskii@hotmail.com; Nikitin, A. A.; Kalinikos, B. A.

    A theory is developed to describe the wave processes that occur in waveguide media having several types of nonlinearity, specifically, multinonlinear media. It is shown that the nonlinear Schrödinger equation can be used to describe the general wave process that occurs in such media. The competition between the electric wave nonlinearity and the magnetic wave nonlinearity in a layered multinonlinear ferrite–ferroelectric structure is found to change a total repulsive nonlinearity into a total attractive nonlinearity.

  10. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.

    PubMed

    Djordjevic, Ivan B

    2015-08-24

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To address these problems, we propose novel quantum mechanical models to accurately describe the process of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One important implication of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled.

  11. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    PubMed Central

    Djordjevic, Ivan B.

    2015-01-01

    Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To address these problems, we propose novel quantum mechanical models to accurately describe the process of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and the evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One important implication of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually coupled. PMID:26305258

  12. Quantum Mechanics, Pattern Recognition, and the Mammalian Brain

    NASA Astrophysics Data System (ADS)

    Chapline, George

    2008-10-01

    Although the usual way of representing Markov processes is time asymmetric, there is a way of describing Markov processes, due to Schrodinger, which is time symmetric. This observation provides a link between quantum mechanics and the layered Bayesian networks that are often used in automated pattern recognition systems. In particular, there is a striking formal similarity between quantum mechanics and a particular type of Bayesian network, the Helmholtz machine, which provides a plausible model for how the mammalian brain recognizes important environmental situations. One interesting aspect of this relationship is that the "wake-sleep" algorithm for training a Helmholtz machine is very similar to the problem of finding the potential for the multi-channel Schrodinger equation. As a practical application of this insight it may be possible to use inverse scattering techniques to study the relationship between human brain wave patterns, pattern recognition, and learning. We also comment on whether there is a relationship between quantum measurements and consciousness.

  13. Complete protein-protein association kinetics in atomic detail revealed by molecular dynamics simulations and Markov modelling

    NASA Astrophysics Data System (ADS)

    Plattner, Nuria; Doerr, Stefan; de Fabritiis, Gianni; Noé, Frank

    2017-10-01

    Protein-protein association is fundamental to many life processes. However, a microscopic model describing the structures and kinetics during association and dissociation is lacking on account of the long lifetimes of associated states, which have prevented efficient sampling by direct molecular dynamics (MD) simulations. Here we demonstrate protein-protein association and dissociation in atomistic resolution for the ribonuclease barnase and its inhibitor barstar by combining adaptive high-throughput MD simulations and hidden Markov modelling. The model reveals experimentally consistent intermediate structures, energetics and kinetics on timescales from microseconds to hours. A variety of flexibly attached intermediates and misbound states funnel down to a transition state and a native basin consisting of the loosely bound near-native state and the tightly bound crystallographic state. These results offer a deeper level of insight into macromolecular recognition and our approach opens the door for understanding and manipulating a wide range of macromolecular association processes.

  14. Copula-based analysis of rhythm

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. Lanfredi

    2016-06-01

    In this paper we establish stochastic profiles of the rhythm of three languages: English, Japanese and Spanish. We model the increase or decrease of the acoustical energy collected in three bands of the acoustic signal. The number of parameters needed to specify a discrete multivariate Markov chain grows exponentially with the order and dimension of the chain. In this case the size of the database is not large enough for a consistent estimation of the model. We apply a strategy to estimate a multivariate process with an order greater than the order achievable using standard procedures. The new strategy consists of obtaining a partition of the state space constructed from a combination of the partitions corresponding to the three marginal processes, one for each energy band, and the partition coming from the multivariate Markov chain. All the partitions are then linked using a copula in order to estimate the transition probabilities.

  15. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    NASA Astrophysics Data System (ADS)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and the high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and for estimating the tool wear state from cutting force features. For noise robustness, the HMM outputs are passed through a median filter to suppress spurious state transitions caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  16. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridge between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly discussed.
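
    A minimal sketch of the time-homogeneous embedding test that the abstract generalises: take the matrix logarithm of an empirical transition matrix and check whether it is a valid generator (non-negative off-diagonal entries, zero row sums). The example matrix and tolerances are assumptions; the paper's time-inhomogeneous extension is not reproduced here.

      import numpy as np
      from scipy.linalg import logm

      def generator_from(P, tol=1e-8):
          """Return a candidate generator of P, or None if no valid one is found."""
          L = logm(P).real
          off_diag_ok = np.all(L - np.diag(np.diag(L)) >= -tol)   # rates must be >= 0
          zero_rows_ok = np.allclose(L.sum(axis=1), 0.0, atol=1e-6)
          return L if (off_diag_ok and zero_rows_ok) else None

      P = np.array([[0.90, 0.08, 0.02],
                    [0.05, 0.90, 0.05],
                    [0.02, 0.08, 0.90]])
      G = generator_from(P)
      print("embeddable (time-homogeneous):", G is not None)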

  17. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.

  18. Statistical inference for noisy nonlinear ecological dynamic systems.

    PubMed

    Wood, Simon N

    2010-08-26

    Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
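
    A minimal sketch of the synthetic-likelihood idea described above: simulate the model repeatedly at a candidate parameter, reduce each run to summary statistics, fit a multivariate normal to those statistics, and score the observed statistics under it. The Ricker-style map, the chosen summaries and the simulation sizes are assumptions, not Wood's exact setup.

      import numpy as np

      def simulate(theta, n=100, rng=None):
          """Toy Ricker-style map with multiplicative log-normal noise."""
          r, sigma = theta
          rng = rng or np.random.default_rng()
          x = np.empty(n)
          x[0] = 1.0
          for t in range(1, n):
              x[t] = x[t - 1] * np.exp(r * (1 - x[t - 1]) + sigma * rng.standard_normal())
          return x

      def summaries(x):
          # phase-insensitive summaries: mean, spread, lag-1 autocorrelation
          return np.array([x.mean(), x.std(), np.corrcoef(x[:-1], x[1:])[0, 1]])

      def synthetic_loglik(theta, s_obs, n_sim=200):
          rng = np.random.default_rng(1)
          S = np.array([summaries(simulate(theta, rng=rng)) for _ in range(n_sim)])
          mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-9 * np.eye(S.shape[1])
          d = s_obs - mu
          _, logdet = np.linalg.slogdet(cov)
          return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

      s_obs = summaries(simulate((2.5, 0.3), rng=np.random.default_rng(42)))
      print(synthetic_loglik((2.5, 0.3), s_obs), synthetic_loglik((1.0, 0.3), s_obs))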

  19. Stochastic differential equation model for linear growth birth and death processes with immigration and emigration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granita, E-mail: granitafc@gmail.com; Bahar, A.

    This paper discusses the approximation of the linear birth and death process with immigration and emigration (BIDE) by a stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean and variance function of the BIDE process were found.
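
    A minimal sketch of simulating the resulting diffusion (SDE) approximation of a BIDE process with the Euler-Maruyama scheme; the drift and infinitesimal variance follow the standard Kramers-Moyal form, while the rate values, step size and initial state are assumptions:

      import numpy as np

      lam, mu, alpha, beta = 0.20, 0.25, 1.0, 0.5   # birth, death, immigration, emigration
      dt, n_steps, x0 = 0.01, 5000, 20.0

      rng = np.random.default_rng(0)
      x = np.empty(n_steps)
      x[0] = x0
      for t in range(1, n_steps):
          drift = (lam - mu) * x[t - 1] + (alpha - beta)
          var_rate = (lam + mu) * x[t - 1] + (alpha + beta)   # infinitesimal variance
          step = drift * dt + np.sqrt(max(var_rate, 0.0) * dt) * rng.standard_normal()
          x[t] = max(x[t - 1] + step, 0.0)

      # The path fluctuates around the deterministic equilibrium (alpha-beta)/(mu-lam) = 10.
      print("sample mean and variance:", x.mean().round(2), x.var().round(2))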

  20. Protocol and practice in the adaptive management of waterfowl harvests

    USGS Publications Warehouse

    Johnson, F.; Williams, K.

    1999-01-01

    Waterfowl harvest management in North America, for all its success, historically has had several shortcomings, including a lack of well-defined objectives, a failure to account for uncertain management outcomes, and inefficient use of harvest regulations to understand the effects of management. To address these and other concerns, the U.S. Fish and Wildlife Service began implementation of adaptive harvest management in 1995. Harvest policies are now developed using a Markov decision process in which there is an explicit accounting for uncontrolled environmental variation, partial controllability of harvest, and structural uncertainty in waterfowl population dynamics. Current policies are passively adaptive, in the sense that any reduction in structural uncertainty is an unplanned by-product of the regulatory process. A generalization of the Markov decision process permits the calculation of optimal actively adaptive policies, but it is not yet clear how state-specific harvest actions differ between passive and active approaches. The Markov decision process also provides managers the ability to explore optimal levels of aggregation or "management scale" for regulating harvests in a system that exhibits high temporal, spatial, and organizational variability. Progress in institutionalizing adaptive harvest management has been remarkable, but some managers still perceive the process as a panacea, while failing to appreciate the challenges presented by this more explicit and methodical approach to harvest regulation. Technical hurdles include the need to develop better linkages between population processes and the dynamics of landscapes, and to model the dynamics of structural uncertainty in a more comprehensive fashion. From an institutional perspective, agreement on how to value and allocate harvests continues to be elusive, and there is some evidence that waterfowl managers have overestimated the importance of achievement-oriented factors in setting hunting regulations. Indeed, it is these unresolved value judgements, and the lack of an effective structure for organizing debate, that present the greatest threat to adaptive harvest management as a viable means for coping with management uncertainty. Copyright © 1999 by The Resilience Alliance.
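
    A minimal sketch of solving a small Markov decision process by value iteration, in the spirit of the harvest-policy optimisation described above. The two population states (low/high), the two regulations (restrictive/liberal), the transition probabilities, returns and discount factor are all illustrative assumptions, not the Service's actual models.

      import numpy as np

      # P[a, s, s']: population transition probabilities under regulation a
      P = np.array([[[0.6, 0.4],     # restrictive, from low
                     [0.2, 0.8]],    # restrictive, from high
                    [[0.8, 0.2],     # liberal, from low
                     [0.5, 0.5]]])   # liberal, from high
      R = np.array([[0.5, 1.0],      # expected harvest value: restrictive in low/high
                    [1.0, 2.0]])     # liberal in low/high
      gamma = 0.95

      V = np.zeros(2)
      for _ in range(500):
          Q = R + gamma * (P @ V)    # Q[a, s]: action value
          V = Q.max(axis=0)
      policy = Q.argmax(axis=0)      # best regulation per population state
      print(V.round(2), policy)      # 0 = restrictive, 1 = liberal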

  1. Information dynamics algorithm for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Massaro, Emanuele; Bagnoli, Franco; Guazzini, Andrea; Lió, Pietro

    2012-11-01

    The problem of community detection is relevant in many scientific disciplines, from social science to statistical physics. Given the impact of community detection in many areas, such as psychology and the social sciences, we have addressed the issue of modifying existing well-performing algorithms by incorporating elements of the application domains, i.e. making them domain-inspired. We have focused on a psychology- and social-network-inspired approach which may be useful for further strengthening the link between social network studies and the mathematics of community detection. Here we introduce a community-detection algorithm derived from van Dongen's Markov Cluster (MCL) algorithm [4] by considering the nodes of a network as agents capable of taking decisions. In this framework we have introduced a memory factor to mimic a typical human behaviour such as the oblivion effect. The method is based on information diffusion and includes a non-linear processing phase. We test our method on two classical community benchmarks and on computer-generated networks with known community structure. Our approach has three important features: the capacity to detect overlapping communities, the capability to identify communities from an individual point of view, and the fine tuning of community detectability with respect to prior knowledge of the data. Finally we discuss how to use a Shannon entropy measure for parameter estimation in complex networks.
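
    A minimal sketch of the van Dongen Markov Cluster (MCL) procedure that the abstract takes as its starting point: alternate expansion (matrix squaring) and inflation (element-wise power followed by column normalisation) on the graph's random-walk matrix. The toy adjacency matrix and the inflation parameter are assumptions, and the paper's memory/oblivion extension is not reproduced.

      import numpy as np

      def mcl(adj, inflation=2.0, n_iter=50):
          M = adj + np.eye(len(adj))             # add self-loops
          M = M / M.sum(axis=0, keepdims=True)   # column-stochastic random-walk matrix
          for _ in range(n_iter):
              M = np.linalg.matrix_power(M, 2)   # expansion
              M = M ** inflation                 # inflation
              M = M / M.sum(axis=0, keepdims=True)
          # non-vanishing rows act as cluster attractors; their support lists the members
          return [np.nonzero(row > 1e-6)[0].tolist() for row in M if row.sum() > 1e-6]

      adj = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
      print(mcl(adj))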

  2. RECONSTRUCTING THREE-DIMENSIONAL JET GEOMETRY FROM TWO-DIMENSIONAL IMAGES

    NASA Astrophysics Data System (ADS)

    Avachat, Sayali; Perlman, Eric S.; Li, Kunyang; Kosak, Katie

    2018-01-01

    Relativistic jets in AGN are among the most interesting and complex structures in the Universe. Some jets extend over hundreds of kiloparsecs from the central engine and display various bends, knots and hotspots. Observations of the jets can prove helpful in understanding the emission and particle acceleration processes from sub-arcsecond to kiloparsec scales and the role of the magnetic field in them. The M87 jet has many bright knots as well as regions of small and large bends. We attempt to model the jet geometry using the observed two-dimensional structure. The radio and optical images of the jet show evidence of the presence of a helical magnetic field throughout. Using the observed structure in the sky frame, our goal is to gain insight into the intrinsic three-dimensional geometry in the jet's frame. The structure of the bends in the jet's frame may be quite different from what we see in the sky frame. Knowledge of the intrinsic structure will be helpful in understanding the appearance of the magnetic field and hence the polarization morphology. To achieve this, we use numerical methods to solve the non-linear equations based on the jet geometry, employing a log-likelihood method and an algorithm based on Markov Chain Monte Carlo (MCMC) simulations.

  3. Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.

    PubMed

    Salis, Howard; Kaznessis, Yiannis

    2005-02-01

    The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
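
    For reference, a minimal sketch of the exact stochastic simulation algorithm (Gillespie SSA) on which the hybrid method builds, applied to a toy birth-death reaction set; the rates and horizon are assumptions, and the paper's fast/slow partitioning and chemical Langevin treatment are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      x, t, t_end = 10, 0.0, 50.0              # molecule count, time, horizon
      k_birth, k_death = 5.0, 0.2

      while t < t_end:
          a = np.array([k_birth, k_death * x])  # propensities of 0 -> X and X -> 0
          a0 = a.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)        # waiting time to the next reaction
          if rng.random() < a[0] / a0:          # choose which reaction fires
              x += 1
          else:
              x -= 1

      print("state at t =", round(t, 2), "is", x)   # fluctuates around k_birth/k_death = 25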

  4. Evaluating reliability of WSN with sleep/wake-up interfering nodes

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore

    2013-10-01

    A wireless sensor network (WSN) (singular and plural of the acronym are spelled the same) is a distributed system composed of autonomous sensor nodes, wirelessly connected and randomly scattered over a geographical area to cooperatively monitor physical or environmental conditions. Adequate techniques and strategies are required to manage a WSN so that it works properly, observing specific quantities and metrics to evaluate the WSN operational conditions. Among them, one of the most important is reliability. Considering a WSN as a system composed of sensor nodes, the system reliability approach can be applied, thus expressing the WSN reliability in terms of its nodes' reliability. More specifically, since standby power management policies are often applied at node level and interferences among nodes may arise, a WSN can be considered a dynamic system. In this article we therefore consider the WSN reliability evaluation problem from the dynamic system reliability perspective. Static-structural interactions are specified by the WSN topology. Sleep/wake-up standby policies and interferences due to wireless communications can instead be considered dynamic aspects. Thus, in order to represent and evaluate the WSN reliability, we use dynamic reliability block diagrams and Petri nets. The proposed technique makes it possible to overcome the limits of Markov models when considering non-linear discharge processes, since Markov models cannot adequately represent such aging processes. In order to demonstrate the effectiveness of the technique, we investigate some specific WSN network topologies, providing guidelines for their representation and evaluation.

  5. Identifying and correcting non-Markov states in peptide conformational dynamics

    NASA Astrophysics Data System (ADS)

    Nerukh, Dmitry; Jensen, Christian H.; Glen, Robert C.

    2010-02-01

    Conformational transitions in proteins define their biological activity and can be investigated in detail using the Markov state model. The fundamental assumption on the transitions between the states, their Markov property, is critical in this framework. We test this assumption by analyzing the transitions obtained directly from the dynamics of a molecular dynamics simulated peptide valine-proline-alanine-leucine and states defined phenomenologically using clustering in dihedral space. We find that the transitions are Markovian at the time scale of ≈50 ps and longer. However, at the time scale of 30-40 ps the dynamics loses its Markov property. Our methodology reveals the mechanism that leads to non-Markov behavior. It also provides a way of regrouping the conformations into new states that now possess the required Markov property of their dynamics.
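
    A minimal sketch of one common way to probe the Markov property of a discretised state sequence, in the spirit of the analysis above: estimate the transition matrix at lag tau and at lag 2*tau and check that the Chapman-Kolmogorov relation T(2*tau) is approximately T(tau)^2. The synthetic three-state trajectory and the lag are assumptions, not the simulated peptide data.

      import numpy as np

      def transition_matrix(states, n_states, lag):
          C = np.zeros((n_states, n_states))
          for i, j in zip(states[:-lag], states[lag:]):
              C[i, j] += 1
          return C / C.sum(axis=1, keepdims=True)

      rng = np.random.default_rng(3)
      T_true = np.array([[0.9, 0.1, 0.0],
                         [0.1, 0.8, 0.1],
                         [0.0, 0.2, 0.8]])
      states = [0]
      for _ in range(50000):
          states.append(rng.choice(3, p=T_true[states[-1]]))
      states = np.array(states)

      tau = 5
      T1 = transition_matrix(states, 3, tau)
      T2 = transition_matrix(states, 3, 2 * tau)
      print("max Chapman-Kolmogorov deviation:",
            np.abs(np.linalg.matrix_power(T1, 2) - T2).max().round(3))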

  6. All-Optical Control of Linear and Nonlinear Energy Transfer via the Zeno Effect

    NASA Astrophysics Data System (ADS)

    Guo, Xiang; Zou, Chang-Ling; Jiang, Liang; Tang, Hong X.

    2018-05-01

    Microresonator-based nonlinear processes are fundamental to applications including microcomb generation, parametric frequency conversion, and harmonics generation. While nonlinear processes involving either second- (χ(2)) or third- (χ(3)) order nonlinearity have been extensively studied, the interaction between these two basic nonlinear processes has seldom been reported. In this paper we demonstrate a coherent interplay between second- and third-order nonlinear processes. The parametric (χ(2)) coupling to a lossy ancillary mode shortens the lifetime of the target photonic mode and suppresses its density of states, preventing the photon emissions into the target photonic mode via the Zeno effect. Such an effect is then used to control the stimulated four-wave mixing process and realize a suppression ratio of 34.5.

  7. Probabilistic Reasoning Over Seismic Time Series: Volcano Monitoring by Hidden Markov Models at Mt. Etna

    NASA Astrophysics Data System (ADS)

    Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea; Montalto, Placido; Patanè, Domenico; Privitera, Eugenio

    2016-07-01

    From January 2011 to December 2015, Mt. Etna was mainly characterized by cyclic eruptive behaviour, with more than 40 lava fountains from the New South-East Crater. Using the RMS (root mean square) of the seismic signal recorded by stations close to the summit area, automatic recognition of the different states of volcanic activity (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN) has been applied for monitoring purposes. Since the values of the RMS time series calculated on the seismic signal are generated by a stochastic process, we can try to model the system generating them, assumed to be a Markov process, using hidden Markov models (HMMs). HMM analysis seeks to recover the sequence of hidden states from the observations. In our framework, observations are characters generated by the Symbolic Aggregate approXimation (SAX) technique, which maps RMS time series values to symbols of a pre-defined alphabet. The main advantages of the proposed framework, based on HMMs and SAX, with respect to other automatic systems applied to seismic signals at Mt. Etna, are the use of multiple stations and of static thresholds to characterize the volcano states well. Its application to a wide seismic dataset of Etna volcano shows that it is possible to infer the volcano states. The experimental results show that, in most cases, lava fountains were detected in advance.
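
    A minimal sketch of the SAX step described above: reduce an RMS-like series with piecewise aggregate approximation (PAA) and map the segment means to alphabet symbols using Gaussian breakpoints. The synthetic series, segment count and alphabet size are assumptions, and the downstream HMM decoding stage is not shown.

      import numpy as np
      from scipy.stats import norm

      def sax(series, n_segments=20, alphabet_size=4):
          z = (series - series.mean()) / series.std()            # z-normalise
          paa = np.array([s.mean() for s in np.array_split(z, n_segments)])
          breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
          symbols = np.searchsorted(breakpoints, paa)             # 0 .. alphabet_size-1
          return "".join(chr(ord("a") + int(s)) for s in symbols)

      rng = np.random.default_rng(0)
      rms = np.concatenate([rng.normal(1.0, 0.1, 200),   # QUIET-like level
                            rng.normal(3.0, 0.3, 100),   # PRE-FOUNTAIN-like ramp
                            rng.normal(8.0, 0.8, 50)])   # FOUNTAIN-like burst
      print(sax(rms))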

  8. Nonlinear Real-Time Optical Signal Processing.

    DTIC Science & Technology

    1981-06-30

    bandwidth and space-bandwidth products. Real-time homomorphic and logarithmic filtering by halftone nonlinear processing has been achieved. A detailed analysis of degradation due to the finite gamma

  9. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zuwei; Zhao, Haibo, E-mail: klinsmannzhb@163.com; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are demonstrated in a physically realistic Brownian coagulation case. The computational accuracy is validated with benchmark solution of discrete-sectional method. The simulation results show that the comprehensive approach can attain very favorable improvement in cost without sacrificing computational accuracy.

  10. Sub- and super-diffusion on Cantor sets: Beyond the paradox

    NASA Astrophysics Data System (ADS)

    K. Golmankhaneh, Alireza; Balankin, Alexander S.

    2018-04-01

    There is no way to build a nontrivial Markov process having continuous trajectories on a totally disconnected fractal embedded in the Euclidean space. Accordingly, in order to delineate the diffusion process on the totally disconnected fractal, one needs to relax the continuum requirement. Consequently, a diffusion process depends on how the continuum requirement is handled. This explains the emergence of different types of anomalous diffusion on the same totally disconnected set. In this regard, we argue that the number of effective spatial degrees of freedom of a random walker on the totally disconnected Cantor set is equal to nsp = [ D ] + 1, where [ D ] is the integer part of the Hausdorff dimension of the Cantor set. Conversely, the number of effective dynamical degrees of freedom (ds) depends on the definition of a Markov process on the totally disconnected Cantor set embedded in the Euclidean space En (n ≥nsp). This allows us to deduce the equation of diffusion by employing the local differential operators on the Fα-support. The exact solutions of this equation are obtained on the middle-ɛ Cantor sets for different kinds of the Markovian random processes. The relation of our findings to physical phenomena observed in complex systems is highlighted.

  11. Post processing of optically recognized text via second order hidden Markov model

    NASA Astrophysics Data System (ADS)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system on Optical Character Recognition(OCR) generated text. Second Order Hidden Markov Model (HMM) approach is used to detect and correct the OCR related errors. The reason for choosing the 2nd order HMM is to keep track of the bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and testing of 5,688 characters, the model was able to correct 43.38 % of the errors with a precision of 75.34 %. However, the precision value indicates that the model introduced some new errors, decreasing the correction percentage to 26.4%.

  12. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, i.e., the law of lumped Markov chains, depends linearly on the minimum degree of the polynomials over field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined "linear complexity".

  13. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

    NASA Astrophysics Data System (ADS)

    Johnson, Joseph

    2016-03-01

    We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called ``metanumbers'') support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on converting scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how linking clusters from different tables can be used to form a ``supernet'' of all numerical information, supporting new initiatives in AI.

  14. Eternal non-Markovianity: from random unitary to Markov chain realisations.

    PubMed

    Megier, Nina; Chruściński, Dariusz; Piilo, Jyrki; Strunz, Walter T

    2017-07-25

    The theoretical description of quantum dynamics in an intriguing way does not necessarily imply the underlying dynamics is indeed intriguing. Here we show how a known very interesting master equation with an always negative decay rate [eternal non-Markovianity (ENM)] arises from simple stochastic Schrödinger dynamics (random unitary dynamics). Equivalently, it may be seen as arising from a mixture of Markov (semi-group) open system dynamics. Both these approaches lead to a more general family of CPT maps, characterized by a point within a parameter triangle. Our results show how ENM quantum dynamics can be realised easily in the laboratory. Moreover, we find a quantum time-continuously measured (quantum trajectory) realisation of the dynamics of the ENM master equation based on unitary transformations and projective measurements in an extended Hilbert space, guided by a classical Markov process. Furthermore, a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) representation of the dynamics in an extended Hilbert space can be found, with a remarkable property: there is no dynamics in the ancilla state. Finally, analogous constructions for two qubits extend these results from non-CP-divisible to non-P-divisible dynamics.

  15. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2014-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130

  16. Markov chain aggregation and its applications to combinatorial reaction networks.

    PubMed

    Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz

    2014-09-01

    We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes that the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section is devoted to illustrate how the developed theory aids in reducing CTMC models of biochemical systems particularly in connection to protein-protein interactions. We assume that the model is written by a biologist in form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.

  17. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2014-12-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  18. Birth/death process model

    NASA Technical Reports Server (NTRS)

    Solloway, C. B.; Wakeland, W.

    1976-01-01

    First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.

  19. Phylogenetic mixtures and linear invariants for equal input models.

    PubMed

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).

  20. All-optical regenerator of multi-channel signals.

    PubMed

    Li, Lu; Patki, Pallavi G; Kwon, Young B; Stelmakh, Veronika; Campbell, Brandon D; Annamalai, Muthiah; Lakoba, Taras I; Vasilyev, Michael

    2017-10-12

    One of the main reasons why nonlinear-optical signal processing (regeneration, logic, etc.) has not yet become a practical alternative to electronic processing is that the all-optical elements with nonlinear input-output relationship have remained inherently single-channel devices (just like their electronic counterparts) and, hence, cannot fully utilise the parallel processing potential of optical fibres and amplifiers. The nonlinear input-output transfer function requires strong optical nonlinearity, e.g. self-phase modulation, which, for fundamental reasons, is always accompanied by cross-phase modulation and four-wave mixing. In processing multiple wavelength-division-multiplexing channels, large cross-phase modulation and four-wave mixing crosstalks among the channels destroy signal quality. Here we describe a solution to this problem: an optical signal processor employing a group-delay-managed nonlinear medium where strong self-phase modulation is achieved without such nonlinear crosstalk. We demonstrate, for the first time to our knowledge, simultaneous all-optical regeneration of up to 16 wavelength-division-multiplexing channels by one device. This multi-channel concept can be extended to other nonlinear-optical processing schemes. Nonlinear optical processing devices are not yet fully practical as they are single channel. Here the authors demonstrate all-optical regeneration of up to 16 channels by one device, employing a group-delay-managed nonlinear medium where strong self-phase modulation is achieved without nonlinear inter-channel crosstalk.

  1. Nonlinear dynamics that appears in the dynamical model of drying process of a polymer solution coated on a flat substrate

    NASA Astrophysics Data System (ADS)

    Kagami, Hiroyuki

    2007-01-01

    We have proposed and refined a dynamical model of the drying process of a polymer solution coated on a flat substrate for flat polymer film fabrication, and have presented the results at several meetings. Although the basic equations of the dynamical model have a characteristic nonlinearity, the character of this nonlinearity has not yet been studied in sufficient detail. In this paper, we first derive nonlinear equations from the dynamical model of the drying process of a polymer solution. We then present results of numerical simulations of the nonlinear equations and consider the roles of various parameters, some of which are indirectly related to the strength of the departure from equilibrium. Through this study, we approach the essential character of the nonlinearity in the non-equilibrium drying process.

  2. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
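
    A minimal sketch of Viterbi decoding for a three-state Poisson hidden Markov model, mirroring the 'Low'/'Moderate'/'High' setup described above. The transition matrix, initial distribution, Poisson means and the short count series are illustrative assumptions, not the values fitted in the paper.

      import numpy as np
      from scipy.stats import poisson

      A = np.array([[0.7, 0.2, 0.1],      # state transition probabilities
                    [0.2, 0.6, 0.2],
                    [0.1, 0.3, 0.6]])
      lam = np.array([1.4, 6.6, 20.2])    # Poisson mean per hidden state
      pi = np.array([0.5, 0.3, 0.2])      # initial state distribution
      counts = np.array([0, 2, 1, 7, 5, 9, 22, 18, 25, 6, 3, 1])

      logB = poisson.logpmf(counts[:, None], lam[None, :])   # T x K emission log-probs
      delta = np.log(pi) + logB[0]
      back = np.zeros((len(counts), 3), dtype=int)
      for t in range(1, len(counts)):
          scores = delta[:, None] + np.log(A)    # scores[i, j]: best path ending i -> j
          back[t] = scores.argmax(axis=0)
          delta = scores.max(axis=0) + logB[t]

      path = [int(delta.argmax())]
      for t in range(len(counts) - 1, 0, -1):
          path.append(int(back[t, path[-1]]))
      print(list(reversed(path)))   # most likely 0/1/2 = 'Low'/'Moderate'/'High' sequence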

  3. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs the JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.

  4. Algorithms for Discovery of Multiple Markov Boundaries

    PubMed Central

    Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.

    2013-01-01

    Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052

  5. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains

    PubMed Central

    Meyer, Denny; Forbes, Don; Clarke, Stephen R.

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946

  6. Probability distributions for Markov chain based quantum walks

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  8. Stratification of the phase clouds and statistical effects of the non-Markovity in chaotic time series of human gait for healthy people and Parkinson patients

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Demin, Sergey; Emelyanova, Natalya; Gafarov, Fail; Hänggi, Peter

    2003-03-01

    In this work we develop a new method of diagnosing nervous system diseases and a new approach to studying human gait dynamics with the help of the theory of discrete non-Markov random processes (Phys. Rev. E 62 (5) (2000) 6178, Phys. Rev. E 64 (2001) 066132, Phys. Rev. E 65 (2002) 046107, Physica A 303 (2002) 427). The stratification of the phase clouds and the statistical non-Markov effects in the time series of the dynamics of human gait are considered. We carried out a comparative analysis of the data of four age groups of healthy people: children (3 to 10 years old), teenagers (11 to 14 years old), young people (21 to 29 years old), elderly persons (71 to 77 years old), and Parkinson patients. The full data set is analyzed with the help of the phase portraits of the four dynamic variables, the power spectra of the initial time correlation function and the memory functions of junior orders, and the first three points in the spectra of the statistical non-Markov parameter. The results obtained allow us to identify the predisposition of the subjects to central nervous system disorders caused by Parkinson's disease. We found distinct differences between the five groups. On this basis we offer a new method for diagnosing and forecasting Parkinson's disease.

  9. Regenerative Simulation of Harris Recurrent Markov Chains.

    DTIC Science & Technology

    1982-07-01

    Technical Report TR-62: Regenerative Simulation of Harris Recurrent Markov Chains, Peter W. Glynn, Department of Operations Research, Stanford University, July 1982 (report cover metadata only; no abstract available in this record).

  10. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighting coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed is designed to validate the dynamic multi-scale Markov model; experimental results illustrate the effectiveness of the methodology.

  11. Preliminary testing for the Markov property of the fifteen chromatin states of the Broad Histone Track.

    PubMed

    Lee, Kyung-Eun; Park, Hyun-Seok

    2015-01-01

    Epigenetic computational analyses based on Markov chains can integrate dependencies between regions in the genome that are directly adjacent. In this paper, the BED files of fifteen chromatin states of the Broad Histone Track of the ENCODE project are parsed, and comparative nucleotide frequencies of regional chromatin blocks are thoroughly analyzed to detect the Markov property in them. We perform various tests in the frequency domain to check for the presence of the Markov property in the various chromatin states, applying these tests to each region of the fifteen chromatin states. The results of our simulation indicate that some of the chromatin states possess a stronger Markov property than others. We discuss the significance of our findings for statistical models of nucleotide sequences that are necessary for the computational analysis of functional units in noncoding DNA.
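
    A common way to check for a first-order Markov property in a symbol sequence is a likelihood-ratio (G) test of an order-1 transition model against an order-0 (independence) model. The sketch below is a generic version of such a test on a synthetic nucleotide string; it is not the specific test battery used in the paper.

```python
# Simple likelihood-ratio (G) test of a first-order Markov model against an
# independence model for a symbol sequence. The sequence here is synthetic.
import numpy as np
from scipy.stats import chi2

def markov_order1_test(seq, alphabet="ACGT"):
    idx = {c: i for i, c in enumerate(alphabet)}
    x = np.array([idx[c] for c in seq])
    k = len(alphabet)
    n = np.zeros((k, k))                      # transition counts n[i, j]
    for a, b in zip(x[:-1], x[1:]):
        n[a, b] += 1
    row = n.sum(axis=1, keepdims=True)        # counts of each "from" symbol
    col = n.sum(axis=0) / n.sum()             # marginal frequency of each "to" symbol
    with np.errstate(divide="ignore", invalid="ignore"):
        ll1 = np.nansum(n * np.log(np.where(n > 0, n / row, 1.0)))  # order-1 log-lik
        ll0 = np.nansum(n * np.log(col[None, :]))                   # order-0 log-lik
    G = 2.0 * (ll1 - ll0)
    df = (k - 1) ** 2
    return G, chi2.sf(G, df)

seq = "ACGTGTCA" * 500
G, p = markov_order1_test(seq)
print(f"G = {G:.1f}, p = {p:.3g}")
```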

  12. Slow diffusion by Markov random flights

    NASA Astrophysics Data System (ADS)

    Kolesnik, Alexander D.

    2018-06-01

    We present a concept of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some low-dimensional Euclidean spaces are presented.

  13. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.

  14. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546

  15. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  16. Kinetics of CO2 diffusion in human carbonic anhydrase: a study using molecular dynamics simulations and the Markov-state model.

    PubMed

    Chen, Gong; Kong, Xian; Lu, Diannan; Wu, Jianzhong; Liu, Zheng

    2017-05-10

    Molecular dynamics (MD) simulations, in combination with the Markov-state model (MSM), were applied to probe CO2 diffusion from an aqueous solution into the active site of human carbonic anhydrase II (hCA-II), an enzyme useful for enhanced CO2 capture and utilization. The diffusion process in the hydrophobic pocket of hCA-II was illustrated in terms of a two-dimensional free-energy landscape. We found that CO2 diffusion in hCA-II is a rate-limiting step in the CO2 diffusion-binding-reaction process. The equilibrium distribution of CO2 shows its preferential accumulation within a hydrophobic domain in the protein core region. An analysis of the committors and reactive fluxes indicates that the main pathway for CO2 diffusion into the active site of hCA-II is through a binding pocket where residue Gln136 contributes to the maximal flux. The simulation results offer a new perspective on the CO2 hydration kinetics and useful insights toward the development of novel biochemical processes for more efficient CO2 sequestration and utilization.
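
    The core numerical object in such an analysis is a row-stochastic transition matrix estimated from a discretized trajectory at a chosen lag time. The sketch below shows that generic estimation step (count matrix, symmetrization, transition matrix, stationary distribution) on a synthetic three-state trajectory; it is a minimal stand-in for the MSM construction, not the authors' pipeline, which would use dedicated tools and clustered MD coordinates.

```python
# Generic Markov-state-model estimation from a discretized trajectory:
# count matrix at lag tau -> row-stochastic transition matrix -> stationary
# distribution. The trajectory below is synthetic.
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    C = 0.5 * (C + C.T)                      # crude detailed-balance symmetrization
    T = C / C.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return T, pi / pi.sum()

rng = np.random.default_rng(1)
true_T = np.array([[0.95, 0.04, 0.01],
                   [0.05, 0.90, 0.05],
                   [0.01, 0.09, 0.90]])
dtraj = [0]
for _ in range(20000):
    dtraj.append(rng.choice(3, p=true_T[dtraj[-1]]))

T_hat, pi_hat = estimate_msm(np.array(dtraj), n_states=3, lag=10)
print("stationary distribution:", np.round(pi_hat, 3))
```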

  17. Reduced-Density-Matrix Description of Decoherence and Relaxation Processes for Electron-Spin Systems

    NASA Astrophysics Data System (ADS)

    Jacobs, Verne

    2017-04-01

    Electron-spin systems are investigated using a reduced-density-matrix description. Applications of interest include trapped atomic systems in optical lattices, semiconductor quantum dots, and vacancy defect centers in solids. Complementary time-domain (equation-of-motion) and frequency-domain (resolvent-operator) formulations are self-consistently developed. The general non-perturbative and non-Markovian formulations provide a fundamental framework for systematic evaluations of corrections to the standard Born (lowest-order-perturbation) and Markov (short-memory-time) approximations. Particular attention is given to decoherence and relaxation processes, as well as spectral-line broadening phenomena, that are induced by interactions with photons, phonons, nuclear spins, and external electric and magnetic fields. These processes are treated either as coherent interactions or as environmental interactions. The environmental interactions are incorporated by means of the general expressions derived for the time-domain and frequency-domain Liouville-space self-energy operators, for which the tetradic-matrix elements are explicitly evaluated in the diagonal-resolvent, lowest-order, and Markov (short-memory time) approximations. Work supported by the Office of Naval Research through the Basic Research Program at The Naval Research Laboratory.

  18. Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes

    NASA Astrophysics Data System (ADS)

    Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew

    2018-03-01

    We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to a SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6°N, 37.4°E). We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and scaled Inv-χ² hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimation, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the proposed method using a synthetic test case. Finally, we apply the method to a real GPS data set, which we decompose into regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies total electron content variability with corresponding error uncertainties.

  19. Using Markov Models of Fault Growth Physics and Environmental Stresses to Optimize Control Actions

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    A generalized Markov chain representation of fault dynamics is presented for the case that available modeling of fault growth physics and future environmental stresses can be represented by two independent stochastic process models. A contrived but representatively challenging example will be presented and analyzed, in which uncertainty in the modeling of fault growth physics is represented by a uniformly distributed dice throwing process, and a discrete random walk is used to represent uncertain modeling of future exogenous loading demands to be placed on the system. A finite horizon dynamic programming algorithm is used to solve for an optimal control policy over a finite time window for the case that stochastic models representing physics of failure and future environmental stresses are known, and the states of both stochastic processes are observable by implemented control routines. The fundamental limitations of optimization performed in the presence of uncertain modeling information are examined by comparing the outcomes obtained from simulations of an optimizing control policy with the outcomes that would be achievable if all modeling uncertainties were removed from the system.
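
    The optimization machinery described above is finite-horizon backward induction over a joint Markov chain whose two components (fault level and exogenous load) evolve independently. The sketch below illustrates that mechanism with hypothetical transition kernels, actions and costs; it is a generic dynamic-programming step, not the paper's model.

```python
# Finite-horizon backward induction over a joint Markov chain (fault level x
# exogenous load level). Transition matrices, costs, and actions are illustrative.
import numpy as np

n_fault, n_load, n_act, H = 4, 3, 2, 20           # states, actions, horizon

# Hypothetical fault-growth kernels per action (action 1 = derate, slows growth)
# and a load random walk; every row is stochastic.
P_fault = np.array([
    [[0.80, 0.20, 0.00, 0.00], [0.95, 0.05, 0.00, 0.00]],
    [[0.00, 0.75, 0.25, 0.00], [0.00, 0.92, 0.08, 0.00]],
    [[0.00, 0.00, 0.70, 0.30], [0.00, 0.00, 0.90, 0.10]],
    [[0.00, 0.00, 0.00, 1.00], [0.00, 0.00, 0.00, 1.00]],
]).transpose(1, 0, 2)                              # shape (action, fault, fault')
P_load = np.array([[0.6, 0.4, 0.0],
                   [0.3, 0.4, 0.3],
                   [0.0, 0.4, 0.6]])

def stage_cost(f, l, a):
    # failure penalty + derating cost + lost output under high load (all hypothetical)
    return 10.0 * (f == n_fault - 1) + 0.5 * a + 0.1 * l

V = np.zeros((n_fault, n_load))                    # terminal value
policy = np.zeros((H, n_fault, n_load), dtype=int)
for t in range(H - 1, -1, -1):
    Q = np.zeros((n_act, n_fault, n_load))
    for a in range(n_act):
        EV = P_fault[a] @ V @ P_load.T             # E[V(f', l') | f, l, a]
        for f in range(n_fault):
            for l in range(n_load):
                Q[a, f, l] = stage_cost(f, l, a) + EV[f, l]
    policy[t] = Q.argmin(axis=0)
    V = Q.min(axis=0)

print("optimal action at t=0:\n", policy[0])
```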

  20. Hierarchical Markov blankets and adaptive active inference. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Kirchhoff, Michael

    2018-03-01

    Ramstead MJD, Badcock PB, Friston KJ. Answering Schrödinger's question: A free-energy formulation. Phys Life Rev 2018. https://doi.org/10.1016/j.plrev.2017.09.001 [this issue] motivate a multiscale characterisation of living systems in terms of hierarchically structured Markov blankets - a view of living systems as comprised of Markov blankets of Markov blankets [1-4]. It is effectively a treatment of what life is and how it is realised, cast in terms of how Markov blankets of living systems self-organise via active inference - a corollary of the free energy principle [5-7].

  1. Modeling Hubble Space Telescope flight data by Q-Markov cover identification

    NASA Technical Reports Server (NTRS)

    Liu, K.; Skelton, R. E.; Sharkey, J. P.

    1992-01-01

    A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.

  2. Angular-Rate Estimation Using Delayed Quaternion Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I. Y.; Harman, R. R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared: in one, differentiated quaternion measurements yield coarse rate measurements, which are then fed into two different estimators; in the other, the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear part of the rotational dynamics equation of a body into a product of an angular-rate-dependent matrix and the angular-rate vector itself. This non-unique decomposition enables the treatment of the nonlinear spacecraft (SC) dynamics model as a linear one and, thus, the application of a PseudoLinear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) to compute the gain matrix, and thus eliminates the need to compute the filter covariance matrix recursively. The replacement of the rotational dynamics by a simple Markov model is also examined. In this paper special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data are used to test these algorithms, and results are presented.

  3. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    NASA Astrophysics Data System (ADS)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
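
    To make the idea concrete, the sketch below embeds a bootstrap particle filter's log-likelihood estimate inside a random-walk Metropolis step for a standard toy nonlinear state-space model, inferring only the process-noise variance. It is a bare-bones illustration of the particle Metropolis-Hastings mechanism under these simplifying assumptions, not the tutorial's own modeling-language implementation.

```python
# Compact particle Metropolis-Hastings for a toy nonlinear state-space model:
#   x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + 8 cos(1.2 t) + v_t,  v_t ~ N(0, q)
#   y_t = x_t^2 / 20 + e_t,                                             e_t ~ N(0, 1)
# Only q is inferred; everything else is fixed for brevity.
import numpy as np

rng = np.random.default_rng(2)
T, q_true = 100, 1.0
x, ys = 0.0, []
for t in range(1, T + 1):
    x = 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q_true))
    ys.append(x**2 / 20 + rng.normal(0, 1.0))

def log_lik_pf(q, ys, N=200):
    """Bootstrap particle filter estimate of log p(y_{1:T} | q)."""
    if q <= 0:
        return -np.inf
    xs = rng.normal(0, 1, N)
    ll = 0.0
    for t, y in enumerate(ys, start=1):
        xs = 0.5 * xs + 25 * xs / (1 + xs**2) + 8 * np.cos(1.2 * t) + rng.normal(0, np.sqrt(q), N)
        logw = -0.5 * (y - xs**2 / 20) ** 2            # N(0,1) obs noise, up to a constant
        c = logw.max()
        w = np.exp(logw - c)
        ll += c + np.log(w.mean())
        xs = xs[rng.choice(N, N, p=w / w.sum())]       # multinomial resampling
    return ll

# Random-walk Metropolis on q with the particle-filter likelihood plugged in.
q, ll = 2.0, log_lik_pf(2.0, ys)
chain = []
for _ in range(2000):
    q_prop = q + rng.normal(0, 0.2)
    ll_prop = log_lik_pf(q_prop, ys)
    if np.log(rng.uniform()) < ll_prop - ll:           # flat prior on q > 0
        q, ll = q_prop, ll_prop
    chain.append(q)
print("posterior mean of q ≈", np.mean(chain[500:]))
```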

  4. Path lumping: An efficient algorithm to identify metastable path channels for conformational dynamics of multi-body systems

    NASA Astrophysics Data System (ADS)

    Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui

    2017-07-01

    Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.

  5. Retrospective estimation of breeding phenology of American Goldfinch (Carduelis tristis) using pattern oriented modeling

    EPA Science Inventory

    Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...

  6. One Hundred Ways to be Non-Fickian - A Rigorous Multi-Variate Statistical Analysis of Pore-Scale Transport

    NASA Astrophysics Data System (ADS)

    Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko

    2015-04-01

    Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equation. This flow field realistically describes flow inside the pore space and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk and simulate pore-scale transport. Finally, we use the obtained particle trajectories to do a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful to observe correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) The distance where the statistical dependence between particle increments can be modelled as an order-one Markov process. This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal characteristic dependencies influencing transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear dependence as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the dependence found is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.

  7. A Unified Framework for Complex Networks with Degree Trichotomy Based on Markov Chains.

    PubMed

    Hui, David Shui Wing; Chen, Yi-Chao; Zhang, Gong; Wu, Weijie; Chen, Guanrong; Lui, John C S; Li, Yingtao

    2017-06-16

    This paper establishes a Markov chain model as a unified framework for describing the evolution processes in complex networks. The unique feature of the proposed model is its capability in addressing the formation mechanism that can reflect the "trichotomy" observed in degree distributions, based on which closed-form solutions can be derived. Important special cases of the proposed unified framework are those classical models, including Poisson, Exponential, and Power-law distributed networks. Both simulation and experimental results demonstrate a good match of the proposed model with real datasets, showing its superiority over the classical models. Implications of the model for various applications, including citation analysis, online social networks, and vehicular network design, are also discussed in the paper.

  8. Stochastic Modeling based on Dictionary Approach for the Generation of Daily Precipitation Occurrences

    NASA Astrophysics Data System (ADS)

    Panu, U. S.; Ng, W.; Rasmussen, P. F.

    2009-12-01

    The modeling of weather states (i.e., precipitation occurrences) is critical when the historical data are not long enough for the desired analysis. Stochastic models (e.g., Markov chain and Alternating Renewal Process (ARP) models) of the precipitation occurrence process generally assume the existence of short-term temporal dependency between neighboring states while implying long-term independency (randomness) of states in precipitation records. Existing temporal-dependency models for the generation of precipitation occurrences are restricted either by a fixed-length memory (e.g., the order of a Markov chain model) or by the reigning states in segments (e.g., persistency of homogeneous states within dry/wet-spell lengths of an ARP). The modeling of variable segment lengths and states can be an arduous task, and a flexible modeling approach is required for the preservation of the various segmented patterns of precipitation data series. An innovative Dictionary approach has been developed in the field of genome pattern recognition for the identification of frequently occurring genome segments in DNA sequences. The genome segments delineate biologically meaningful "words" (i.e., segments with a specific pattern in a series of discrete states) that can be jointly modeled with variable lengths and states. A meaningful "word", in hydrology, can refer to a segment of precipitation occurrence comprising wet or dry states. Such flexibility provides a unique advantage over traditional stochastic models for the generation of precipitation occurrences. Three stochastic models, namely the alternating renewal process using a Geometric distribution, the second-order Markov chain model, and the Dictionary approach, have been assessed to evaluate their efficacy for the generation of daily precipitation sequences. Comparisons involved three guiding principles, namely (i) the ability of the models to preserve the short-term temporal dependency in the data through the concepts of autocorrelation, average mutual information, and the Hurst exponent, (ii) the ability of the models to preserve the persistency within homogeneous dry/wet weather states through analysis of dry/wet-spell lengths in the observed and generated data, and (iii) the goodness-of-fit of the models through likelihood-based estimates (i.e., AIC and BIC). The past 30 years of observed daily precipitation records from 10 Canadian meteorological stations were utilized for comparative analyses of the three models. In general, the Markov chain model performed well; the remaining models were found to be competitive with one another depending upon the scope and purpose of the comparison. Although the Markov chain model has a certain advantage in the generation of daily precipitation occurrences, the structural flexibility offered by the Dictionary approach in modeling the varied segment lengths of heterogeneous weather states provides a distinct and powerful advantage in the generation of precipitation sequences.
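
    For reference, the second-order Markov chain baseline in such a comparison can be fitted and simulated in a few lines: estimate P(wet today | states of the two previous days) from a binary series and then generate synthetic occurrences. The series below is synthetic, and the spell-length check is only a quick diagnostic, not the full evaluation protocol described above.

```python
# Second-order Markov chain for daily wet/dry occurrence: fit transition
# probabilities P(wet_t | state_{t-2}, state_{t-1}) from a binary series,
# then generate a synthetic sequence. The "observed" series here is synthetic.
import numpy as np

rng = np.random.default_rng(3)
obs = (rng.uniform(size=30 * 365) < 0.35).astype(int)       # stand-in for 30 years of data

def fit_order2(seq):
    counts = np.zeros((2, 2, 2))
    for a, b, c in zip(seq[:-2], seq[1:-1], seq[2:]):
        counts[a, b, c] += 1
    return counts[:, :, 1] / counts.sum(axis=2)              # P(wet | prev2, prev1)

def generate(p_wet, n, init=(0, 0)):
    s = list(init)
    for _ in range(n - 2):
        s.append(int(rng.uniform() < p_wet[s[-2], s[-1]]))
    return np.array(s)

def spell_lengths(seq, state):
    runs, run = [], 0
    for v in seq:
        if v == state:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

p_wet = fit_order2(obs)
sim = generate(p_wet, n=365)
print("mean dry spell (obs, sim):",
      np.mean(spell_lengths(obs, 0)), np.mean(spell_lengths(sim, 0)))
```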

  9. Combining experimental and simulation data of molecular processes via augmented Markov models.

    PubMed

    Olsson, Simon; Wu, Hao; Paul, Fabian; Clementi, Cecilia; Noé, Frank

    2017-08-01

    Accurate mechanistic description of structural changes in biomolecules is an increasingly important topic in structural and chemical biology. Markov models have emerged as a powerful way to approximate the molecular kinetics of large biomolecules while keeping full structural resolution in a divide-and-conquer fashion. However, the accuracy of these models is limited by that of the force fields used to generate the underlying molecular dynamics (MD) simulation data. Whereas the quality of classical MD force fields has improved significantly in recent years, remaining errors in the Boltzmann weights are still on the order of a few [Formula: see text], which may lead to significant discrepancies when comparing to experimentally measured rates or state populations. Here we take the view that simulations using a sufficiently good force-field sample conformations that are valid but have inaccurate weights, yet these weights may be made accurate by incorporating experimental data a posteriori. To do so, we propose augmented Markov models (AMMs), an approach that combines concepts from probability theory and information theory to consistently treat systematic force-field error and statistical errors in simulation and experiment. Our results demonstrate that AMMs can reconcile conflicting results for protein mechanisms obtained by different force fields and correct for a wide range of stationary and dynamical observables even when only equilibrium measurements are incorporated into the estimation process. This approach constitutes a unique avenue to combine experiment and computation into integrative models of biomolecular structure and dynamics.

  10. LECTURES ON GAME THEORY, MARKOV CHAINS, AND RELATED TOPICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, G L

    1958-03-01

    Notes on nine lectures delivered at Sandia Corporation in August 1957 are given. Part one contains the manuscript of a paper concerning a judging problem. Part two is concerned with finite Markov-chain theory and discusses regular Markov chains, absorbing Markov chains, the classification of states, application to the Leontief input-output model, and semimartingales. Part three contains notes on game theory and covers matrix games, the effect of psychological attitudes on the outcomes of games, extensive games, and matrix theory applied to mathematical economics. (auth)

  11. Markov chains: computing limit existence and approximations with DNA.

    PubMed

    Cardona, M; Colomer, M A; Conde, J; Miret, J M; Miró, J; Zaragoza, A

    2005-09-01

    We present two algorithms to perform computations over Markov chains. The first one determines whether the sequence of powers of the transition matrix of a Markov chain converges or not to a limit matrix. If it does converge, the second algorithm enables us to estimate this limit. The combination of these algorithms allows the computation of a limit using DNA computing. In this sense, we have encoded the states and the transition probabilities using strands of DNA for generating paths of the Markov chain.
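
    In a conventional numerical setting, the same two questions (does the sequence of powers of the transition matrix converge, and if so, to what limit) can be answered by iterating the matrix product until it stabilizes. The sketch below does exactly that for a small aperiodic chain and for a periodic chain with no limit; it is a plain floating-point analogue, not the DNA-computing encoding described above.

```python
# Conventional numerical analogue of the two computations described above:
# (1) decide whether P^n converges as n grows, (2) if so, estimate the limit.
import numpy as np

def limit_of_powers(P, tol=1e-10, max_iter=10_000):
    Q = np.eye(P.shape[0])
    for _ in range(max_iter):
        Q_next = Q @ P
        if np.max(np.abs(Q_next - Q)) < tol:
            return Q_next                 # converged: rows are limiting distributions
        Q = Q_next
    return None                           # no convergence (e.g. a periodic chain)

P_aperiodic = np.array([[0.5, 0.5, 0.0],
                        [0.2, 0.5, 0.3],
                        [0.0, 0.4, 0.6]])
P_periodic = np.array([[0.0, 1.0],
                       [1.0, 0.0]])

print(limit_of_powers(P_aperiodic))       # converges to a rank-one matrix
print(limit_of_powers(P_periodic))        # None: powers oscillate
```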

  12. Markov models in dentistry: application to resin-bonded bridges and review of the literature.

    PubMed

    Mahl, Dominik; Marinello, Carlo P; Sendi, Pedram

    2012-10-01

    Markov models are mathematical models that can be used to describe disease progression and evaluate the cost-effectiveness of medical interventions. Markov models allow clinical and economic outcomes to be projected into the future and are therefore frequently used to estimate long-term outcomes of medical interventions. The purpose of this paper is to demonstrate their use in dentistry, using the example of resin-bonded bridges to replace missing teeth, and to review the literature. We used literature data and a four-state Markov model to project long-term outcomes of resin-bonded bridges over a time horizon of 60 years. In addition, the literature was searched in PubMed Medline for research articles on the application of Markov models in dentistry.
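
    Mechanically, such a projection is a cohort-level matrix recursion: a state-occupancy vector is multiplied by a yearly transition matrix and per-state costs are accumulated, typically with discounting. The four states, transition probabilities, costs and discount rate below are purely hypothetical placeholders chosen to show the mechanics, not values taken from the dental literature or from this paper.

```python
# Projection mechanics of a four-state Markov cohort model over 60 one-year
# cycles. States, transition probabilities, and costs are hypothetical.
import numpy as np

states = ["bridge intact", "rebonded", "replaced", "tooth lost"]
P = np.array([
    [0.93, 0.04, 0.02, 0.01],
    [0.00, 0.90, 0.07, 0.03],
    [0.00, 0.00, 0.95, 0.05],
    [0.00, 0.00, 0.00, 1.00],            # absorbing state
])
cost = np.array([0.0, 150.0, 900.0, 400.0])   # hypothetical cost per cycle in each state

dist = np.array([1.0, 0.0, 0.0, 0.0])          # cohort starts with an intact bridge
total_cost, discount = 0.0, 0.03
for year in range(1, 61):
    dist = dist @ P                            # state occupancy after this cycle
    total_cost += (dist @ cost) / (1 + discount) ** year

print(dict(zip(states, np.round(dist, 3))))
print("expected discounted cost per patient:", round(total_cost, 1))
```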

  13. The generalization ability of SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang

    2015-06-01

    Previously known works studying the generalization ability of the support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also yields sparser classifiers when the size of the dataset is large relative to the input dimension.

  14. Manipulating acoustic wave reflection by a nonlinear elastic metasurface

    NASA Astrophysics Data System (ADS)

    Guo, Xinxin; Gusev, Vitalyi E.; Bertoldi, Katia; Tournat, Vincent

    2018-03-01

    The acoustic wave reflection properties of a nonlinear elastic metasurface, derived from resonant nonlinear elastic elements, are theoretically and numerically studied. The metasurface is composed of a two degree-of-freedom mass-spring system with quadratic elastic nonlinearity. The possibility of converting, during the reflection process, most of the fundamental incoming wave energy into the second harmonic wave is shown, both theoretically and numerically, by means of a proper design of the nonlinear metasurface. The theoretical results from the harmonic balance method for a monochromatic source are compared with time domain simulations for a wave packet source. This protocol allows analyzing the dynamics of the nonlinear reflection process in the metasurface as well as exploring the limits of the operating frequency bandwidth. The reported methodology can be applied to a wide variety of nonlinear metasurfaces, thus possibly extending the family of exotic nonlinear reflection processes.

  15. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with the Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of cache locality. This optimization, together with an improved loading of the emission scores, allows a constant processing throughput to be achieved, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder on DNA and protein datasets, proving to be a competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup of up to two times, depending on the model's size.

  16. Hidden Markov Models as a tool to measure pilot attention switching during simulated ILS approaches

    DOT National Transportation Integrated Search

    2003-04-14

    The pilot's instrument scanning data contain information about not only the pilot's eye movements, but also the pilot's cognitive process during flight. However, it is often difficult to interpret the scanning data at the cognitive level because...

  17. Inference of epidemiological parameters from household stratified data

    PubMed Central

    Walker, James N.; Ross, Joshua V.

    2017-01-01

    We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456

  18. Coherent-Anomaly Method in Self-Avoiding Walk Problems

    NASA Astrophysics Data System (ADS)

    Hu, Xiao; Suzuki, Masuo

    Self-avoiding walk (SAW), being a nonequilibrium cooperative phenomenon, is investigated with a finite-order-restricted-walk (finite-ORW or FORW) coherent-anomaly method (CAM). The coefficient β_1r in the asymptotic form C_n^r ≃ β_1r λ_1r^n for the total number C_n^r of r-ORWs with respect to the step number n is investigated for the first time. An asymptotic form for SAWs is thus obtained from the series of FORW approximants, C_n^r ≃ b r^g μ^n (1 + a/r)^n, as the envelope curve C_n ≃ b (ae/g)^g μ^n n^g. Numerical results are given by C_n ≃ 1.424 n^0.2788 (4.1507)^n and C_n ≃ 1.179 n^0.1587 (10.005)^n for the plane triangular lattice and f.c.c. lattice, respectively. The good agreement of the total numbers estimated from these simple formulae with exact enumerations for finite-step SAWs implies that the essential nature of SAW (a non-Markov process) can be understood from FORW (a Markov process) in the CAM framework.

  19. A TWO-STATE MIXED HIDDEN MARKOV MODEL FOR RISKY TEENAGE DRIVING BEHAVIOR

    PubMed Central

    Jackson, John C.; Albert, Paul S.; Zhang, Zhiwei

    2016-01-01

    This paper proposes a joint model for longitudinal binary and count outcomes. We apply the model to a unique longitudinal study of teen driving where risky driving behavior and the occurrence of crashes or near crashes are measured prospectively over the first 18 months of licensure. Of scientific interest is relating the two processes and predicting crash and near crash outcomes. We propose a two-state mixed hidden Markov model whereby the hidden state characterizes the mean for the joint longitudinal crash/near crash outcomes and elevated g-force events which are a proxy for risky driving. Heterogeneity is introduced in both the conditional model for the count outcomes and the hidden process using a shared random effect. An estimation procedure is presented using the forward–backward algorithm along with adaptive Gaussian quadrature to perform numerical integration. The estimation procedure readily yields hidden state probabilities as well as providing for a broad class of predictors. PMID:27766124

  20. Detecting synchronization clusters in multivariate time series via coarse-graining of Markov chains.

    PubMed

    Allefeld, Carsten; Bialonski, Stephan

    2007-12-01

    Synchronization cluster analysis is an approach to the detection of underlying structures in data sets of multivariate time series, starting from a matrix R of bivariate synchronization indices. A previous method utilized the eigenvectors of R for cluster identification, analogous to several recent attempts at group identification using eigenvectors of the correlation matrix. All of these approaches assumed a one-to-one correspondence of dominant eigenvectors and clusters, which has however been shown to be wrong in important cases. We clarify the usefulness of eigenvalue decomposition for synchronization cluster analysis by translating the problem into the language of stochastic processes, and derive an enhanced clustering method harnessing recent insights from the coarse-graining of finite-state Markov processes. We illustrate the operation of our method using a simulated system of coupled Lorenz oscillators, and we demonstrate its superior performance over the previous approach. Finally we investigate the question of robustness of the algorithm against small sample size, which is important with regard to field applications.
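
    A minimal sketch of the eigenvector-based ingredient discussed above, under simplifying assumptions: a symmetric synchronization matrix R is normalized into a row-stochastic matrix, and the sign structure of its second-largest eigenvector separates two synchronization clusters (for more clusters one would cluster states in the space of several leading eigenvectors). This is a generic stand-in, not the authors' enhanced algorithm.

```python
# Toy eigenvector-based coarse-graining of a Markov chain built from a
# bivariate synchronization matrix R. R is synthetic, with two clusters
# (channels 0-4 and 5-9) planted by construction.
import numpy as np

rng = np.random.default_rng(4)
R = 0.1 + 0.05 * rng.random((10, 10))
R[:5, :5] += 0.6
R[5:, 5:] += 0.6
R = 0.5 * (R + R.T)
np.fill_diagonal(R, 1.0)

P = R / R.sum(axis=1, keepdims=True)      # row-stochastic "random walk" on channels
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
v2 = evecs[:, order[1]].real              # second eigenvector: slowest non-trivial mode
labels = (v2 > np.median(v2)).astype(int) # two metastable synchronization clusters
print("cluster labels:", labels)
```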

  1. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are fed to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM performs the recognition. At this point, variation is introduced through the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  2. Novel probabilistic and distributed algorithms for guidance, control, and nonlinear estimation of large-scale multi-agent systems

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Saptarshi

    Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm, Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC), each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm, Probabilistic Swarm Guidance using Optimal Transport (PSG-OT), each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. We demonstrate the effectiveness of these two proposed swarm guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time-scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme. Thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations.
In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder. If the captured object is larger or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque. The proposed nonlinear tracking control law guarantees global exponential convergence of tracking errors with finite-gain Lp stability in the presence of modeling uncertainties and disturbances, and reduces the resultant disturbance torque. Further, this control law permits the use of any attitude representation and its integral control formulation eliminates any constant disturbance. Under small uncertainties, the best strategy for stabilizing the combined system is to track a fuel-optimal reference trajectory using this nonlinear control law, because it consumes the least amount of fuel. In the presence of large uncertainties, the most effective strategy is to track the derivative plus proportional-derivative based reference trajectory, because it reduces the resultant disturbance torque. The effectiveness of the proposed attitude control law is demonstrated by using results of numerical simulation based on an Asteroid Redirect Mission concept. The new algorithms proposed in this dissertation will facilitate the development of versatile autonomous multi-agent systems that are capable of performing a variety of complex tasks in a robust and scalable manner.

  3. Design of nonlinear PID controller and nonlinear model predictive controller for a continuous stirred tank reactor.

    PubMed

    Prakash, J; Srinivasan, K

    2009-07-01

    In this paper, the authors represent the nonlinear system as a family of local linear state-space models, design local PID controllers on the basis of these linear models, and use the weighted sum of the outputs of the local PID controllers (the nonlinear PID controller) to control the nonlinear process. Further, a Nonlinear Model Predictive Controller using the family of local linear state-space models (F-NMPC) has been developed. The effectiveness of the proposed control schemes is demonstrated on a CSTR process, which exhibits dynamic nonlinearity.

  4. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    Traffic video is dynamic: its background and foreground change continually, which results in occlusion. In this case, general-purpose methods struggle to produce accurate image segmentation. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and of the label field for the motion image sequence, which has the Markov property. According to Bayes' rule, the interaction of the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, yields the maximum a posteriori estimate of the label field parameters; the ICM algorithm is then used to extract the moving object, completing the segmentation. Finally, the ST-MRF method and the Bayesian method combined with ST-MRF were compared. Experimental results show that the Bayesian method combined with ST-MRF requires less segmentation time and less computation than ST-MRF alone, and achieves better segmentation, especially in heavy-traffic dynamic scenes.

  5. Modeling Driver Behavior near Intersections in Hidden Markov Model

    PubMed Central

    Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei

    2016-01-01

    Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers’ behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents. PMID:28009838
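
    A minimal numpy sketch of the Viterbi step referred to above, decoding the most likely hidden driver-state sequence from a discrete observation sequence; the two states, three observation symbols, and all probabilities are made-up illustrations, not the calibrated matrices from the study:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for observation indices `obs` (log-domain)."""
    n_states, T = A.shape[0], len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])          # delta at t = 0
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)            # scores[i, j]: arrive in j from i
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(back[t, path[-1]])
    return path[::-1]

pi = np.array([0.6, 0.4])                             # e.g. [stable, risky] driver states
A  = np.array([[0.8, 0.2], [0.3, 0.7]])               # state transition matrix
B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])     # observation matrix (3 symbols)
print(viterbi([0, 1, 2, 2, 1], pi, A, B))
```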

  6. Advanced techniques in reliability model representation and solution

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  7. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.
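
    For context, a sketch of the constant-propensity (Markov) baseline that the CME above generalizes: an exact Gillespie simulation of a birth-death reaction with illustrative rate constants. The creation-time-dependent propensities developed in the paper are not implemented here:

```python
import numpy as np

def gillespie_birth_death(k_birth=10.0, k_death=0.5, x0=0, t_end=20.0, seed=0):
    """Exact simulation of  0 -> X  (rate k_birth)  and  X -> 0  (rate k_death * x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])     # time-independent propensities (Markov assumption)
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)           # waiting time to the next reaction
        x += 1 if rng.random() < a[0] / a0 else -1
        times.append(t); counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
```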

  8. Multiframe video coding for improved performance over wireless channels.

    PubMed

    Budagavi, M; Gibson, J D

    2001-01-01

    We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single-frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.

  9. Poisson-Gaussian Noise Reduction Using the Hidden Markov Model in Contourlet Domain for Fluorescence Microscopy Images

    PubMed Central

    Yang, Sejung; Lee, Byung-Uk

    2015-01-01

    In certain image acquisition processes, such as fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal-dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal-independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform, which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies, which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach. PMID:26352138

  10. Investigation on the effect of nonlinear processes on similarity law in high-pressure argon discharges

    NASA Astrophysics Data System (ADS)

    Fu, Yangyang; Parsey, Guy M.; Verboncoeur, John P.; Christlieb, Andrew J.

    2017-11-01

    In this paper, the effect of nonlinear processes (such as three-body collisions and stepwise ionizations) on the similarity law in high-pressure argon discharges has been studied by the use of the Kinetic Global Model framework. In the discharge model, the ground state argon atoms (Ar), electrons (e), atom ions (Ar+), molecular ions (Ar2+), and fourteen argon excited levels Ar*(4s and 4p) are considered. The steady-state electron and ion densities are obtained with nonlinear processes included and excluded in the designed models, respectively. It is found that in similar gas gaps, keeping the product of gas pressure and linear dimension unchanged, with the nonlinear processes included, the normalized density relations deviate from the similarity relations gradually as the scale-up factor decreases. Without the nonlinear processes, the parameter relations are in good agreement with the similarity law predictions. Furthermore, the pressure and the dimension effects are also investigated separately with and without the nonlinear processes. It is shown that the gas pressure effect on the results is less obvious than the dimension effect. Without the nonlinear processes, the pressure and the dimension effects could be estimated from one to the other based on the similarity relations.

  11. Stochastic Adaptive Estimation and Control.

    DTIC Science & Technology

    1994-10-26

    Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems ," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems ; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete- time Markov processes. Our exposition covers from finite to Borel state and action spaces and

  12. [Succession caused by beaver (Castor fiber L.) life activity: I. What is learnt from the calibration of a simple Markov model].

    PubMed

    Logofet, D O; Evstigneev, O I; Aleĭnikov, A A; Morozova, A O

    2014-01-01

    A homogeneous Markov chain of three aggregated states "pond--swamp--wood" is proposed as a model of cyclic zoogenic successions caused by beaver (Castor fiber L.) life activity in a forest biogeocoenosis. To calibrate the chain transition matrix, the data gained from field studies undertaken in the "Bryanskii Les" Reserve in the years 2002-2008 have proved sufficient. Major outcomes of the calibrated model ensue from the formulae of finite homogeneous Markov chain theory: the stationary probability distribution of states, the matrix (T) of mean first passage times, and the mean durations (M(j)) of succession stages. The former illustrates the distribution of relative areas under succession stages if the current trends and transition rates of succession are conserved in the long term; it has appeared close to the observed distribution. Matrix T provides quantitative characteristics of the cyclic process, specifying the ranges the experts proposed for the duration of stages in the conceptual scheme of succession. The calculated values of M(j) detect potential discrepancies between empirical data, the expert knowledge that summarizes the data, and the postulates accepted in the mathematical model. The calculated M(2) value falls outside the expert range, which gives a reason to doubt the validity of the expert estimation proposed, the aggregation mode chosen for chain states, and/or the accuracy of the data available, i.e., to draw certain "lessons" from the partially successful calibration. Refusal to postulate the time homogeneity or the Markov property of the chain is also discussed among possible ways to improve the model.
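
    A sketch of the standard finite-chain formulae the abstract relies on: the stationary distribution and the Kemeny-Snell mean first passage times for a three-state chain. The transition matrix below is an arbitrary illustration, not the calibrated pond-swamp-wood matrix:

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],     # illustrative 3-state transition matrix (rows sum to 1)
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

# Kemeny-Snell mean first passage times: m_ij = (z_jj - z_ij) / pi_j, with m_jj = 1 / pi_j.
n = P.shape[0]
W = np.tile(pi, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)            # fundamental matrix
M = (np.diag(Z)[None, :] - Z) / pi[None, :]
np.fill_diagonal(M, 1.0 / pi)                   # mean recurrence times on the diagonal

mean_stage_duration = 1.0 / (1.0 - np.diag(P))  # expected sojourn (in steps) before leaving each state
```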

  13. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    PubMed

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, which is capable of assuming an increasing as well as a bathtub shape, is studied. This article makes a Bayesian study of the model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference interest focuses on the posterior distribution of non-linear functions of the parameters. The model has also been extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.

  14. Analysing child mortality in Nigeria with geoadditive discrete-time survival models.

    PubMed

    Adebayo, Samson B; Fahrmeir, Ludwig

    2005-03-15

    Child mortality reflects a country's level of socio-economic development and quality of life. In developing countries, mortality rates are not only influenced by socio-economic, demographic and health variables but they also vary considerably across regions and districts. In this paper, we analysed child mortality in Nigeria with flexible geoadditive discrete-time survival models. This class of models allows us to measure small-area district-specific spatial effects simultaneously with possibly non-linear or time-varying effects of other factors. Inference is fully Bayesian and uses computationally efficient Markov chain Monte Carlo (MCMC) simulation techniques. The application is based on the 1999 Nigeria Demographic and Health Survey. Our method assesses effects at a high level of temporal and spatial resolution not available with traditional parametric models, and the results provide some evidence on how to reduce child mortality by improving socio-economic and public health conditions. Copyright (c) 2004 John Wiley & Sons, Ltd.

  15. Relaxation rates of gene expression kinetics reveal the feedback signs of autoregulatory gene networks

    NASA Astrophysics Data System (ADS)

    Jia, Chen; Qian, Hong; Chen, Min; Zhang, Michael Q.

    2018-03-01

    The transient response to a stimulus and subsequent recovery to a steady state are the fundamental characteristics of a living organism. Here we study the relaxation kinetics of autoregulatory gene networks based on the chemical master equation model of single-cell stochastic gene expression with nonlinear feedback regulation. We report a novel relation between the rate of relaxation, characterized by the spectral gap of the Markov model, and the feedback sign of the underlying gene circuit. When a network has no feedback, the relaxation rate is exactly the decaying rate of the protein. We further show that positive feedback always slows down the relaxation kinetics while negative feedback always speeds it up. Numerical simulations demonstrate that this relation provides a possible method to infer the feedback topology of autoregulatory gene networks by using time-series data of gene expression.
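
    A sketch of the spectral-gap computation referenced above for the simplest no-feedback case, a birth-death model of protein copy number with illustrative rates; with no feedback the gap should equal the protein decay rate, which the snippet checks numerically:

```python
import numpy as np

k_on, gamma, N = 5.0, 1.0, 60        # illustrative synthesis rate, decay rate, state-space truncation

# Generator of the birth-death chain on protein copy number 0..N (no feedback).
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = k_on           # synthesis
    if n > 0:
        Q[n, n - 1] = n * gamma      # first-order degradation
    Q[n, n] = -Q[n].sum()            # rows of a generator sum to zero

evals = np.sort(np.real(np.linalg.eigvals(Q)))[::-1]   # 0 = evals[0] > evals[1] >= ...
spectral_gap = -evals[1]             # relaxation rate; approximately gamma in the no-feedback case
print(spectral_gap)
```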

  16. Stochastic Optimal Control via Bellman's Principle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Sun, Jian Q.

    2003-01-01

    This paper presents a method for finding optimal controls of nonlinear systems subject to random excitations. The method is capable of generating global control solutions when state and control constraints are present. The solution is global in the sense that controls for all initial conditions in a region of the state space are obtained. The approach is based on Bellman's Principle of optimality, the Gaussian closure and the short-time Gaussian approximation. Examples include a system with a state-dependent diffusion term, a system in which the infinite hierarchy of moment equations cannot be analytically closed, and an impact system with an elastic boundary. The uncontrolled and controlled dynamics are studied by creating a Markov chain with a control-dependent transition probability matrix via the Generalized Cell Mapping method. In this fashion, both the transient and stationary controlled responses are evaluated. The results show excellent control performance.
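
    A sketch of the underlying dynamic-programming step: value iteration over a discretized state space with a control-dependent transition matrix. The cell-mapping construction and Gaussian closure of the paper are not reproduced; the chain and costs below are toy, randomly generated assumptions:

```python
import numpy as np

n_states, n_controls, gamma = 20, 3, 0.95
rng = np.random.default_rng(1)

# Control-dependent transition matrices P[u] and stage costs cost[u] (toy values).
P = rng.random((n_controls, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
cost = rng.random((n_controls, n_states))

V = np.zeros(n_states)
for _ in range(500):                                  # value iteration (Bellman backups)
    Q = cost + gamma * P @ V                          # Q[u, s] = c(u, s) + gamma * E[V(s')]
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmin(axis=0)                             # optimal control index for each cell/state
```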

  17. Pigouvian taxation of energy for flow and stock externalities and strategic, noncompetitive energy pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirl, F.

    1994-01-01

    The literature on energy and carbon taxes is by and large concerned with the derivation of (globally) efficient strategies. In contrast, this paper considers the dynamic interactions between cartelized energy suppliers and a consumers' government that collectively taxes energy carriers for Pigouvian motives. Two different kinds of external costs are associated with energy consumption: flow (e.g., acid rain) and stock externalities (e.g., global warming). The dynamic interactions between a consumers' government and a producers' cartel are modeled as a differential game with a subgame perfect Nash equilibrium in linear and nonlinear Markov strategies. The major implications are that the nonlinear solutions are Pareto-inferior to the linear strategies and that energy suppliers may preempt energy taxation and thereby raise the price up front; however, this effect diminishes over time because the producers' price declines, while taxes increase. 22 refs., 5 figs., 1 tab.

  18. Rapid assessment of nonlinear optical propagation effects in dielectrics

    PubMed Central

    Hoyo, J. del; de la Cruz, A. Ruiz; Grace, E.; Ferrer, A.; Siegel, J.; Pasquazi, A.; Assanto, G.; Solis, J.

    2015-01-01

    Ultrafast laser processing applications need fast approaches to assess the nonlinear propagation of the laser beam in order to predict the optimal range of processing parameters in a wide variety of cases. We develop here a method based on the simple monitoring of the nonlinear beam shaping against numerical prediction. The numerical code solves the nonlinear Schrödinger equation with nonlinear absorption under simplified conditions by employing a state-of-the-art computationally efficient approach. By comparing with experimental results we can rapidly estimate the nonlinear refractive index and nonlinear absorption coefficients of the material. The validity of this approach has been tested in a variety of experiments where nonlinearities play a key role, like spatial soliton shaping or fs-laser waveguide writing. The approach provides excellent results for propagated power densities for which free carrier generation effects can be neglected. Above such a threshold, the peculiarities of the nonlinear propagation of elliptical beams enable acquiring an instantaneous picture of the deposition of energy inside the material realistic enough to estimate the effective nonlinear refractive index and nonlinear absorption coefficients that can be used for predicting the spatial distribution of energy deposition inside the material and controlling the beam in the writing process. PMID:25564243

  19. Rapid assessment of nonlinear optical propagation effects in dielectrics.

    PubMed

    del Hoyo, J; de la Cruz, A Ruiz; Grace, E; Ferrer, A; Siegel, J; Pasquazi, A; Assanto, G; Solis, J

    2015-01-07

    Ultrafast laser processing applications need fast approaches to assess the nonlinear propagation of the laser beam in order to predict the optimal range of processing parameters in a wide variety of cases. We develop here a method based on the simple monitoring of the nonlinear beam shaping against numerical prediction. The numerical code solves the nonlinear Schrödinger equation with nonlinear absorption under simplified conditions by employing a state-of-the-art computationally efficient approach. By comparing with experimental results we can rapidly estimate the nonlinear refractive index and nonlinear absorption coefficients of the material. The validity of this approach has been tested in a variety of experiments where nonlinearities play a key role, like spatial soliton shaping or fs-laser waveguide writing. The approach provides excellent results for propagated power densities for which free carrier generation effects can be neglected. Above such a threshold, the peculiarities of the nonlinear propagation of elliptical beams enable acquiring an instantaneous picture of the deposition of energy inside the material realistic enough to estimate the effective nonlinear refractive index and nonlinear absorption coefficients that can be used for predicting the spatial distribution of energy deposition inside the material and controlling the beam in the writing process.

  20. Rapid assessment of nonlinear optical propagation effects in dielectrics

    NASA Astrophysics Data System (ADS)

    Hoyo, J. Del; de La Cruz, A. Ruiz; Grace, E.; Ferrer, A.; Siegel, J.; Pasquazi, A.; Assanto, G.; Solis, J.

    2015-01-01

    Ultrafast laser processing applications need fast approaches to assess the nonlinear propagation of the laser beam in order to predict the optimal range of processing parameters in a wide variety of cases. We develop here a method based on the simple monitoring of the nonlinear beam shaping against numerical prediction. The numerical code solves the nonlinear Schrödinger equation with nonlinear absorption under simplified conditions by employing a state-of-the-art computationally efficient approach. By comparing with experimental results we can rapidly estimate the nonlinear refractive index and nonlinear absorption coefficients of the material. The validity of this approach has been tested in a variety of experiments where nonlinearities play a key role, like spatial soliton shaping or fs-laser waveguide writing. The approach provides excellent results for propagated power densities for which free carrier generation effects can be neglected. Above such a threshold, the peculiarities of the nonlinear propagation of elliptical beams enable acquiring an instantaneous picture of the deposition of energy inside the material realistic enough to estimate the effective nonlinear refractive index and nonlinear absorption coefficients that can be used for predicting the spatial distribution of energy deposition inside the material and controlling the beam in the writing process.

  1. Building Simple Hidden Markov Models. Classroom Notes

    ERIC Educational Resources Information Center

    Ching, Wai-Ki; Ng, Michael K.

    2004-01-01

    Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.

  2. Using Games to Teach Markov Chains

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2003-01-01

    Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
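
    A sketch of the standard absorbing-chain calculation behind such game-length results: the fundamental matrix N = (I - Q)^(-1) gives the expected number of moves to absorption. The tiny "board" below is a made-up example, not one of the games analysed in the article:

```python
import numpy as np

# Toy 5-square race game: from square i, a fair coin moves you 1 or 2 squares forward;
# square 4 is the winning (absorbing) state.  Transient states are squares 0..3.
P = np.zeros((5, 5))
for i in range(4):
    P[i, min(i + 1, 4)] += 0.5
    P[i, min(i + 2, 4)] += 0.5
P[4, 4] = 1.0

Q = P[:4, :4]                                   # transient-to-transient block
N = np.linalg.inv(np.eye(4) - Q)                # fundamental matrix
expected_moves = N @ np.ones(4)                 # expected number of moves from each start square
print(expected_moves[0])                        # expected game length starting from square 0
```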

  3. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
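
    A worked sketch of the large-deviation quantities mentioned above for a two-state chain: the scaled cumulant generating function is the log of the largest eigenvalue of a "tilted" transition matrix, and a numerical Legendre transform gives the rate function for a time-additive observable (here, the fraction of time spent in state 1). The chain itself is an arbitrary illustration, not the active-matter model of the paper:

```python
import numpy as np

P = np.array([[0.9, 0.1],        # illustrative two-state transition matrix
              [0.2, 0.8]])
g = np.array([0.0, 1.0])         # observable increment: 1 whenever the chain is in state 1

def scgf(s):
    """Scaled cumulant generating function: log largest eigenvalue of the tilted matrix."""
    tilted = P * np.exp(s * g)[None, :]          # tilt column j by exp(s * g(j))
    return np.log(np.max(np.real(np.linalg.eigvals(tilted))))

# Rate function I(a) = sup_s [ s*a - lambda(s) ] via a crude numerical Legendre transform.
s_grid = np.linspace(-20, 20, 4001)
lam = np.array([scgf(s) for s in s_grid])

def rate(a):
    return np.max(s_grid * a - lam)

for a in (0.1, 1 / 3, 0.6):
    print(a, rate(a))            # I(a) is ~0 at the typical value (stationary weight of state 1)
```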

  4. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  5. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    NASA Astrophysics Data System (ADS)

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    Markov chains have been used since 1913 for studying the flow of data across consecutive years and for forecasting. An important feature of a Markov chain analysis is obtaining an accurate transition probability matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing an exponential smoothing technique for developing an appropriate TPM.
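
    A sketch of one plausible reading of the enhancement described above, assuming it amounts to estimating a TPM from counts for each year and exponentially smoothing the successive yearly TPMs; the smoothing constant, two-state coding, and data are illustrative assumptions, not the paper's case-study values:

```python
import numpy as np

def estimate_tpm(states, n_states):
    """Maximum-likelihood TPM from one observed state sequence (row-normalised counts)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    counts += 1e-9                              # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

def smoothed_tpm(yearly_sequences, n_states, alpha=0.3):
    """Exponentially smooth yearly TPMs: S_t = alpha * P_t + (1 - alpha) * S_{t-1}."""
    S = estimate_tpm(yearly_sequences[0], n_states)
    for seq in yearly_sequences[1:]:
        S = alpha * estimate_tpm(seq, n_states) + (1 - alpha) * S
    return S

# Illustrative data: two categories (0 and 1) observed monthly over three years.
rng = np.random.default_rng(0)
years = [rng.integers(0, 2, size=12).tolist() for _ in range(3)]
print(smoothed_tpm(years, n_states=2))
```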

  6. Decentralized learning in Markov games.

    PubMed

    Vrancx, Peter; Verbeeck, Katja; Nowé, Ann

    2008-08-01

    Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.

  7. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.

  8. Monitoring as a partially observable decision problem

    Treesearch

    Paul L. Fackler; Robert G. Haight

    2014-01-01

    Monitoring is an important and costly activity in resource management problems such as containing invasive species, protecting endangered species, preventing soil erosion, and regulating contracts for environmental services. Recent studies have viewed optimal monitoring as a Partially Observable Markov Decision Process (POMDP), which provides a framework for...

  9. A computer program for uncertainty analysis integrating regression and Bayesian methods

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
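
    A minimal sketch of the MCMC credible-interval idea, using plain random-walk Metropolis on a one-parameter toy model; this is not the DREAM algorithm or the UCODE_2014/JUPITER interface, just the generic mechanism:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=50)        # synthetic observations

def log_post(theta):
    """Gaussian likelihood (known sigma = 1) with a flat prior on the mean."""
    return -0.5 * np.sum((data - theta) ** 2)

# Random-walk Metropolis sampler.
n_iter, step = 20000, 0.5
chain = np.empty(n_iter)
theta, lp = 0.0, log_post(0.0)
for i in range(n_iter):
    prop = theta + step * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:           # accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior = chain[5000:]                              # discard burn-in
ci = np.percentile(posterior, [2.5, 97.5])            # 95% Bayesian credible interval
print(ci)
```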

  10. Non-Markovian properties and multiscale hidden Markovian network buried in single molecule time series

    NASA Astrophysics Data System (ADS)

    Sultana, Tahmina; Takagi, Hiroaki; Morimatsu, Miki; Teramoto, Hiroshi; Li, Chun-Biu; Sako, Yasushi; Komatsuzaki, Tamiki

    2013-12-01

    We present a novel scheme to extract a multiscale state space network (SSN) from single-molecule time series. The multiscale SSN is a type of hidden Markov model that takes into account both multiple states buried in the measurement and memory effects in the process of the observable whenever they exist. Most biological systems function in a nonstationary manner across multiple timescales. Combined with a recently established nonlinear time series analysis based on information theory, a simple scheme is proposed to deal with the properties of multiscale and nonstationarity for a discrete time series. We derived an explicit analytical expression of the autocorrelation function in terms of the SSN. To demonstrate the potential of our scheme, we investigated single-molecule time series of dissociation and association kinetics between epidermal growth factor receptor (EGFR) on the plasma membrane and its adaptor protein Ash/Grb2 (Grb2) in an in vitro reconstituted system. We found that our formula successfully reproduces their autocorrelation function for a wide range of timescales (up to 3 s), and the underlying SSNs change their topographical structure as a function of the timescale; while the corresponding SSN is simple at the short timescale (0.033-0.1 s), the SSN at the longer timescales (0.1 s to ˜3 s) becomes rather complex in order to capture multiscale nonstationary kinetics emerging at longer timescales. It is also found that visiting the unbound form of the EGFR-Grb2 system approximately resets all information of history or memory of the process.

  11. Stochastic Models in the DORIS Position Time Series: Estimates from the IDS Contribution to the ITRF2014

    NASA Astrophysics Data System (ADS)

    Klos, A.; Bogusz, J.; Moreaux, G.

    2017-12-01

    This research focuses on the investigation of the deterministic and stochastic parts of the DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) weekly coordinate time series from the IDS contribution to the ITRF2014. A set of 90 stations was divided into three groups depending on when the data were collected at an individual station. To reliably describe the DORIS time series, we employed a mathematical model that included the long-term nonlinear signal, linear trend, seasonal oscillations (these three sum up to produce the Polynomial Trend Model) and a stochastic part, all being resolved with Maximum Likelihood Estimation (MLE). We proved that the values of the parameters delivered for DORIS data are strictly correlated with the time span of the observations, meaning that the most recent data are the most reliable ones. Not only did the seasonal amplitudes decrease over the years, but also, and most importantly, the noise level and its type changed significantly. We examined five different noise models to be applied to the stochastic part of the DORIS time series: a pure white noise (WN), a pure power-law noise (PL), a combination of white and power-law noise (WNPL), an autoregressive process of first order (AR(1)) and a Generalized Gauss Markov model (GGM). From our study it arises that the PL process may be chosen as the preferred one for most of the DORIS data. Moreover, the preferred noise model has changed through the years from AR(1) to pure PL, with few stations characterized by a positive spectral index.
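
    A minimal sketch of the noise-model-comparison idea: fit white noise and AR(1) models to a residual series by (conditional) maximum likelihood and compare them with AIC. This is a generic illustration, not the MLE software or the power-law/GGM models used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi_true = 500, 0.7
x = np.zeros(n)
for t in range(1, n):                              # synthetic AR(1) "coordinate residuals"
    x[t] = phi_true * x[t - 1] + rng.normal()

def loglik_white(x):
    s2 = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * s2) + 1)

def loglik_ar1(x, phi):
    resid = x[1:] - phi * x[:-1]                   # conditional likelihood given x[0]
    s2 = resid.var()
    return -0.5 * len(resid) * (np.log(2 * np.pi * s2) + 1)

phis = np.linspace(-0.99, 0.99, 199)
ll_ar1 = max(loglik_ar1(x, p) for p in phis)       # profile likelihood over phi
aic_wn = 2 * 1 - 2 * loglik_white(x)               # one parameter: variance
aic_ar1 = 2 * 2 - 2 * ll_ar1                       # two parameters: phi and variance
print("prefer AR(1)" if aic_ar1 < aic_wn else "prefer WN")
```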

  12. Detection of multiple airborne targets from multisensor data

    NASA Astrophysics Data System (ADS)

    Foltz, Mark A.; Srivastava, Anuj; Miller, Michael I.; Grenander, Ulf

    1995-08-01

    Previously we presented a jump-diffusion based random sampling algorithm for generating conditional mean estimates of scene representations for the tracking and recognition of maneuvering airborne targets. These representations include target positions and orientations along their trajectories and the target type associated with each trajectory. Taking a Bayesian approach, a posterior measure is defined on the parameter space by combining sensor models with a sophisticated prior based on nonlinear airplane dynamics. The jump-diffusion algorithm constructs a Markov process which visits the elements of the parameter space with frequencies proportional to the posterior probability. It constitutes both the infinitesimal, local search via a sample-path-continuous diffusion transform and the larger, global steps through discrete jump moves. The jump moves involve the addition and deletion of elements from the scene configuration or changes in the target type associated with each target trajectory. One such move results in target detection by the addition of a track seed to the inference set. This provides initial track data for the tracking/recognition algorithm to estimate linear graph structures representing tracks using the other jump moves and the diffusion process, as described in our earlier work. Target detection ideally involves a continuous search over a continuum of the observation space. In this work we conclude that for practical implementations the search space must be discretized with lattice granularity comparable to sensor resolution, and discuss how fast Fourier transforms are utilized for efficient calculation of sufficient statistics given our array models. Some results are also presented from our implementation on a networked system including a massively parallel machine architecture and a Silicon Graphics Onyx workstation.

  13. Cascading second-order nonlinear processes in a lithium niobate-on-insulator microdisk.

    PubMed

    Liu, Shijie; Zheng, Yuanlin; Chen, Xianfeng

    2017-09-15

    Whispering-gallery-mode (WGM) microcavities are very important in both fundamental science and practical applications, among which on-chip second-order nonlinear microresonators play an important role in integrated photonic functionalities. Here we demonstrate resonant second-harmonic generation (SHG) and cascaded third-harmonic generation (THG) in a lithium niobate-on-insulator (LNOI) microdisk resonator. Efficient SHG in the visible range was obtained with only several mW input powers at telecom wavelengths. THG was also observed through a cascading process, which reveals simultaneous phase matching and strong mode coupling in the resonator. Cascading of second-order nonlinear processes gives rise to an effectively large third-order nonlinearity, which makes on-chip second-order nonlinear microresonators a promising frequency converter for integrated nonlinear photonics.

  14. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.
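
    A sketch of the compensation issue described above: mapping 8-bit digital values to displayed luminance through a power-law nonlinearity and building the inverse (gamma-correction) lookup so a stored image is linear in luminance. The gamma value and bit depth are typical assumptions, not measurements of a specific CRT:

```python
import numpy as np

gamma = 2.2                                        # assumed CRT exponent
levels = np.arange(256)                            # 8-bit digital video values

# Forward display nonlinearity: normalized luminance emitted for each digital value.
luminance = (levels / 255.0) ** gamma

# Inverse (gamma-correction) mapping: digital value needed to display a target luminance.
def gamma_correct(target_luminance):
    return np.clip(np.round(255.0 * target_luminance ** (1.0 / gamma)), 0, 255).astype(np.uint8)

# Example: a linear luminance ramp must be pre-distorted before being sent to the display.
ramp = np.linspace(0.0, 1.0, 11)
print(gamma_correct(ramp))
```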

  15. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.

  16. Estimation of sojourn time in chronic disease screening without data on interval cases.

    PubMed

    Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W

    2000-03-01

    Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on the basis of data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.

  17. A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain

    NASA Astrophysics Data System (ADS)

    Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut

    2017-08-01

    A safety instrumented system (SIS) is designed to restore a plant to a safe condition when a pre-hazardous event occurs. It has a vital role, especially in the process industries. A SIS shall meet its safety requirement specifications, and to confirm this, the SIS shall be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation. It is developed based on an improved continuous-time Markov chain. The toolbox supports a detailed approach to evaluation. This paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate an industrial SIS in detail and to plan the maintenance strategy.
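
    A sketch of the kind of continuous-time Markov chain evaluation such a toolbox performs: transient state probabilities from the matrix exponential of a generator, and the resulting time-averaged unavailability. The three-state model and all rates are illustrative assumptions, not the toolbox's improved model:

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = OK, 1 = dangerous detected failure, 2 = dangerous undetected failure.
lam_dd, lam_du = 1e-6, 5e-7        # failure rates per hour (illustrative)
mu_repair = 1.0 / 8.0              # detected failures repaired in 8 h on average

Q = np.array([[-(lam_dd + lam_du),  lam_dd,     lam_du],
              [ mu_repair,         -mu_repair,  0.0   ],
              [ 0.0,                0.0,        0.0   ]])   # undetected: restored only at proof test

T_proof = 8760.0                   # proof-test interval of one year
ts = np.linspace(0.0, T_proof, 200)
p0 = np.array([1.0, 0.0, 0.0])
unavail = np.array([p0 @ expm(Q * t) for t in ts])[:, 1:].sum(axis=1)

pfd_avg = unavail.mean()           # time-average unavailability over the uniform grid
print(pfd_avg)
```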

  18. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.

  19. Using Markov state models to study self-assembly

    NASA Astrophysics Data System (ADS)

    Perkett, Matthew R.; Hagan, Michael F.

    2014-06-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude.
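
    A sketch of the generic MSM construction step referred to above: count transitions between discrete states at a lag time, row-normalise, and read implied timescales off the eigenvalues. The discrete trajectory here is synthetic, and the graph-based state definitions and Gaussian signatures used in the paper are not reproduced:

```python
import numpy as np

def build_msm(dtraj, n_states, lag):
    """Row-stochastic MSM transition matrix estimated from a discrete trajectory at lag `lag`."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1
    C += C.T                                   # simple detailed-balance symmetrisation
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(T, lag, dt=1.0):
    evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag * dt / np.log(evals[1:])       # skip the stationary eigenvalue (= 1)

# Synthetic two-state discrete trajectory with rare switches (stand-in for assembly states).
rng = np.random.default_rng(0)
dtraj, state = [], 0
for _ in range(20000):
    if rng.random() < 0.01:                    # switching probability per step
        state = 1 - state
    dtraj.append(state)

T = build_msm(np.array(dtraj), n_states=2, lag=10)
print(implied_timescales(T, lag=10))           # should be close to the true ~50-step timescale
```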

  20. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
