Sample records for endpoint-conditioned continuous-time Markov

  1. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    PubMed

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
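
    The simplest of the three approaches surveyed above is rejection sampling: simulate unconditioned paths forward from the starting state and keep only those that occupy the required ending state at time T. The sketch below is a minimal illustration of that idea under an invented 3-state rate matrix; it is not the authors' optimized modified-rejection, direct, or uniformization samplers.

    ```python
    import numpy as np

    def sample_ctmc_path_conditioned(Q, a, b, T, rng, max_tries=100_000):
        """Naive rejection sampler: simulate forward from state a (Gillespie-style),
        accept the path only if it occupies state b at time T.
        Q is a generator matrix (rows sum to zero); returns (states, jump_times)."""
        n = Q.shape[0]
        for _ in range(max_tries):
            states, times = [a], [0.0]
            s, t = a, 0.0
            while True:
                rate = -Q[s, s]
                if rate <= 0:          # absorbing state: it is occupied until T
                    break
                t += rng.exponential(1.0 / rate)
                if t >= T:
                    break
                probs = Q[s].copy()
                probs[s] = 0.0
                s = rng.choice(n, p=probs / rate)
                states.append(s)
                times.append(t)
            if states[-1] == b:        # endpoint condition met -> accept
                return states, times
        raise RuntimeError("no accepted path; rejection sampling is inefficient here")

    # toy example (hypothetical rate matrix and endpoints)
    rng = np.random.default_rng(0)
    Q = np.array([[-1.0, 0.6, 0.4],
                  [0.3, -0.8, 0.5],
                  [0.2, 0.7, -0.9]])
    path, jumps = sample_ctmc_path_conditioned(Q, a=0, b=2, T=1.5, rng=rng)
    print(path, np.round(jumps, 3))
    ```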

  2. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    PubMed

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
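
    The EXPM approach evaluates the required integral of matrix exponentials directly. One well-known construction (the Van Loan block-matrix trick) places the integral in the upper-right block of the exponential of an augmented matrix; the sketch below applies it to the expected time spent in a single state, conditioned on the endpoints. The rate matrix and states are invented for illustration and this is not the paper's R implementation.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def expected_time_in_state(Q, i, a, b, T):
        """E[ time spent in state i on [0,T] | X(0)=a, X(T)=b ] for a CTMC with
        generator Q, via the block-matrix form of the EXPM approach: the upper-right
        block of expm([[Q, E_i], [0, Q]] * T) equals
        int_0^T expm(Q s) E_i expm(Q (T-s)) ds, with E_i = e_i e_i^T."""
        n = Q.shape[0]
        E_i = np.zeros((n, n))
        E_i[i, i] = 1.0
        A = np.block([[Q, E_i], [np.zeros((n, n)), Q]])
        upper_right = expm(A * T)[:n, n:]
        return upper_right[a, b] / expm(Q * T)[a, b]

    # illustrative 3-state generator
    Q = np.array([[-1.0, 0.6, 0.4],
                  [0.3, -0.8, 0.5],
                  [0.2, 0.7, -0.9]])
    print(expected_time_in_state(Q, i=1, a=0, b=2, T=2.0))
    ```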

  3. CTPPL: A Continuous Time Probabilistic Programming Language

    DTIC Science & Technology

    2009-07-01

    recent years there has been a flurry of interest in continuous time models, mostly focused on continuous time Bayesian networks (CTBNs) [Nodelman, 2007... CTBNs are built on homogeneous Markov processes. A homogeneous Markov process is a finite state, continuous time process, consisting of an initial...q1 : xn()] ... Some state transitions can produce emissions. In a CTBN, each variable has a conditional intensity matrix Qu for every combination of

  4. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571

  5. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.

    PubMed

    Kapfer, Sebastian C; Krauth, Werner

    2017-12-15

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  6. Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium

    NASA Astrophysics Data System (ADS)

    Kapfer, Sebastian C.; Krauth, Werner

    2017-12-01

    We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.

  7. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  8. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    PubMed

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
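
    The central quantity in the article, the expected time spent in each state of a continuous-time Markov model, can be obtained by integrating the state-occupancy probabilities p(t), which solve dp/dt = pQ. The sketch below illustrates this with a generic numerical ODE integration; the three-state generator and horizon are invented, and the paper's closed-form solution, age-dependent rates, and discounting are not reproduced here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # hypothetical 3-state model: 0 = healthy, 1 = sick, 2 = dead (absorbing)
    Q = np.array([[-0.20, 0.15, 0.05],
                  [0.00, -0.30, 0.30],
                  [0.00, 0.00, 0.00]])   # rows sum to zero (generator)
    p0 = np.array([1.0, 0.0, 0.0])       # everyone starts healthy
    T = 10.0                             # time horizon (e.g. years)

    def rhs(t, y):
        n = len(p0)
        p = y[:n]                        # occupancy probabilities p(t)
        return np.concatenate([p @ Q,    # dp/dt = p Q (Kolmogorov forward equation)
                               p])       # derivative of the cumulative occupancy

    sol = solve_ivp(rhs, (0.0, T), np.concatenate([p0, np.zeros(3)]), rtol=1e-8)
    expected_time = sol.y[3:, -1]        # integral of p(t) over [0, T]
    print("expected years in each state over the horizon:", expected_time.round(3))
    ```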

  9. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains

    PubMed Central

    Meyer, Denny; Forbes, Don; Clarke, Stephen R.

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov Chains for future studies in which the outcomes of AFL matches are simulated. Key Points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946

  10. Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains.

    PubMed

    Meyer, Denny; Forbes, Don; Clarke, Stephen R

    2006-01-01

    Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov Chains for future studies in which the outcomes of AFL matches are simulated. Key Points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes.

  11. Nonequilibrium thermodynamic potentials for continuous-time Markov chains.

    PubMed

    Verley, Gatien

    2016-01-01

    We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.

  12. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridging between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves to be effective for more than 60% of the tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
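
    A standard check for the time-homogeneous case mentioned above is to take the matrix logarithm of the observed transition matrix and test whether the result is a valid generator (non-negative off-diagonal entries, rows summing to zero). The sketch below shows only this classical principal-branch check, not the paper's time-inhomogeneous extension; the example matrix is made up.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def homogeneous_generator_or_none(P, tol=1e-8):
        """Return a candidate generator L with expm(L) = P if the principal matrix
        logarithm of P is a valid CTMC generator, else None (non-embeddable by
        this simple criterion)."""
        L = logm(P)
        if np.max(np.abs(L.imag)) > tol:      # complex logarithm -> not a real generator
            return None
        L = L.real
        off_diag_ok = np.all(L - np.diag(np.diag(L)) >= -tol)
        rows_ok = np.allclose(L.sum(axis=1), 0.0, atol=tol)
        return L if (off_diag_ok and rows_ok) else None

    P = np.array([[0.90, 0.08, 0.02],
                  [0.05, 0.90, 0.05],
                  [0.02, 0.08, 0.90]])
    L = homogeneous_generator_or_none(P)
    print("embeddable (principal branch):", L is not None)
    ```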

  13. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. “Conventional” and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including resorting either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  14. Sensitivity Study for Long Term Reliability

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2008-01-01

    This paper illustrates using Markov models to establish system and maintenance requirements for small electronic controllers where the goal is a high probability of continuous service for a long period of time. The system and maintenance items considered are quality of components, various degrees of simple redundancy, redundancy with reconfiguration, diagnostic levels, periodic maintenance, and preventive maintenance. Markov models permit a quantitative investigation with comparison and contrast. An element of special interest is the use of conditional probability to study the combination of imperfect diagnostics and periodic maintenance.

  15. A toolbox for safety instrumented system evaluation based on improved continuous-time Markov chain

    NASA Astrophysics Data System (ADS)

    Wardana, Awang N. I.; Kurniady, Rahman; Pambudi, Galih; Purnama, Jaka; Suryopratomo, Kutut

    2017-08-01

    A safety instrumented system (SIS) is designed to restore a plant to a safe condition when a pre-hazardous event occurs. It has a vital role especially in process industries. A SIS shall meet its safety requirement specifications; to confirm this, the SIS shall be evaluated. Typically, the evaluation is calculated by hand. This paper presents a toolbox for SIS evaluation. It is developed based on an improved continuous-time Markov chain. The toolbox supports a detailed approach to evaluation. This paper also illustrates an industrial application of the toolbox to evaluate the arch burner safety system of a primary reformer. The results of the case study demonstrate that the toolbox can be used to evaluate industrial SIS in detail and to plan the maintenance strategy.

  16. Method and apparatus for obtaining complete speech signals for speech recognition applications

    NASA Technical Reports Server (NTRS)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.

  17. Selecting surrogate endpoints for estimating pesticide effects on avian reproductive success

    EPA Science Inventory

    A Markov chain nest productivity model (MCnest) has been developed for projecting the effects of a specific pesticide-use scenario on the annual reproductive success of avian species of concern. A critical element in MCnest is the use of surrogate endpoints, defined as measured ...

  18. Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn

    In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then by constructing the approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.

  19. Open Markov Processes and Reaction Networks

    ERIC Educational Resources Information Center

    Swistock Pollard, Blake Stephen

    2017-01-01

    We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…

  20. A Markov chain technique for determining the acquisition behavior of a digital tracking loop

    NASA Technical Reports Server (NTRS)

    Chadwick, H. D.

    1972-01-01

    An iterative procedure is presented for determining the acquisition behavior of discrete or digital implementations of a tracking loop. The technique is based on the theory of Markov chains and provides the cumulative probability of acquisition in the loop as a function of time in the presence of noise and a given set of initial condition probabilities. A digital second-order tracking loop to be used in the Viking command receiver for continuous tracking of the command subcarrier phase was analyzed using this technique, and the results agree closely with experimental data.
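
    The iterative idea described above can be illustrated with a generic discrete-time Markov chain in which one state is absorbing ("locked"): propagating the state-probability vector step by step yields the cumulative probability of acquisition as a function of time. The transition matrix and initial condition probabilities below are an invented toy example, not the Viking command receiver loop model.

    ```python
    import numpy as np

    # toy 3-state loop model: states 0,1 = still acquiring, state 2 = locked (absorbing)
    P = np.array([[0.70, 0.20, 0.10],
                  [0.15, 0.60, 0.25],
                  [0.00, 0.00, 1.00]])
    p = np.array([0.5, 0.5, 0.0])      # initial condition probabilities

    cumulative_acq = []
    for step in range(1, 21):
        p = p @ P                      # propagate one loop update
        cumulative_acq.append(p[2])    # probability mass absorbed in the locked state

    for step, prob in enumerate(cumulative_acq, start=1):
        print(f"step {step:2d}: P(acquired) = {prob:.4f}")
    ```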

  1. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    PubMed

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity. © The Author(s) 2015.

  2. Educational Aspirations: Markov and Poisson Models. Rural Industrial Development Project Working Paper Number 14, August 1971.

    ERIC Educational Resources Information Center

    Kayser, Brian D.

    The fit of educational aspirations of Illinois rural high school youths to 3 related one-parameter mathematical models was investigated. The models used were the continuous-time Markov chain model, the discrete-time Markov chain, and the Poisson distribution. The sample of 635 students responded to questionnaires from 1966 to 1969 as part of an…

  3. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  4. A novel grey-fuzzy-Markov and pattern recognition model for industrial accident forecasting

    NASA Astrophysics Data System (ADS)

    Edem, Inyeneobong Ekoi; Oke, Sunday Ayoola; Adebiyi, Kazeem Adekunle

    2017-10-01

    Industrial forecasting is a top-echelon research domain, which has over the past several years experienced highly provocative research discussions. The scope of this research domain continues to expand due to the continuous knowledge ignition motivated by scholars in the area. So, more intelligent and intellectual contributions on current research issues in the accident domain will potentially spark more lively academic, value-added discussions that will be of practical significance to members of the safety community. In this communication, a new grey-fuzzy-Markov time series model, developed from a nondifferential grey interval analytical framework, has been presented for the first time. This instrument forecasts future accident occurrences under a time-invariance assumption. The actual contribution made in the article is to recognise accident occurrence patterns and decompose them into grey state principal pattern components. The architectural framework of the developed grey-fuzzy-Markov pattern recognition (GFMAPR) model has four stages: fuzzification, smoothening, defuzzification and whitenisation. The results of applying the developed novel model signify that forecasting could be effectively carried out under uncertain conditions and hence position the model as a distinctly superior tool for accident forecasting investigations. The novelty of the work lies in the capability of the model in making highly accurate predictions and forecasts based on the availability of small or incomplete accident data.

  5. Derivation of Markov processes that violate detailed balance

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
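
    For reference, the detailed-balance condition invoked above states that in equilibrium there is no net probability flux between any pair of states; for an equilibrium distribution π and transition rates (or probabilities) W this reads:

    ```latex
    % Detailed balance: no net probability flux between any pair of states i, j
    \pi_i \, W_{i \to j} \;=\; \pi_j \, W_{j \to i} \qquad \text{for all } i \neq j .
    ```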

  6. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    PubMed

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
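
    As a reminder of how physical time enters the rejection-free (BKL-type) case discussed above: each step selects an event with probability proportional to its rate and advances the clock by an exponential increment governed by the total rate. The sketch below is a generic textbook illustration with invented rates, not the rigorous construction or the rejection-algorithm time scale established in the paper.

    ```python
    import numpy as np

    def bkl_step(rates, rng):
        """One rejection-free (BKL) kinetic Monte Carlo step.
        Returns (chosen_event_index, physical_time_increment)."""
        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)   # pick an event proportional to its rate
        dt = rng.exponential(1.0 / total)                  # physical time advance
        return event, dt

    rng = np.random.default_rng(1)
    rates = np.array([0.5, 2.0, 0.1, 1.4])   # invented event rates (1/time units)
    t = 0.0
    for _ in range(5):
        event, dt = bkl_step(rates, rng)
        t += dt
        print(f"event {event} fired, physical time now {t:.3f}")
    ```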

  7. How old is this bird? The age distribution under some phase sampling schemes.

    PubMed

    Hautphenne, Sophie; Massaro, Melanie; Taylor, Peter

    2017-12-01

    In this paper, we use a finite-state continuous-time Markov chain with one absorbing state to model an individual's lifetime. Under this model, the time of death follows a phase-type distribution, and the transient states of the Markov chain are known as phases. We then attempt to provide an answer to the simple question "What is the conditional age distribution of the individual, given its current phase"? We show that the answer depends on how we interpret the question, and in particular, on the phase observation scheme under consideration. We then apply our results to the computation of the age pyramid for the endangered Chatham Island black robin Petroica traversi during the monitoring period 2007-2014.
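
    Under a phase-type lifetime model of the kind described above, the density and survival function of the time to absorption follow from the sub-generator S over the transient phases and the initial phase distribution α: f(t) = α e^{St} s with s = -S·1. The sketch below evaluates these for an invented two-phase example; it does not reproduce the paper's conditional age distributions or the black robin data.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # invented two-phase lifetime model
    alpha = np.array([1.0, 0.0])            # initial phase distribution
    S = np.array([[-0.8, 0.5],              # sub-generator over the transient phases
                  [0.1, -0.4]])
    s = -S @ np.ones(2)                     # exit (death) rates from each phase

    def pdf(t):
        """Phase-type density of the time of death: f(t) = alpha exp(S t) s."""
        return float(alpha @ expm(S * t) @ s)

    def survival(t):
        """P(lifetime > t) = alpha exp(S t) 1."""
        return float(alpha @ expm(S * t) @ np.ones(2))

    for t in (1.0, 5.0, 10.0):
        print(f"t={t:4.1f}  f(t)={pdf(t):.4f}  S(t)={survival(t):.4f}")
    ```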

  8. Automated recognition of bird song elements from continuous recordings using dynamic time warping and hidden Markov models: a comparative study.

    PubMed

    Kogan, J A; Margoliash, D

    1998-04-01

    The performance of two techniques is compared for automated recognition of bird song units from continuous recordings. The advantages and limitations of dynamic time warping (DTW) and hidden Markov models (HMMs) are evaluated on a large database of male songs of zebra finches (Taeniopygia guttata) and indigo buntings (Passerina cyanea), which have different types of vocalizations and have been recorded under different laboratory conditions. Depending on the quality of recordings and complexity of song, the DTW-based technique gives excellent to satisfactory performance. Under challenging conditions such as noisy recordings or presence of confusing short-duration calls, good performance of the DTW-based technique requires careful selection of templates that may demand expert knowledge. Because HMMs are trained, equivalent or even better performance of HMMs can be achieved based only on segmentation and labeling of constituent vocalizations, albeit with many more training examples than DTW templates. One weakness in HMM performance is the misclassification of short-duration vocalizations or song units with more variable structure (e.g., some calls, and syllables of plastic songs). To address these and other limitations, new approaches for analyzing bird vocalizations are discussed.

  9. Fast-slow asymptotics for a Markov chain model of fast sodium current

    NASA Astrophysics Data System (ADS)

    Starý, Tomáš; Biktashev, Vadim N.

    2017-09-01

    We explore the feasibility of using fast-slow asymptotics to eliminate the computational stiffness of discrete-state, continuous-time deterministic Markov chain models of ionic channels underlying cardiac excitability. We focus on a Markov chain model of fast sodium current, and investigate its asymptotic behaviour with respect to small parameters identified in different ways.

  10. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with nonnegative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. An appropriate theorem is presented. The linear systems of equations for the appropriate Laplace transforms allow one to find the reliability functions for the alternating, the Poisson and the Furry-Yule failure rate processes.

  11. A discrete Markov metapopulation model for persistence and extinction of species.

    PubMed

    Thompson, Colin J; Shtilerman, Elad; Stone, Lewi

    2016-09-07

    A simple discrete generation Markov metapopulation model is formulated for studying the persistence and extinction dynamics of a species in a given region which is divided into a large number of sites or patches. Assuming a linear site occupancy probability from one generation to the next, we obtain exact expressions for the time evolution of the expected number of occupied sites and the mean time to extinction (MTE). Under quite general conditions we show that the MTE, to leading order, is proportional to the logarithm of the initial number of occupied sites and in precise agreement with similar expressions for continuous time-dependent stochastic models. Our key contribution is a novel application of generating function techniques and simple asymptotic methods to obtain a second order asymptotic expression for the MTE which is extremely accurate over the entire range of model parameter values. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Markov switching multinomial logit model: An application to accident-injury severities.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2009-07-01

    In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.

  13. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
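
    Uniformization, the technique underlying the synchronization methods reviewed above, rewrites the CTMC transition probabilities as a Poisson-weighted sum of powers of a discrete-time kernel: with Λ ≥ max_i |q_ii| and R = I + Q/Λ, P(t) = Σ_n e^{-Λt}(Λt)^n/n! · R^n. The serial sketch below shows only this basic identity with an invented generator, not the five parallel variants compared in the paper.

    ```python
    import numpy as np

    def transition_matrix_uniformization(Q, t, tol=1e-12):
        """P(t) = sum_n Poisson(n; Lambda*t) * R^n with R = I + Q/Lambda."""
        Lam = max(-np.diag(Q))                 # uniformization rate
        R = np.eye(Q.shape[0]) + Q / Lam       # discrete-time kernel (stochastic matrix)
        weight = np.exp(-Lam * t)              # Poisson weight for n = 0
        term = np.eye(Q.shape[0])              # R^0
        P = weight * term
        n = 0
        while weight > tol or n < Lam * t:     # sum until the Poisson tail is negligible
            n += 1
            term = term @ R
            weight *= Lam * t / n
            P += weight * term
        return P

    Q = np.array([[-1.0, 0.6, 0.4],
                  [0.3, -0.8, 0.5],
                  [0.2, 0.7, -0.9]])
    print(np.round(transition_matrix_uniformization(Q, t=1.5), 4))
    ```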

  14. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.

  15. The distribution of genome shared identical by descent for a pair of full sibs by means of the continuous time Markov chain

    NASA Astrophysics Data System (ADS)

    Julie, Hongki; Pasaribu, Udjianna S.; Pancoro, Adi

    2015-12-01

    This paper applies continuous-time Markov chains to the genome shared identical by descent (IBD) by two individuals in a full-sib model. The full-sib model is a continuous-time Markov chain with three states. In the full-sib model, we look for the cumulative distribution function of the number of sub-segments that have 2 IBD haplotypes within a chromosome segment of length t Morgans, and the cumulative distribution function of the number of sub-segments that have at least 1 IBD haplotype within a chromosome segment of length t Morgans. These cumulative distribution functions are developed by means of the moment generating function.

  16. Modeling hard clinical end-point data in economic analyses.

    PubMed

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (<7). Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.

  17. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, upstates may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
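
    The most basic quantity of a Markov reward model, the expected steady-state reward rate, is the stationary distribution of the CTMC weighted by the per-state reward rates. The sketch below solves πQ = 0 with Σπ = 1 for an invented three-state availability model (reward 1 for up states, 0 for the down state); the transient and accumulated-reward distributions discussed in the text require more machinery and are not shown.

    ```python
    import numpy as np

    # invented 3-state availability model: states 0,1 up (reward 1), state 2 down (reward 0)
    Q = np.array([[-0.02, 0.01, 0.01],
                  [0.50, -0.51, 0.01],
                  [1.00, 0.00, -1.00]])
    reward = np.array([1.0, 1.0, 0.0])

    # stationary distribution: solve pi Q = 0 together with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(3)])
    b = np.concatenate([np.zeros(3), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    print("stationary distribution:", pi.round(4))
    print("expected steady-state reward rate (availability):", float(pi @ reward))
    ```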

  18. Conditioned Limit Theorems for Some Null Recurrent Markov Processes

    DTIC Science & Technology

    1976-08-01

    Chapter 1 INTRODUCTION. 1.1 Summary of Results. Let (V_k, k ≥ 0) be a discrete-time Markov process with state space E ⊂ (-∞, ∞) and let S be...explain our results in some detail. We begin by stating our three basic assumptions: (1) (V_k, k ≥ 0) is a Markov process with state space E ⊂ (-∞, ∞); (ii)... 3. CONDITIONING ON T > n... 3.1 Preliminary Results

  19. [The endpoint detection of cough signal in continuous speech].

    PubMed

    Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang

    2010-06-01

    The endpoint detection of cough signals in continuous speech has been researched in order to improve the efficiency and veracity of manual recognition or computer-based automatic recognition. First, the short-time zero-crossing ratio (ZCR) is used to identify suspicious coughs and to obtain the threshold of short-time energy based on the acoustic characteristics of coughs. Then, the short-time energy is combined with the short-time ZCR in order to implement the endpoint detection of coughs in continuous speech. To evaluate the effect of the method, first, the number of coughs in each recording was identified by two experienced doctors using a graphical user interface (GUI). Second, the recordings were analyzed by the automatic endpoint detection program under Matlab 7.0. Finally, the comparison between these two results showed that the error rate of undetected coughs is 2.18%, and 98.13% of noise, silence and speech were removed. The way of setting the short-time energy threshold is robust. The endpoint detection program can remove most speech and noise, thus maintaining a low error rate.
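
    A minimal frame-based sketch of the two features used above, short-time energy and short-time zero-crossing ratio, combined with simple fixed thresholds. The thresholds, frame sizes, and the synthetic signal are invented for illustration; the paper's cough-specific threshold selection is not reproduced.

    ```python
    import numpy as np

    def frame_features(x, frame_len=400, hop=200):
        """Short-time energy and zero-crossing ratio per frame."""
        energies, zcrs = [], []
        for start in range(0, len(x) - frame_len, hop):
            frame = x[start:start + frame_len]
            energies.append(np.sum(frame ** 2))
            zcrs.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        return np.array(energies), np.array(zcrs)

    def detect_endpoints(x, energy_thresh, zcr_thresh, frame_len=400, hop=200):
        """Mark frames whose energy and ZCR both exceed their thresholds, and
        return (start_sample, end_sample) for each contiguous active region."""
        energy, zcr = frame_features(x, frame_len, hop)
        active = (energy > energy_thresh) & (zcr > zcr_thresh)
        segments, start = [], None
        for i, a in enumerate(active):
            if a and start is None:
                start = i
            elif not a and start is not None:
                segments.append((start * hop, i * hop + frame_len))
                start = None
        if start is not None:
            segments.append((start * hop, len(x)))
        return segments

    # synthetic example: low-level noise with one louder burst standing in for a cough
    rng = np.random.default_rng(2)
    x = 0.01 * rng.standard_normal(16000)
    x[6000:7000] += 0.5 * rng.standard_normal(1000)
    print(detect_endpoints(x, energy_thresh=1.0, zcr_thresh=0.3))
    ```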

  20. Generalized master equation via aging continuous-time random walks.

    PubMed

    Allegrini, Paolo; Aquino, Gerardo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo

    2003-11-01

    We discuss the problem of the equivalence between continuous-time random walk (CTRW) and generalized master equation (GME). The walker, making instantaneous jumps from one site of the lattice to another, resides in each site for extended times. The sojourn times have a distribution density ψ(t) that is assumed to be an inverse power law with power index μ. We assume that the Onsager principle is fulfilled, and we use this assumption to establish a complete equivalence between GME and the Montroll-Weiss CTRW. We prove that this equivalence is confined to the case where ψ(t) is an exponential. We argue that this is so because the Montroll-Weiss CTRW, as recently proved by Barkai [E. Barkai, Phys. Rev. Lett. 90, 104101 (2003)], is nonstationary, thereby implying aging, while the Onsager principle is valid only in the case of fully aged systems. The case of a Poisson distribution of sojourn times is the only one with no aging associated with it, and consequently with no need to establish special initial conditions to fulfill the Onsager principle. We consider the case of a dichotomous fluctuation, and we prove that the Onsager principle is fulfilled for any form of regression to equilibrium provided that the stationary condition holds true. We set the stationary condition on both the CTRW and the GME, thereby creating a condition of total equivalence, regardless of the nature of the waiting-time distribution. As a consequence of this procedure we create a GME that is a bona fide master equation, in spite of being non-Markov. We note that the memory kernel of the GME affords information on the interaction between the system of interest and its bath. The Poisson case yields a bath with infinitely fast fluctuations. We argue that departing from the Poisson form has the effect of creating a condition of infinite memory and that these results might be useful to shed light on the problem of how to unravel non-Markov quantum master equations.

  1. Markov Chain Models for Stochastic Behavior in Resonance Overlap Regions

    NASA Astrophysics Data System (ADS)

    McCarthy, Morgan; Quillen, Alice

    2018-01-01

    We aim to predict lifetimes of particles in chaotic zones where resonances overlap. A continuous-time Markov chain model is constructed using mean motion resonance libration timescales to estimate transition times between resonances. The model is applied to diffusion in the co-rotation region of a planet. For particles begun at low eccentricity, the model is effective for early diffusion, but not at later times when particles experience close encounters with the planet.

  2. The Embedding Problem for Markov Models of Nucleotide Substitution

    PubMed Central

    Verbyla, Klara L.; Yap, Von Bing; Pahwa, Anuj; Shao, Yunli; Huttley, Gavin A.

    2013-01-01

    Continuous-time Markov processes are often used to model the complex natural phenomenon of sequence evolution. To make the process of sequence evolution tractable, simplifying assumptions are often made about the sequence properties and the underlying process. The validity of one such assumption, time-homogeneity, has never been explored. Violations of this assumption can be found by identifying non-embeddability. A process is non-embeddable if it can not be embedded in a continuous time-homogeneous Markov process. In this study, non-embeddability was demonstrated to exist when modelling sequence evolution with Markov models. Evidence of non-embeddability was found primarily at the third codon position, possibly resulting from changes in mutation rate over time. Outgroup edges and those with a deeper time depth were found to have an increased probability of the underlying process being non-embeddable. Overall, low levels of non-embeddability were detected when examining individual edges of triads across a diverse set of alignments. Subsequent phylogenetic reconstruction analyses demonstrated that non-embeddability could impact on the correct prediction of phylogenies, but at extremely low levels. Despite the existence of non-embeddability, there is minimal evidence of violations of the local time homogeneity assumption and consequently the impact is likely to be minor. PMID:23935949

  3. Large Deviations for Stationary Probabilities of a Family of Continuous Time Markov Chains via Aubry-Mather Theory

    NASA Astrophysics Data System (ADS)

    Lopes, Artur O.; Neumann, Adriana

    2015-05-01

    In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each the matching random walk takes values in the finite set of states ; notice that is a subset of , where is the unitary circle. The infinitesimal generator of such a chain is denoted by . The stationary probability for such a process converges to the uniform distribution on the circle, when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We will disturb this process by a potential and study for each the perturbed stationary measures of this new process when . We disturb the system by considering a fixed potential and we denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize such a semigroup, and then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential , see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such a Markov Chain is denoted by . We assume that the maximum of is attained at a unique point of , and from this it will follow that . Thus, here, our main goal is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such a family of Markov Chains, when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov Chains we analyze.

  4. Utah State University Global Assimilation of Ionospheric Measurements Gauss-Markov Kalman filter model of the ionosphere: Model description and validation

    NASA Astrophysics Data System (ADS)

    Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Thompson, D. C.; Zhu, L.

    2006-11-01

    The Utah State University Gauss-Markov Kalman Filter (GMKF) was developed as part of the Global Assimilation of Ionospheric Measurements (GAIM) program. The GMKF uses a physics-based model of the ionosphere and a Gauss-Markov Kalman filter as a basis for assimilating a diverse set of real-time (or near real-time) observations. The physics-based model is the Ionospheric Forecast Model (IFM), which accounts for five ion species and covers the E region, F region, and the topside from 90 to 1400 km altitude. Within the GMKF, the IFM derived ionospheric densities constitute a background density field on which perturbations are superimposed based on the available data and their errors. In the current configuration, the GMKF assimilates slant total electron content (TEC) from a variable number of global positioning satellite (GPS) ground sites, bottomside electron density (Ne) profiles from a variable number of ionosondes, in situ Ne from four Defense Meteorological Satellite Program (DMSP) satellites, and nighttime line-of-sight ultraviolet (UV) radiances measured by satellites. To test the GMKF for real-time operations and to validate its ionospheric density specifications, we have tested the model performance for a variety of geophysical conditions. During these model runs various combination of data types and data quantities were assimilated. To simulate real-time operations, the model ran continuously and automatically and produced three-dimensional global electron density distributions in 15 min increments. In this paper we will describe the Gauss-Markov Kalman filter model and present results of our validation study, with an emphasis on comparisons with independent observations.
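
    The Gauss-Markov Kalman filter idea can be illustrated in one dimension: the deviation of the state from the background (here, the physics-model density) is assumed to decay exponentially between analysis times (a first-order Gauss-Markov process) and is updated whenever an observation arrives. The sketch below is a generic scalar illustration with invented numbers; it is not the GAIM implementation, which works on a full 3-D electron-density grid with slant-TEC and other observation operators.

    ```python
    import numpy as np

    # scalar Gauss-Markov Kalman filter for the deviation x of electron density
    # from the background model (all numbers below are invented for illustration)
    tau = 3.0          # e-folding time of the Gauss-Markov deviation (hours)
    q = 0.2            # process-noise variance added per step
    r = 0.5            # observation-error variance
    dt = 0.25          # analysis cadence (15 min)

    x, p = 0.0, 1.0    # deviation estimate and its error variance
    phi = np.exp(-dt / tau)

    rng = np.random.default_rng(3)
    truth = 1.5                                   # hypothetical constant true deviation
    for k in range(12):
        # forecast: the deviation relaxes toward the background while its variance grows
        x, p = phi * x, phi**2 * p + q
        # analysis: assimilate a (noisy) observation of the deviation
        z = truth + rng.normal(scale=np.sqrt(r))
        gain = p / (p + r)
        x, p = x + gain * (z - x), (1.0 - gain) * p
        print(f"step {k:2d}: estimate {x:+.3f}, error variance {p:.3f}")
    ```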

  5. Markov-modulated Markov chains and the covarion process of molecular evolution.

    PubMed

    Galtier, N; Jean-Marie, A

    2004-01-01

    The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
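
    One common way to build the generator of a Markov-modulated Markov chain of the kind described above is to combine a substitution generator Q, scaled by the rate of each rate class, with a switching generator H acting on the classes; on the product space this is kron(diag(r), Q) + kron(H, I). The snippet below is a generic sketch of that construction with invented rates and a Jukes-Cantor-like Q, not the paper's diagonalization algorithm.

    ```python
    import numpy as np

    n_states = 4                                  # e.g. nucleotides
    Q = (np.ones((4, 4)) - 4 * np.eye(4)) / 3.0   # Jukes-Cantor-like substitution generator
    rates = np.array([0.0, 0.5, 2.5])             # rate classes (class 0 = "off"/invariable)
    H = np.array([[-0.2, 0.1, 0.1],               # switching generator between rate classes
                  [0.1, -0.2, 0.1],
                  [0.1, 0.1, -0.2]])

    # product-space generator: substitutions within a class + switching between classes
    G = np.kron(np.diag(rates), Q) + np.kron(H, np.eye(n_states))

    assert np.allclose(G.sum(axis=1), 0.0)        # rows of a generator sum to zero
    print("generator is", G.shape, "for", len(rates), "classes x", n_states, "states")
    ```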

  6. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  7. Selecting surrogate endpoints for estimating pesticide effects on avian reproductive success.

    PubMed

    Bennett, Richard S; Etterson, Matthew A

    2013-10-01

    A Markov chain nest productivity model (MCnest) has been developed for projecting the effects of a specific pesticide-use scenario on the annual reproductive success of avian species of concern. A critical element in MCnest is the use of surrogate endpoints, defined as measured endpoints from avian toxicity tests that represent specific types of effects possible in field populations at specific phases of a nesting attempt. In this article, we discuss the attributes of surrogate endpoints and provide guidance for selecting surrogates from existing avian laboratory tests as well as other possible sources. We also discuss some of the assumptions and uncertainties related to using surrogate endpoints to represent field effects. The process of explicitly considering how toxicity test results can be used to assess effects in the field helps identify uncertainties and data gaps that could be targeted in higher-tier risk assessments. © 2013 SETAC.

  8. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Dealing with several continuous dimensions raises new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
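
    A crude way to see the flavor of such hybrid models is to discretize the continuous dimension on a grid and run value iteration with piecewise-constant values, as sketched below. The two modes, action durations, and rewards are invented, and this grid approximation is a simplification rather than the lazy-discretization or piecewise-linear machinery of the paper.

```python
import numpy as np

# Toy hybrid MDP: one discrete variable ("mode" 0 or 1) and one continuous
# variable ("time remaining" in [0, 1]). The value function is approximated as
# piecewise constant on a grid of the continuous dimension. Dynamics, rewards,
# and the grid are invented for illustration.

T = np.linspace(0.0, 1.0, 101)
n_grid = len(T)
V = np.zeros((2, n_grid))

def duration(action):
    return 0.10 if action == "quick" else 0.25     # hypothetical action durations

def reward(mode, action):
    return 1.0 if (mode == 1 and action == "slow") else 0.3

for sweep in range(200):
    V_new = np.zeros_like(V)
    for m in (0, 1):
        for k, t in enumerate(T):
            best = 0.0                              # "stop" action, worth 0
            for a in ("quick", "slow"):
                t_next = t - duration(a)
                if t_next < 0.0:
                    continue                        # action does not fit in remaining time
                m_next = 1 - m if a == "quick" else m
                k_next = max(int(np.floor(t_next * (n_grid - 1) + 1e-9)), 0)
                best = max(best, reward(m, a) + V[m_next, k_next])
            V_new[m, k] = best
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("approximate value in mode 0 with a full unit of time:", V[0, -1])
```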

  9. Application of Markov Models for Analysis of Development of Psychological Characteristics

    ERIC Educational Resources Information Center

    Kuravsky, Lev S.; Malykh, Sergey B.

    2004-01-01

    A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…

  10. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    NASA Astrophysics Data System (ADS)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous hidden Markov models (HMMs) are adapted for modeling the tool wear process in micro-milling and for estimating the tool wear state from cutting force features. For noise robustness, the HMM outputs are passed through a median filter to suppress spurious state estimates caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  11. Machine learning in sentiment reconstruction of the simulated stock market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
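
    A minimal version of the transition-matrix recovery step can be sketched with the hmmlearn package (a library choice made here, not the paper's): simulate a two-state sentiment chain driving Gaussian returns, fit a Gaussian HMM, and compare the estimated transition matrix and decoded states with the truth. All parameters below are invented.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # external package; an assumed choice

rng = np.random.default_rng(1)

# Simulate a hidden two-state "sentiment" Markov chain and Gaussian log-returns
# whose mean depends on the hidden state (all parameters are made up).
A_true = np.array([[0.95, 0.05],
                   [0.10, 0.90]])
means = np.array([-0.002, 0.003])      # bearish vs bullish drift
sigma = 0.01

n = 5000
states = np.zeros(n, dtype=int)
for t in range(1, n):
    states[t] = rng.choice(2, p=A_true[states[t - 1]])
returns = rng.normal(means[states], sigma).reshape(-1, 1)

# Fit a 2-state Gaussian HMM to the observed returns and recover the
# transition matrix and the hidden sentiment path.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(returns)
print("estimated transition matrix:\n", model.transmat_)
decoded = model.predict(returns)
print("fraction of time steps decoded consistently:",
      max(np.mean(decoded == states), np.mean(decoded == 1 - states)))
```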

  12. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  13. Markov chain aggregation and its applications to combinatorial reaction networks.

    PubMed

    Ganguly, Arnab; Petrov, Tatjana; Koeppl, Heinz

    2014-09-01

    We consider a continuous-time Markov chain (CTMC) whose state space is partitioned into aggregates, and each aggregate is assigned a probability measure. A sufficient condition for defining a CTMC over the aggregates is presented as a variant of weak lumpability, which also characterizes when the measure over the original process can be recovered from that of the aggregated one. We show how the applicability of de-aggregation depends on the initial distribution. The application section is devoted to illustrating how the developed theory aids in reducing CTMC models of biochemical systems, particularly in connection with protein-protein interactions. We assume that the model is written by a biologist in the form of site-graph-rewrite rules. Site-graph-rewrite rules compactly express that, often, only a local context of a protein (instead of a full molecular species) needs to be in a certain configuration in order to trigger a reaction event. This observation leads to suitable aggregate Markov chains with smaller state spaces, thereby providing sufficient reduction in computational complexity. This is further exemplified in two case studies: simple unbounded polymerization and early EGFR/insulin crosstalk.
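
    The sketch below illustrates the basic aggregation idea on a toy generator, using the simpler ordinary (strong) lumpability condition rather than the weak-lumpability variant developed in the paper: within each aggregate, every state must have the same total rate into every other aggregate.

```python
import numpy as np

# Toy illustration of aggregating a CTMC generator over a state-space partition.
# We check ordinary (strong) lumpability -- a stricter condition than the weak
# lumpability variant studied in the paper -- and build the aggregated generator.

Q = np.array([[-2.0, 1.0, 1.0, 0.0],
              [3.0, -4.0, 0.0, 1.0],
              [3.0, 0.0, -4.0, 1.0],
              [0.0, 2.0, 2.0, -4.0]])

partition = [[0], [1, 2], [3]]          # aggregates of original states

def aggregate(Q, partition):
    k = len(partition)
    Q_agg = np.zeros((k, k))
    for i, block_i in enumerate(partition):
        for j, block_j in enumerate(partition):
            # total rate from each state of block_i into block_j
            rates = [Q[s, block_j].sum() for s in block_i]
            if not np.allclose(rates, rates[0]):
                raise ValueError("partition is not (ordinarily) lumpable")
            Q_agg[i, j] = rates[0]
    return Q_agg

print(aggregate(Q, partition))
```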

  14. [Efficiency of a postoperative treatment after rotator cuff repair with a continuous passive motion device (CPM)].

    PubMed

    Michael, J W-P; König, D P; Imhoff, A B; Martinek, V; Braun, S; Hübscher, M; Koch, C; Dreithaler, B; Bernholt, J; Preis, S; Loew, M; Rickert, M; Speck, M; Bös, L; Bidner, A; Eysel, P

    2005-01-01

    The main objective of this study was to prove that a postoperative combined continuous passive motion (CPM) and physiotherapy treatment protocol (CPM group) can achieve 90 degrees active abduction in the shoulder joint earlier than physiotherapy alone (PT group). The indication was a complete tear of the rotator cuff. The study was conducted under in-patient and out-patient conditions. A total of 55 patients were included in this study. The prospective, randomized multicenter study design complies with DIN EN 540. The primary endpoint was the time span until 90 degrees active abduction was achieved by the patients. Patients in the CPM group reached the primary endpoint on average 12 days earlier than the control group. This difference was statistically significant (p = 0.0292). Analysis of the secondary endpoints, e.g., pain and disablement, again showed advantages of the combined treatment protocol (CPM + physiotherapy) in the CPM group. The postoperative treatment of a total tear of the rotator cuff with a combined continuous passive motion and physiotherapy protocol provided a significantly earlier range of motion in the shoulder joint than physiotherapy alone. There was no report of CPM-related adverse effects.

  15. Application of stochastic automata networks for creation of continuous time Markov chain models of voltage gating of gap junction channels.

    PubMed

    Snipas, Mindaugas; Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Paulauskas, Nerijus; Bukauskas, Feliksas F

    2015-01-01

    The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed the infinitesimal generator of the CTMC to be built and stored very efficiently and produced model matrices with a distinct block structure. This structure enabled us to develop efficient numerical methods for the steady-state solution of CTMC models and reduced the CPU time needed to solve them by a factor of ~20.
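
    The general SAN idea of composing a large CTMC generator from small automata can be sketched as follows: the generator of several independent two-state "gates" is the Kronecker sum of the per-gate generators, and the steady state solves pi Q = 0. The gate rates below are invented, and the code does not reproduce the block-structured solvers or the connexin gating model of the paper.

```python
import numpy as np

# Sketch of the stochastic-automata-network idea: the generator of several
# independent two-state "gates" is the Kronecker sum of the per-gate generators,
# so the full CTMC never has to be written down by hand. Rates are made up.

def kron_sum(generators):
    G = generators[0]
    for Q in generators[1:]:
        G = np.kron(G, np.eye(Q.shape[0])) + np.kron(np.eye(G.shape[0]), Q)
    return G

def two_state_gate(k_open, k_close):
    return np.array([[-k_open, k_open],
                     [k_close, -k_close]])

gates = [two_state_gate(1.0, 0.5), two_state_gate(2.0, 0.3), two_state_gate(0.7, 0.7)]
Q = kron_sum(gates)                     # 8x8 generator for 3 independent gates

# Steady state: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.zeros(Q.shape[0] + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state distribution:", np.round(pi, 4))
```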

  16. Dependability and performability analysis

    NASA Technical Reports Server (NTRS)

    Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.

    1993-01-01

    Several practical issues regarding the specification and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMCs) are compared with (continuous-time) Markov reward models (MRMs), and generalized stochastic Petri nets (GSPNs) are compared with stochastic reward nets (SRNs). It is shown that reward-based models could lead to more concise model specifications and to the solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues are identified: largeness, stiffness, and non-exponentiality. A variety of approaches for dealing with them are discussed, including some of the latest research efforts.

  17. Three real-time architectures - A study using reward models

    NASA Technical Reports Server (NTRS)

    Sjogren, J. A.; Smith, R. M.

    1990-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, up states have reward rate 1 and down states have reward rate zero. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault-tolerant computing community.

  18. Constructing 1/ω^α noise from reversible Markov chains.

    PubMed

    Erland, Sveinung; Greenwood, Priscilla E

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.

  19. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  20. The Impact of Worsening Heart Failure in the United States

    PubMed Central

    Cooper, Lauren B.; DeVore, Adam D.; Felker, G. Michael

    2015-01-01

    Synopsis In-hospital worsening heart failure represents a clinical scenario in which a patient hospitalized for treatment of acute heart failure experiences a worsening of their condition while in the hospital, requiring escalation of therapy. In-hospital worsening heart failure is associated with worse in-hospital and post-discharge outcomes. In-hospital worsening heart failure is increasingly being used as an endpoint, or as part of a combined endpoint, in many clinical trials in acute heart failure. This endpoint has advantages over other endpoints commonly used in acute and chronic heart failure trials, such as dyspnea relief and mortality or rehospitalization. Despite the extensive study of this condition, no treatment strategies have been approved for its prevention. However, several prediction models have been developed to identify worsening heart failure. Continued study in this area is warranted. PMID:26462100

  1. Application of Stochastic Automata Networks for Creation of Continuous Time Markov Chain Models of Voltage Gating of Gap Junction Channels

    PubMed Central

    Pranevicius, Henrikas; Pranevicius, Mindaugas; Pranevicius, Osvaldas; Bukauskas, Feliksas F.

    2015-01-01

    The primary goal of this work was to study the advantages of numerical methods used for the creation of continuous time Markov chain (CTMC) models of voltage gating of gap junction (GJ) channels composed of connexin protein. This task was accomplished by describing the gating of GJs using the formalism of stochastic automata networks (SANs), which allowed the infinitesimal generator of the CTMC to be built and stored very efficiently and produced model matrices with a distinct block structure. This structure enabled us to develop efficient numerical methods for the steady-state solution of CTMC models and reduced the CPU time needed to solve them by a factor of ∼20. PMID:25705700

  2. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  3. A compositional framework for Markov processes

    NASA Astrophysics Data System (ADS)

    Baez, John C.; Fong, Brendan; Pollard, Blake S.

    2016-03-01

    We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.

  4. Lindeberg theorem for Gibbs-Markov dynamics

    NASA Astrophysics Data System (ADS)

    Denker, Manfred; Senti, Samuel; Zhang, Xuan

    2017-12-01

    A dynamical array consists of a family of functions {f_{n,i}: 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i}: 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for sums of the form … for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ ℝ. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.

  5. Susceptible-infected-susceptible epidemics on networks with general infection and cure times.

    PubMed

    Cator, E; van de Bovenkamp, R; Van Mieghem, P

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.
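
    The "average number of infection attempts during a recovery time" can be estimated directly by Monte Carlo, as sketched below for exponential versus Weibull distributions with matched means. The rates and shape parameters are illustrative only; in the Markovian case the mean is simply beta/delta, while it generally shifts once the times are no longer exponential.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(2)

# Monte Carlo estimate of the expected number of infection attempts an infected
# node makes during one recovery period, when attempt inter-event times and the
# recovery time follow general distributions. All rates and shapes are invented.

def mean_attempts(attempt_gap, recovery_time, n_trials=20000):
    counts = np.empty(n_trials)
    for i in range(n_trials):
        r = recovery_time()
        t, k = attempt_gap(), 0
        while t <= r:
            k += 1
            t += attempt_gap()
        counts[i] = k
    return counts.mean()

beta, delta = 2.0, 1.0                     # nominal attempt and cure rates

# Markovian case: exponential gaps and recovery; mean attempts = beta/delta = 2.
exp_gap = lambda: rng.exponential(1.0 / beta)
exp_rec = lambda: rng.exponential(1.0 / delta)

# Non-Markovian case: Weibull(shape=2) gaps and recovery with the same means.
gap_scale = (1.0 / beta) / gamma(1.5)
rec_scale = (1.0 / delta) / gamma(1.5)
weib_gap = lambda: gap_scale * rng.weibull(2.0)
weib_rec = lambda: rec_scale * rng.weibull(2.0)

print("exponential gaps / recovery:", mean_attempts(exp_gap, exp_rec))
print("Weibull gaps / recovery    :", mean_attempts(weib_gap, weib_rec))
```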

  6. Susceptible-infected-susceptible epidemics on networks with general infection and cure times

    NASA Astrophysics Data System (ADS)

    Cator, E.; van de Bovenkamp, R.; Van Mieghem, P.

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.

  7. Mapping of uncertainty relations between continuous and discrete time

    NASA Astrophysics Data System (ADS)

    Chiuchiú, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  8. Mapping of uncertainty relations between continuous and discrete time.

    PubMed

    Chiuchiù, Davide; Pigolotti, Simone

    2018-03-01

    Lower bounds on fluctuations of thermodynamic currents depend on the nature of time, discrete or continuous. To understand the physical reason, we compare current fluctuations in discrete-time Markov chains and continuous-time master equations. We prove that current fluctuations in the master equations are always more likely, due to random timings of transitions. This comparison leads to a mapping of the moments of a current between discrete and continuous time. We exploit this mapping to obtain uncertainty bounds. Our results reduce the quests for uncertainty bounds in discrete and continuous time to a single problem.

  9. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
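
    The effect of limited time resolution can be illustrated with a small simulation: draw alternating exponential open and closed sojourns for a two-state channel and merge any sojourn shorter than the detection limit xi into the surrounding event (one simple convention, not the exact definition analyzed in the paper). The rates and xi below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state channel: open sojourns ~ Exp(k_c), closed sojourns ~ Exp(k_o).
# Sojourns shorter than a detection limit xi are not resolved; here we merge
# them into the surrounding event (a simple convention, for illustration only).

k_o, k_c = 5.0, 2.0          # opening and closing rates (made up)
xi = 0.05                    # detection limit (made up)

n_sojourns = 200000
states = np.tile([1, 0], n_sojourns // 2)               # 1 = open, 0 = closed
durs = np.where(states == 1,
                rng.exponential(1.0 / k_c, n_sojourns),
                rng.exponential(1.0 / k_o, n_sojourns))

apparent = []                                           # list of [state, duration]
for s, d in zip(states, durs):
    if apparent and (d < xi or apparent[-1][0] == s):
        apparent[-1][1] += d                            # merge unresolved / same-state
    else:
        apparent.append([s, d])

open_true = durs[states == 1]
open_obs = np.array([d for s, d in apparent if s == 1])
print("true mean open time     :", open_true.mean())    # ~ 1/k_c = 0.5
print("apparent mean open time :", open_obs.mean())     # longer, due to missed closures
```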

  10. Challenges in economic modeling of anticancer therapies: an example of modeling the survival benefit of olaparib maintenance therapy for patients with BRCA-mutated platinum-sensitive relapsed ovarian cancer.

    PubMed

    Hettle, Robert; Posnett, John; Borrill, John

    2015-01-01

    The aim of this paper is to describe a four health-state, semi-Markov model structure with health states defined by initiation of subsequent treatment, designed to make best possible use of the data available from a phase 2 clinical trial. The approach is illustrated using data from a sub-group of patients enrolled in a phase 2 clinical trial of olaparib maintenance therapy in patients with platinum-sensitive relapsed ovarian cancer and a BRCA mutation (NCT00753545). A semi-Markov model was developed with four health states: progression-free survival (PFS), first subsequent treatment (FST), second subsequent treatment (SST), and death. Transition probabilities were estimated by fitting survival curves to trial data for time from randomization to FST, time from FST to SST, and time from SST to death. Survival projections generated by the model are broadly consistent with the outcomes observed in the clinical trial. However, limitations of the trial data (small sample size, immaturity of the PFS and overall survival [OS] end-points, and treatment switching) create uncertainty in estimates of survival. The model framework offers a promising approach to evaluating cost-effectiveness of a maintenance therapy for patients with cancer, which may be generalizable to other chronic diseases.
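
    A bare-bones microsimulation of such a four-state structure is sketched below, with Weibull sojourn times in PFS, FST, and SST. The shapes and scales are purely illustrative placeholders, not values fitted to the trial data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Microsimulation of a four-state semi-Markov structure:
# PFS -> first subsequent treatment (FST) -> second subsequent treatment (SST) -> death.
# Each sojourn time is drawn from a Weibull distribution whose parameters
# (shape, scale in months) are purely illustrative.

sojourn_params = {
    "PFS": (1.3, 14.0),
    "FST": (1.1, 8.0),
    "SST": (0.9, 6.0),
}
path = ["PFS", "FST", "SST", "death"]

def simulate_patient():
    times = {}
    t = 0.0
    for state in path[:-1]:
        shape, scale = sojourn_params[state]
        t += scale * rng.weibull(shape)
        times[state] = t                 # time at which the state is left
    return times                         # times["SST"] is overall survival

n = 20000
records = [simulate_patient() for _ in range(n)]
pfs_times = np.array([r["PFS"] for r in records])
os_times = np.array([r["SST"] for r in records])

print("median PFS (months):", np.median(pfs_times))
print("median OS  (months):", np.median(os_times))
print("5-year OS rate     :", np.mean(os_times > 60.0))
```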

  11. Constructing 1/ω^α noise from reversible Markov chains

    NASA Astrophysics Data System (ADS)

    Erland, Sveinung; Greenwood, Priscilla E.

    2007-09-01

    This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results for aggregations of AR1-processes of C. W. J. Granger [J. Econometrics 14, 227 (1980)]. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has a long memory.

  12. Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks

    PubMed Central

    Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang

    2016-01-01

    This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into one single packet and transmits the packet to the remote filter via a lossy distinct channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained, which can guarantee the stability of the mean estimation error covariance. Especially, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and transition probabilities of the Markov chains. In contrast to the existing related results, our method imposes less restrictive conditions on systems. Finally, the results are illustrated by simulation examples. PMID:27104541
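
    The setting can be illustrated by simulating a scalar unstable system whose measurements pass through a two-state Markov (Gilbert-Elliott-style) loss channel, with a Kalman filter that skips the update when a packet is dropped. The system, noise, and channel parameters below are invented, and the closed-form stability inequality of the paper is not reproduced; the run simply shows the error covariance staying bounded for one arbitrary parameter choice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar system x_{k+1} = a x_k + w_k observed through a lossy channel whose
# packet arrivals follow a two-state Markov chain. When the packet is lost,
# the filter only propagates its prediction. All numbers are illustrative.

a, q, r = 1.2, 1.0, 1.0                 # unstable system, noise variances
P_trans = np.array([[0.9, 0.1],         # channel: 0 = packet received, 1 = lost
                    [0.4, 0.6]])

n = 5000
x, xhat, P = 0.0, 0.0, 1.0
channel = 0
err_cov = np.empty(n)

for k in range(n):
    # true system and measurement
    x = a * x + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r))

    # channel state evolves as a Markov chain
    channel = rng.choice(2, p=P_trans[channel])

    # prediction step
    xhat = a * xhat
    P = a * a * P + q

    # update only if the packet arrived
    if channel == 0:
        K = P / (P + r)
        xhat = xhat + K * (y - xhat)
        P = (1.0 - K) * P
    err_cov[k] = P

print("time-averaged estimation error covariance:", err_cov.mean())
print("max over the run:", err_cov.max())
```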

  13. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    PubMed

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.

  14. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  15. Real-time antenna fault diagnosis experiments at DSS 13

    NASA Technical Reports Server (NTRS)

    Mellstrom, J.; Pierson, C.; Smyth, P.

    1992-01-01

    Experimental results obtained when a previously described fault diagnosis system was run online in real time at the 34-m beam waveguide antenna at Deep Space Station (DSS) 13 are described. Experimental conditions and the quality of results are described. A neural network model and a maximum-likelihood Gaussian classifier are compared with and without a Markov component to model temporal context. At the rate of a state update every 6.4 seconds, over a period of roughly 1 hour, the neural-Markov system had zero errors (incorrect state estimates) while monitoring both faulty and normal operations. The overall results indicate that the neural-Markov combination is the most accurate model and has significant practical potential.

  16. Hidden Markov models for evolution and comparative genomics analysis.

    PubMed

    Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A

    2013-01-01

    The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the evolution of discrete states is generally used for the reconstruction of ancestral states. We modify this model to account for a case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is to formulate the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on the simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. prediction score) appears to be more accurate than the reconstruction from the discrete state assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.
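
    The core likelihood computation of such a tree HMM can be sketched with Felsenstein pruning in which leaf data enter as emission probabilities (soft state assignments) rather than hard states. The two-state model, rates, branch lengths, and emission values below are toy numbers, not output of the tHMM software.

```python
import numpy as np
from scipy.linalg import expm

# Two-state character (e.g., "has signal peptide" vs "does not"), evolving as a
# CTMC along a small rooted tree. Leaf data enter as emission probabilities
# P(score | state), i.e., soft assignments, instead of hard 0/1 states.
# All rates, branch lengths, and emission values are toy numbers.

Q = np.array([[-0.4, 0.4],
              [0.6, -0.6]])
pi_root = np.array([0.6, 0.4])

def P(t):                                # transition matrix over a branch of length t
    return expm(Q * t)

# Tree: root -> internal (t=0.3) -> leaves A (t=0.2), B (t=0.5); root -> leaf C (t=0.7).
# Emission likelihoods P(observed score | state) for each leaf (hypothetical).
emis = {
    "A": np.array([0.9, 0.2]),
    "B": np.array([0.3, 0.7]),
    "C": np.array([0.5, 0.5]),           # very uncertain prediction
}

L_A = emis["A"]
L_B = emis["B"]
L_internal = (P(0.2) @ L_A) * (P(0.5) @ L_B)     # pruning: combine children
L_root = (P(0.3) @ L_internal) * (P(0.7) @ emis["C"])

likelihood = pi_root @ L_root
posterior_root = pi_root * L_root / likelihood
print("tree likelihood      :", likelihood)
print("posterior root state :", posterior_root)
```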

  17. Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María.; Wiper, Michael P.

    2016-03-01

    A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive Gibbs Markov Chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and accordingly long-term decision-making strategies should be updated based on the anomalies of the nonstationary environment.

  18. Detection method of financial crisis in Indonesia using MSGARCH models based on banking condition indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, E.; Sari, S. P.

    2018-05-01

    Financial crises have hit Indonesia several times, creating the need for an early detection system to minimize their impact. One of many methods that can be used to detect a crisis is to model crisis indicators using a combination of volatility and Markov switching models [5]. Several indicators can be used to detect a financial crisis; three of them are the difference between the interest rates on deposits and lending, the real interest rate on deposits, and the difference between the real BI rate and the real Fed rate, which can be referred to as banking condition indicators. The volatility model is used to handle conditional variance that changes over time, and the combination of volatility and Markov switching models is used to detect regime changes in the data. The smoothed probability from the combined models can be used to detect the crisis. This research found that the best combined volatility and Markov switching models for the three indicators are MS-GARCH(3,1,1) models with a three-state assumption. The crises from mid-1997 until 1998 were successfully detected within a certain range of smoothed probability values for the three indicators.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, H.

    In this dissertation we study a procedure which restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). The applications of the theory include the study of the processes like piecewise-deterministic Markov process, virtual waiting time process and the first entrance decomposition (taboo probability).

  20. Retrospective estimation of breeding phenology of American Goldfinch (Carduelis tristis) using pattern oriented modeling

    EPA Science Inventory

    Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...

  1. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.

  2. Transformations based on continuous piecewise-affine velocity fields

    DOE PAGES

    Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...

    2017-01-11

    Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.

  3. Transformations Based on Continuous Piecewise-Affine Velocity Fields

    PubMed Central

    Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan; Fisher, Jonn W.

    2018-01-01

    We propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available. PMID:28092517
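
    A one-dimensional toy version of the construction is easy to write down: define a continuous piecewise-affine velocity field on [0, 1] that vanishes at the boundary and integrate it for unit time to obtain a monotonic, boundary-preserving warp. The sketch below uses a generic ODE solver rather than the fast closed-form cell-by-cell integration or GPU code the papers describe.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1D toy version: a continuous piecewise-affine velocity field on [0, 1]
# (two cells), integrated for unit time to obtain a smooth monotonic warp.

knots = np.array([0.0, 0.5, 1.0])
vel_at_knots = np.array([0.0, 0.8, 0.0])   # zero at the boundary keeps [0, 1] invariant

def velocity(t, x):
    # continuous piecewise-affine interpolation of the velocity field
    return np.interp(x, knots, vel_at_knots)

def warp(x0, T=1.0):
    sol = solve_ivp(velocity, (0.0, T), np.atleast_1d(x0), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

xs = np.linspace(0.0, 1.0, 11)
print(np.round(warp(xs), 4))      # a monotonic, boundary-preserving transformation
```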

  4. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading conditions and a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.

  5. Passive synchronization for Markov jump genetic oscillator networks with time-varying delays.

    PubMed

    Lu, Li; He, Bing; Man, Chuntao; Wang, Shun

    2015-04-01

    In this paper, the synchronization problem of coupled Markov jump genetic oscillator networks with time-varying delays and external disturbances is investigated. By introducing the drive-response concept, a novel mode-dependent control scheme is proposed, which guarantees that the synchronization can be achieved. By applying the Lyapunov-Krasovskii functional method and stochastic analysis, sufficient conditions are established based on passivity theory in terms of linear matrix inequalities. A numerical example is provided to demonstrate the effectiveness of our theoretical results. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    PubMed Central

    Chevalier, Michael W.; El-Samad, Hana

    2014-01-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled. PMID:25481130

  7. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    NASA Astrophysics Data System (ADS)

    Chevalier, Michael W.; El-Samad, Hana

    2014-12-01

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  8. Hidden Semi-Markov Models and Their Application

    NASA Astrophysics Data System (ADS)

    Beyreuther, M.; Wassermann, J.

    2008-12-01

    In the framework of detection and classification of seismic signals there are several different approaches. Our choice for a more robust detection and classification algorithm is to adopt Hidden Markov Models (HMM), a technique showing major success in speech recognition. HMM provide a powerful tool to describe highly variable time series based on a double stochastic model and therefore allow for a broader class description than, e.g., template based pattern matching techniques. Being a fully probabilistic model, HMM directly provide a confidence measure of an estimated classification. Furthermore, and in contrast to classic artificial neural networks or support vector machines, HMM incorporate the time dependence explicitly in the models, thus providing an adequate representation of the seismic signal. As with the majority of detection algorithms, HMM are not based on the time- and amplitude-dependent seismogram itself but on features estimated from the seismogram which characterize the different classes. Features, or in other words characteristic functions, are, e.g., the sonogram bands, instantaneous frequency, instantaneous bandwidth or centroid time. In this study we apply continuous Hidden Semi-Markov Models (HSMM), an extension of continuous HMM. The duration probability of an HMM is an exponentially decaying function of time, which is not a realistic representation of the duration of an earthquake. In contrast, HSMM use Gaussians as duration probabilities, which results in a more adequate model. The HSMM detection and classification system is running online as an EARTHWORM module at the Bavarian Earthquake Service. Here the signals that are to be classified simply differ in epicentral distance. This makes it possible to easily decide whether a classification is correct or wrong and thus allows a better evaluation of the advantages and disadvantages of the proposed algorithm. The evaluation is based on several months of continuous data, and the results are additionally compared to the previously published discrete HMM, continuous HMM and a classic STA/LTA. The intermediate evaluation results are very promising.

  9. Dynamical AdS strings across horizons

    DOE PAGES

    Ishii, Takaaki; Murata, Keiju

    2016-03-01

    We examine the nonlinear classical dynamics of a fundamental string in anti-de Sitter spacetime. The string is dual to the flux tube between an external quark-antiquark pair in N = 4 super Yang-Mills theory. We perturb the string by shaking the endpoints and compute its time evolution numerically. We find that with sufficiently strong perturbations the string continues extending and plunges into the Poincaré horizon. In the evolution, effective horizons are also dynamically created on the string worldsheet. The quark and antiquark are thus causally disconnected, and the string transitions to two straight strings. The forces acting on the endpoints vanish with a power law whose slope depends on the perturbations. Lastly, the condition for this transition to occur is that the energy injection exceeds the static energy between the quark-antiquark pair.

  10. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts.

    PubMed

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
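
    The bursting regime itself is easy to reproduce with a Gillespie simulation of the standard ON/OFF (telegraph) gene expression model: when activation is rare, the ON state is short-lived, and transcription while ON is fast, mRNA is produced in bursts and the Fano factor greatly exceeds one. The parameters below are illustrative, and the code does not implement the model-reduction machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gillespie simulation of the standard ON/OFF (telegraph) model of gene
# expression in a bursty regime: the gene switches ON rarely, stays ON only
# briefly, and transcribes quickly while ON, so mRNA appears in bursts.

k_on, k_off = 0.1, 10.0     # slow activation, fast inactivation
k_tx, k_deg = 100.0, 1.0    # fast transcription while ON, mRNA degradation

t, T_end = 0.0, 200.0
gene_on, mrna = 0, 0
times, counts = [0.0], [0]

while t < T_end:
    rates = np.array([
        k_on if gene_on == 0 else 0.0,   # gene turns ON
        k_off if gene_on == 1 else 0.0,  # gene turns OFF
        k_tx if gene_on == 1 else 0.0,   # transcription
        k_deg * mrna,                    # degradation
    ])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    reaction = rng.choice(4, p=rates / total)
    if reaction == 0:
        gene_on = 1
    elif reaction == 1:
        gene_on = 0
    elif reaction == 2:
        mrna += 1
    else:
        mrna -= 1
    times.append(t)
    counts.append(mrna)

# Time-weighted statistics of the mRNA copy number.
times, counts = np.array(times), np.array(counts)
dwell = np.clip(np.diff(np.append(times, T_end)), 0.0, None)
mean = np.average(counts, weights=dwell)
var = np.average((counts - mean) ** 2, weights=dwell)
print("time-averaged mRNA mean:", mean)
print("Fano factor (burstiness indicator):", var / mean)
```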

  11. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    NASA Astrophysics Data System (ADS)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.

  12. Exact Solutions to Time-dependent Mdps

    NASA Technical Reports Server (NTRS)

    Boyan, Justin A.; Littman, Michael L.

    2000-01-01

    We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.

  13. A fast exact simulation method for a class of Markov jump processes.

    PubMed

    Li, Yao; Hu, Lili

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.

  14. Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information.

    PubMed

    Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde

    2017-01-01

    In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to controller precisely. Then the controller selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies to be asymptotically synchronized. Compared with existing results on Markov systems which are obtained by separately using mode-dependent and mode-independent methods, our result has great flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled through ordinary differential equations, which are piecewise continuous, and switch between different states according to a Markov jump process known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ɛ⁻¹. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ɛ).
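
    A minimal sketch of simulating a piecewise deterministic Markov process of the kind described: a discrete component switches between two states at constant rates (an assumption made for brevity; in the Morris-Lecar setting the switching rates would depend on the voltage), and the continuous variable follows a different linear ODE in each state.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative two-state PDMP: dv/dt = -v + b[s], with s switching at constant rates.
    b = {0: 0.0, 1: 2.0}            # drive level in each discrete state (assumption)
    switch_rate = {0: 5.0, 1: 5.0}  # fast switching, of order 1/epsilon

    def simulate_pdmp(t_end=10.0, dt=1e-3):
        t, v, s = 0.0, 0.0, 0
        t_next_switch = rng.exponential(1.0 / switch_rate[s])
        ts, vs = [t], [v]
        while t < t_end:
            if t >= t_next_switch:                  # jump of the discrete component
                s = 1 - s                           # (jump times resolved to dt accuracy)
                t_next_switch = t + rng.exponential(1.0 / switch_rate[s])
            v += dt * (-v + b[s])                   # Euler step of the piecewise ODE
            t += dt
            ts.append(t)
            vs.append(v)
        return np.array(ts), np.array(vs)

    ts, vs = simulate_pdmp()
    print("time average of v:", vs.mean())  # approaches the averaged (fast-switching) ODE value
    ```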

  16. A study on the real-time reliability of on-board equipment of train control system

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system for the purpose of guaranteeing continuous train operation. According to the inherent characteristics of the on-board equipment, the connotation of reliability evaluation of on-board equipment is defined and the evaluation index of real-time reliability is provided in this paper. From the perspective of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method of evaluating the real-time reliability of on-board equipment at component level based on a Hidden Markov Model (HMM) is proposed. In this method, the performance degradation data is used directly to realize accurate perception of the hidden state transition process of on-board equipment, which can achieve a better description of the real-time reliability of the equipment.

  17. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk

    PubMed Central

    Wei, Shaoceng; Kryscio, Richard J.

    2015-01-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient, cognitive states and death as a competing risk (Figure 1). Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a Semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001

  18. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    PubMed

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.
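
    A minimal sketch of drawing one path from a semi-Markov process of the type described: Weibull sojourn times in the transient cognitive states (exponential, i.e. Weibull with shape 1, from the baseline state) and an embedded transition matrix for the jumps. All numerical values are illustrative assumptions, not estimates from the Nun Study, and the interval censoring of the assessments is not modelled here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    states = ["intact", "MCI", "GI", "dementia", "death"]   # 3 transient, 2 absorbing
    # Embedded transition probabilities (rows = current state; illustrative assumptions only).
    P = np.array([
        [0.00, 0.60, 0.10, 0.10, 0.20],   # intact (baseline)
        [0.30, 0.00, 0.40, 0.20, 0.10],   # mild cognitive impairment (back transitions allowed)
        [0.05, 0.25, 0.00, 0.50, 0.20],   # global impairment
    ])
    # Weibull (shape, scale) sojourn times per transient state; shape 1.0 = exponential.
    weibull = [(1.0, 4.0), (1.5, 3.0), (1.5, 2.0)]

    def sample_path(max_time=30.0):
        s, t, path = 0, 0.0, [(0.0, "intact")]
        while s < 3 and t < max_time:                 # stop once an absorbing state is reached
            shape, scale = weibull[s]
            t += scale * rng.weibull(shape)           # sojourn time in the current state
            s = int(rng.choice(5, p=P[s]))            # embedded jump
            path.append((round(t, 2), states[s]))
        return path

    print(sample_path())
    ```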

  19. NonMarkov Ito Processes with 1-state memory

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
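
    For reference, the Chapman-Kolmogorov equation at issue here, written for a two-point transition density in its standard textbook form (restated rather than copied from the paper); the abstract's point is that this identity can be satisfied even when the underlying Ito process is not Markov.

    ```latex
    % Chapman-Kolmogorov equation for the transition density, with ordered times t_1 < t_2 < t_3:
    p(x_3, t_3 \mid x_1, t_1) \;=\; \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2
    ```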

  20. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
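
    The tests in this record condition on the sufficient statistics (the transition counts). The sketch below deliberately simplifies that to an ordinary parametric bootstrap from the fitted first-order chain, using a Pearson-type statistic on second-order counts; it illustrates the Monte Carlo logic, not the authors' exact conditional test, and the binary data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def first_order_mle(seq, k):
        """Maximum-likelihood transition matrix of a first-order chain (tiny smoothing
        keeps rows valid probability vectors even if a state is never left)."""
        P = np.full((k, k), 1e-12)
        for a, b in zip(seq[:-1], seq[1:]):
            P[a, b] += 1
        return P / P.sum(axis=1, keepdims=True)

    def second_order_stat(seq, P):
        """Pearson-type statistic of second-order transition counts against the
        first-order fit (cells with zero observed count are omitted for brevity)."""
        pair_counts, hist_totals = {}, {}
        for i in range(2, len(seq)):
            h, c = (seq[i - 2], seq[i - 1]), seq[i]
            pair_counts[(h, c)] = pair_counts.get((h, c), 0) + 1
            hist_totals[h] = hist_totals.get(h, 0) + 1
        stat = 0.0
        for (h, c), n in pair_counts.items():
            expected = hist_totals[h] * P[h[1], c]
            stat += (n - expected) ** 2 / expected
        return stat

    def bootstrap_pvalue(seq, k, n_sim=500):
        P = first_order_mle(seq, k)
        obs = second_order_stat(seq, P)
        exceed = 0
        for _ in range(n_sim):
            sim = [seq[0]]
            for _ in range(len(seq) - 1):
                sim.append(int(rng.choice(k, p=P[sim[-1]])))
            exceed += second_order_stat(sim, first_order_mle(sim, k)) >= obs
        return exceed / n_sim

    data = [int(x) for x in rng.integers(0, 2, size=400)]   # synthetic binary sequence
    print("bootstrap p-value for 'a first-order chain fits':", bootstrap_pvalue(data, k=2))
    ```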

  1. ModFossa: A library for modeling ion channels using Python.

    PubMed

    Ferneyhough, Gareth B; Thibealut, Corey M; Dascalu, Sergiu M; Harris, Frederick C

    2016-06-01

    The creation and simulation of ion channel models using continuous-time Markov processes is a powerful and well-used tool in the field of electrophysiology and ion channel research. While several software packages exist for the purpose of ion channel modeling, most are GUI based, and none are available as a Python library. In an attempt to provide an easy-to-use, yet powerful Markov model-based ion channel simulator, we have developed ModFossa, a Python library supporting easy model creation and stimulus definition, complete with a fast numerical solver, and attractive vector graphics plotting.
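
    ModFossa's own API is not reproduced here. The sketch below shows the generic computation such a simulator performs: given a generator (rate) matrix Q for a hypothetical three-state channel, the state occupancies at time t follow from the matrix exponential of Q; the states, rates, and initial condition are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state channel C <-> O -> I, rates in 1/ms (illustrative, not a fitted model).
    k_co, k_oc, k_oi = 2.0, 1.0, 0.2
    Q = np.array([
        [-k_co,            k_co,  0.0 ],
        [ k_oc, -(k_oc + k_oi),   k_oi],
        [  0.0,             0.0,  0.0 ],   # inactivated state treated as absorbing here
    ])

    p0 = np.array([1.0, 0.0, 0.0])         # all channels start closed

    for t in (0.5, 2.0, 10.0):             # ms
        p_t = p0 @ expm(Q * t)             # row vector times the matrix exponential of the generator
        print(f"t = {t:4} ms   P(closed, open, inactivated) = {np.round(p_t, 3)}")
    ```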

  2. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    NASA Astrophysics Data System (ADS)

    Gan, Tingyue; Cameron, Maria

    2017-06-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  3. A fast exact simulation method for a class of Markov jump processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.

  4. Necessary conditions for weighted mean convergence of Lagrange interpolation for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.; Kwon, K. H.

    2001-07-01

    Given a continuous real-valued function f which vanishes outside a fixed finite interval, we establish necessary conditions for weighted mean convergence of Lagrange interpolation for a general class of even weights w which are of exponential decay on the real line or at the endpoints of (-1,1).

  5. Pitch angle scattering of relativistic electrons from stationary magnetic waves: Continuous Markov process and quasilinear theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemons, Don S.

    2012-01-15

    We develop a Markov process theory of charged particle scattering from stationary, transverse, magnetic waves. We examine approximations that lead to quasilinear theory, in particular the resonant diffusion approximation. We find that, when appropriate, the resonant diffusion approximation simplifies the result of the weak turbulence approximation without significantly further restricting the regime of applicability. We also explore a theory generated by expanding drift and diffusion rates in terms of a presumed small correlation time. This small correlation time expansion leads to results valid for relatively small pitch angle and large wave energy density - a regime that may govern pitch angle scattering of high-energy electrons into the geomagnetic loss cone.

  6. Integrated Thermal Response Modeling System For Hypersonic Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Chen, Y.-K.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2000-01-01

    We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.

  7. Attention Switching during Scene Perception: How Goals Influence the Time Course of Eye Movements across Advertisements

    ERIC Educational Resources Information Center

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-01-01

    Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a…

  8. Mind wandering at the fingertips: automatic parsing of subjective states based on response time variability

    PubMed Central

    Bastian, Mikaël; Sackur, Jérôme

    2013-01-01

    Research from the last decade has successfully used two kinds of thought reports in order to assess whether the mind is wandering: random thought-probes and spontaneous reports. However, none of these two methods allows any assessment of the subjective state of the participant between two reports. In this paper, we present a step by step elaboration and testing of a continuous index, based on response time variability within Sustained Attention to Response Tasks (N = 106, for a total of 10 conditions). We first show that increased response time variability predicts mind wandering. We then compute a continuous index of response time variability throughout full experiments and show that the temporal position of a probe relative to the nearest local peak of the continuous index is predictive of mind wandering. This suggests that our index carries information about the subjective state of the subject even when he or she is not probed, and opens the way for on-line tracking of mind wandering. Finally we proceed a step further and infer the internal attentional states on the basis of the variability of response times. To this end we use the Hidden Markov Model framework, which allows us to estimate the durations of on-task and off-task episodes. PMID:24046753

  9. Voluntary Control of Residual Antagonistic Muscles in Transtibial Amputees: Feedforward Ballistic Contractions and Implications for Direct Neural Control of Powered Lower Limb Prostheses.

    PubMed

    Huang, Stephanie; Huang, He

    2018-04-01

    Discrete, rapid (i.e., ballistic-like) muscle activation patterns have been observed in ankle muscles (i.e., plantar flexors and dorsiflexors) of able-bodied individuals during voluntary posture control. This observation motivated us to investigate whether transtibial amputees are capable of generating such a ballistic-like activation pattern accurately using their residual ankle muscles in order to assess whether the volitional postural control of a powered ankle prosthesis using proportional myoelectric control via residual muscles could be feasible. In this paper, we asked ten transtibial amputees to generate ballistic-like activation patterns using their residual lateral gastrocnemius and residual tibialis anterior to control a computer cursor via proportional myoelectric control to hit targets positioned at 20% and 40% of maximum voluntary contraction of the corresponding residual muscle. During practice conditions, we asked amputees to hit a single target repeatedly. During testing conditions, we asked amputees to hit a random sequence of targets. We compared movement time to target and end-point accuracy. We also examined motor recruitment synchronization via time-frequency representations of residual muscle activation. The results showed that median end-point error ranged from -0.6% to 1% maximum voluntary contraction across subjects during practice, which was significantly lower compared to testing. Average movement time for all amputees was 242 ms during practice and 272 ms during testing. Motor recruitment synchronization varied across subjects, and amputees with the highest synchronization achieved the fastest movement times. End-point accuracy was independent of movement time. Results suggest that it is feasible for transtibial amputees to generate ballistic control signals using their residual muscles. Future work on volitional control of powered ankle prostheses might consider anticipatory postural control based on ballistic-like residual muscle activation patterns and direct continuous proportional myoelectric control.

  10. A Hybrid Secure Scheme for Wireless Sensor Networks against Timing Attacks Using Continuous-Time Markov Chain and Queueing Model.

    PubMed

    Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin

    2016-09-28

    Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model are put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained.
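
    As a small worked illustration of a "mean time to failure" computation on a CTMC (with a made-up generator, not the model of this paper): for transient states T and an absorbing failure state, the vector m of mean absorption times solves Q_T m = -1, where Q_T is the generator restricted to T.

    ```python
    import numpy as np

    # Three illustrative transient states (healthy, attacked, degraded) plus one absorbing
    # failed state; the rates are assumptions chosen for the example.
    Q = np.array([
        [-0.30,  0.20,  0.10,  0.00],   # healthy
        [ 0.50, -0.80,  0.20,  0.10],   # attacked
        [ 0.10,  0.00, -0.40,  0.30],   # degraded
        [ 0.00,  0.00,  0.00,  0.00],   # failed (absorbing)
    ])

    Q_T = Q[:3, :3]                               # generator restricted to the transient states
    mttf = np.linalg.solve(Q_T, -np.ones(3))      # solves Q_T m = -1
    for name, m in zip(["healthy", "attacked", "degraded"], mttf):
        print(f"mean time to failure starting from {name}: {m:.2f}")
    ```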

  11. A Hybrid Secure Scheme for Wireless Sensor Networks against Timing Attacks Using Continuous-Time Markov Chain and Queueing Model

    PubMed Central

    Meng, Tianhui; Li, Xiaofan; Zhang, Sha; Zhao, Yubin

    2016-01-01

    Wireless sensor networks (WSNs) have recently gained popularity for a wide spectrum of applications. Monitoring tasks can be performed in various environments. This may be beneficial in many scenarios, but it certainly exhibits new challenges in terms of security due to increased data transmission over the wireless channel with potentially unknown threats. Among possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Moreover, the limited energy and memory resources prohibit the use of complex security mechanisms in such systems. Therefore, balancing between security and the associated energy consumption becomes a crucial challenge. This paper proposes a secure scheme for WSNs while maintaining the requirement of the security-performance tradeoff. In order to proceed to a quantitative treatment of this problem, a hybrid continuous-time Markov chain (CTMC) and queueing model are put forward, and the tradeoff analysis of the security and performance attributes is carried out. By extending and transforming this model, the mean time to security attributes failure is evaluated. Through tradeoff analysis, we show that our scheme can enhance the security of WSNs, and the optimal rekeying rate of the performance and security tradeoff can be obtained. PMID:27690042

  12. Alteration of neural action potential patterns by axonal stimulation: the importance of stimulus location.

    PubMed

    Crago, Patrick E; Makowski, Nathaniel S

    2014-10-01

    Stimulation of peripheral nerves is often superimposed on ongoing motor and sensory activity in the same axons, without a quantitative model of the net action potential train at the axon endpoint. We develop a model of action potential patterns elicited by superimposing constant frequency axonal stimulation on the action potentials arriving from a physiologically activated neural source. The model includes interactions due to collision block, resetting of the neural impulse generator, and the refractory period of the axon at the point of stimulation. Both the mean endpoint firing rate and the probability distribution of the action potential firing periods depend strongly on the relative firing rates of the two sources and the intersite conduction time between them. When the stimulus rate exceeds the neural rate, neural action potentials do not reach the endpoint and the rate of endpoint action potentials is the same as the stimulus rate, regardless of the intersite conduction time. However, when the stimulus rate is less than the neural rate, and the intersite conduction time is short, the two rates partially sum. Increases in stimulus rate produce non-monotonic increases in endpoint rate and continuously increasing block of neurally generated action potentials. Rate summation is reduced and more neural action potentials are blocked as the intersite conduction time increases. At long intersite conduction times, the endpoint rate simplifies to being the maximum of either the neural or the stimulus rate. This study highlights the potential of increasing the endpoint action potential rate and preserving neural information transmission by low rate stimulation with short intersite conduction times. Intersite conduction times can be decreased with proximal stimulation sites for muscles and distal stimulation sites for sensory endings. The model provides a basis for optimizing experiments and designing neuroprosthetic interventions involving motor or sensory stimulation.

  13. Phase 3 randomised study of the proposed biosimilar adalimumab GP2017 in psoriasis - impact of multiple switches.

    PubMed

    Blauvelt, A; Lacour, J-P; Fowler, J F; Weinberg, J M; Gospodinov, D; Schuck, E; Jauch-Lembach, J; Balfour, A; Leonardi, C L

    2018-06-19

    The impact of multiple switches between GP2017 and reference adalimumab (ref-ADMB) was assessed following the demonstration of equivalent efficacy and similar safety and immunogenicity, in adult patients with active, clinically stable, moderate-to-severe plaque psoriasis. This 51-week double-blinded, phase 3 study randomly assigned patients to GP2017 (N=231) or ref-ADMB (N=234) 80 mg subcutaneously at Week 0, then 40 mg biweekly from Week 1. At Week 17, patients were re-randomised to switch (n=126) or continue (n=253) treatment. Primary endpoint: patients achieving Psoriasis Area and Severity Index (PASI)75 at Week 16 (equivalence confirmed if the 95% confidence interval [CI] for the difference in PASI75 between treatments was within ±18%). Key secondary endpoint: change from baseline to Week 16 in continuous PASI. Other endpoints: PASI over time, PASI 50/75/90/100, pharmacokinetics, safety, tolerability and immunogenicity for the switched and continued treatment groups. Equivalent efficacy between GP2017 and ref-ADMB was confirmed for the primary (66.8% and 65.0%, respectively; 95% CI, -7.46, 11.15) and key secondary (-60.7% and -61.5%, respectively; 95% CI, -3.15, 4.84) endpoints. PASI improved over time and was similar between treatment groups at Week 16, and the switched/continued groups from Weeks 17-51. There were no relevant safety or immunogenicity differences between GP2017 and ref-ADMB at Week 16, or the switched/continued groups from Weeks 17-51. No hypersensitivity to adalimumab was reported upon switching. Following the demonstration of GP2017 biosimilarity to ref-ADMB, switching up to four times between GP2017 and ref-ADMB had no detectable impact on efficacy, safety or immunogenicity. This article is protected by copyright. All rights reserved.

  14. Optimized nested Markov chain Monte Carlo sampling: theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
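
    A schematic of the nested (composite-move) idea on a toy one-dimensional "potential", with the isothermal-isobaric details stripped away: an inner Metropolis chain samples a cheap reference potential, and the composite move over the intervening steps is accepted using only the full-minus-reference energy difference at the two endpoints. The toy potentials, temperature, and step sizes are assumptions for illustration, not the authors' setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    beta = 1.0

    def e_full(x):                   # "expensive" target potential (toy stand-in)
        return 0.5 * x**2 + 0.25 * x**4

    def e_ref(x):                    # cheap reference potential approximating e_full
        return 0.6 * x**2

    def nested_mcmc(n_outer=2000, n_inner=20, step=0.5):
        x, samples = 0.0, []
        for _ in range(n_outer):
            # Inner chain: ordinary Metropolis targeting the reference Boltzmann weight.
            y = x
            for _ in range(n_inner):
                prop = y + rng.normal(0.0, step)
                if rng.random() < np.exp(min(0.0, -beta * (e_ref(prop) - e_ref(y)))):
                    y = prop
            # Composite move x -> y: modified Metropolis test using only endpoint energies.
            log_acc = -beta * ((e_full(y) - e_ref(y)) - (e_full(x) - e_ref(x)))
            if rng.random() < np.exp(min(0.0, log_acc)):
                x = y
            samples.append(x)
        return np.array(samples)

    chain = nested_mcmc()
    print("sample mean and variance under the full potential:", chain.mean(), chain.var())
    ```

    The endpoint-only acceptance rule works because the inner Metropolis chain satisfies detailed balance with respect to the reference distribution, so the composite proposal ratio reduces to the reference Boltzmann factor.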

  15. The Markov process admits a consistent steady-state thermodynamic formalism

    NASA Astrophysics Data System (ADS)

    Peng, Liangrong; Zhu, Yi; Hong, Liu

    2018-01-01

    The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.

  16. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.

  17. Time-Resolved Magneto-Optical Imaging of Superconducting YBCO Thin Films in the High-Frequency AC Current Regime

    NASA Astrophysics Data System (ADS)

    Frey, Alexander


  18. A master equation and moment approach for biochemical systems with creation-time-dependent bimolecular rate functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chevalier, Michael W., E-mail: Michael.Chevalier@ucsf.edu; El-Samad, Hana, E-mail: Hana.El-Samad@ucsf.edu

    Noise and stochasticity are fundamental to biology and derive from the very nature of biochemical reactions, where thermal motion of molecules translates into randomness in the sequence and timing of reactions. This randomness leads to cell-to-cell variability even in clonal populations. Stochastic biochemical networks have been traditionally modeled as continuous-time discrete-state Markov processes whose probability density functions evolve according to a chemical master equation (CME). In diffusion reaction systems on membranes, the Markov formalism, which assumes constant reaction propensities, is not directly appropriate. This is because the instantaneous propensity for a diffusion reaction to occur depends on the creation times of the molecules involved. In this work, we develop a chemical master equation for systems of this type. While this new CME is computationally intractable, we make rational dimensional reductions to form an approximate equation, whose moments are also derived and are shown to yield efficient, accurate results. This new framework forms a more general approach than the Markov CME and expands upon the realm of possible stochastic biochemical systems that can be efficiently modeled.

  19. An Improved Model of Nonuniform Coleochaete Cell Division.

    PubMed

    Wang, Yuandi; Cong, Jinyu

    2016-08-01

    Cell division is a key biological process in which cells divide forming new daughter cells. In the present study, we investigate continuously how a Coleochaete cell divides by introducing a modified differential equation model in parametric equation form. We discuss both the influence of "dead" cells and the effects of various end-points on the formation of the new cells' boundaries. We find that the boundary condition on the free end-point is different from that on the fixed end-point; the former has a direction perpendicular to the surface. It is also shown that the outer boundaries of new cells are arc-shaped. The numerical experiments and theoretical analyses for this model to construct the outer boundary are given.

  20. Open Markov Processes and Reaction Networks

    NASA Astrophysics Data System (ADS)

    Swistock Pollard, Blake Stephen

    We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.

  1. Predictive Rate-Distortion for Infinite-Order Markov Processes

    NASA Astrophysics Data System (ADS)

    Marzen, Sarah E.; Crutchfield, James P.

    2016-06-01

    Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.

  2. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well-understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
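
    A small sketch of the Dirichlet-multinomial machinery behind this kind of Bayesian treatment for a first-order chain: the posterior mean transition matrix and the log-evidence follow in closed form from the transition counts, and comparing evidences across orders is the model-order selection step (going to order k only changes how histories are indexed). The prior strength and the toy data are assumptions.

    ```python
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(9)

    def markov_evidence(seq, k, alpha=1.0):
        """Posterior mean transition matrix and log-evidence of a first-order chain with
        independent Dirichlet(alpha) priors on each row (Dirichlet-multinomial, closed form)."""
        C = np.zeros((k, k))
        for a, b in zip(seq[:-1], seq[1:]):
            C[a, b] += 1
        post_mean = (C + alpha) / (C + alpha).sum(axis=1, keepdims=True)
        log_ev = sum(gammaln(k * alpha) - gammaln(k * alpha + row.sum())
                     + np.sum(gammaln(alpha + row) - gammaln(alpha)) for row in C)
        return post_mean, log_ev

    def iid_evidence(seq, k, alpha=1.0):
        """Log-evidence of the order-0 (i.i.d.) model on the same symbols (the first
        symbol is dropped so that both models condition on the same data)."""
        n = np.bincount(seq[1:], minlength=k).astype(float)
        return (gammaln(k * alpha) - gammaln(k * alpha + n.sum())
                + np.sum(gammaln(alpha + n) - gammaln(alpha)))

    # Toy data from a known 2-state chain (illustrative, not an example from the paper).
    P_true = np.array([[0.9, 0.1], [0.4, 0.6]])
    seq = [0]
    for _ in range(2000):
        seq.append(int(rng.choice(2, p=P_true[seq[-1]])))

    post_mean, log_ev1 = markov_evidence(seq, k=2)
    print("posterior mean transition matrix:\n", np.round(post_mean, 3))
    print("log-evidence: order 1 =", round(log_ev1, 1), " order 0 =", round(iid_evidence(seq, k=2), 1))
    ```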

  3. Convergence yet Continued Complexity: A Systematic Review and Critique of Health Economic Models of Relapsing-Remitting Multiple Sclerosis in the United Kingdom.

    PubMed

    Allen, Felicity; Montgomery, Stephen; Maruszczak, Maciej; Kusel, Jeanette; Adlard, Nicholas

    2015-09-01

    Several disease-modifying therapies have marketing authorizations for the treatment of relapsing-remitting multiple sclerosis (RRMS). Given their appraisal by the National Institute for Health and Care Excellence, the objective was to systematically identify and critically evaluate the structures and assumptions used in health economic models of disease-modifying therapies for RRMS in the United Kingdom. Embase, MEDLINE, The Cochrane Library, and the National Institute for Health and Care Excellence Web site were searched systematically on March 3, 2014, to identify articles relating to health economic models in RRMS with a UK perspective. Data sources, techniques, and assumptions of the included models were extracted, compared, and critically evaluated. Of 386 results, 26 full texts were evaluated, leading to the inclusion of 18 articles (relating to 12 models). Early models varied considerably in method and structure, but convergence over time toward a Markov model with states based on disability score, a 1-year cycle length, and a lifetime time horizon was apparent. Recent models also allowed for disability improvement within the natural history of the condition. Considerable variety remains, with increasing numbers of comparators, the need for treatment sequencing, and different assumptions around efficacy waning and treatment withdrawal. Despite convergence over time to a similar Markov structure, there are still significant discrepancies between health economic models of RRMS in the United Kingdom. Differing methods, assumptions, and data sources render the comparison of model implementation and results problematic. The commonly used Markov structure leads to problems such as incapability to deal with heterogeneous populations and multiplying complexity with the addition of treatment sequences; these would best be solved by using alternative models such as discrete event simulations. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  4. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    DTIC Science & Technology

    2016-05-12

    Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously-valued time series from a sample. (A practical algorithm to compute the estimator is a work in progress.) Third, finitely-valued spatial processes... Keywords: mathematical statistics; time series; Markov chains; random...

  5. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.

  6. Characterization of the rat exploratory behavior in the elevated plus-maze with Markov chains.

    PubMed

    Tejada, Julián; Bosco, Geraldine G; Morato, Silvio; Roque, Antonio C

    2010-11-30

    The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work, we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns, which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze. Copyright © 2010 Elsevier B.V. All rights reserved.
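
    A minimal sketch of the bookkeeping described: the maze is discretized into states and the stochastic matrix is estimated by counting transitions in the recorded position sequence. The five coarse labels and the example sequence below are simplifications for illustration, not the paper's square-by-square discretization or data.

    ```python
    import numpy as np

    # Coarse state labels (a simplification of the full square-by-square discretization):
    states = ["centre", "open_arm_prox", "open_arm_dist", "closed_arm_prox", "closed_arm_dist"]

    def stochastic_matrix(visits, n_states):
        """Row-normalised transition-count matrix from a recorded sequence of states."""
        C = np.zeros((n_states, n_states))
        for a, b in zip(visits[:-1], visits[1:]):
            C[a, b] += 1
        with np.errstate(invalid="ignore"):
            P = C / C.sum(axis=1, keepdims=True)
        return np.nan_to_num(P)                   # states never left give all-zero rows

    # A made-up short observation sequence (illustrative only):
    visits = [0, 3, 4, 3, 0, 1, 0, 3, 4, 4, 3, 0, 3, 3, 4, 3, 0, 1, 2, 1, 0, 3]
    print(np.round(stochastic_matrix(visits, len(states)), 2))
    ```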

  7. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  8. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  9. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  10. A Real-Time Cardiac Arrhythmia Classification System with Wearable Sensor Networks

    PubMed Central

    Hu, Sheng; Wei, Hongxing; Chen, Youdong; Tan, Jindong

    2012-01-01

    Long-term continuous monitoring of the electrocardiogram (ECG) in a free-living environment provides valuable information for prevention of heart attacks and other high-risk diseases. This paper presents the design of a real-time wearable ECG monitoring system with associated cardiac arrhythmia classification algorithms. One of the striking advantages is that the ECG analog front-end and on-node digital processing are designed to remove most of the noise and bias. In addition, the wearable sensor node is able to monitor the patient's ECG and motion signal in an unobtrusive way. To realize real-time medical analysis, the ECG is digitized and transmitted to a smart phone via Bluetooth. On the smart phone, the ECG waveform is visualized and a novel layered hidden Markov model is seamlessly integrated to classify multiple cardiac arrhythmias in real time. Experimental results demonstrate that a clean and reliable ECG waveform can be captured under multiple stress conditions and that the real-time classification of cardiac arrhythmias is competitive with other workbenches. PMID:23112746

  11. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934

  12. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.
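
    A small worked example of the parallel-serial-parallel composition described above, with made-up numbers and an exponential-lifetime assumption for each node: node reliabilities within a cluster combine in parallel, clusters along a route in series, and routes to the sink in parallel.

    ```python
    import numpy as np

    def node_reliability(t, mttf):
        """Exponential lifetime assumption: R(t) = exp(-t / MTTF)."""
        return np.exp(-t / mttf)

    def wsn_reliability(t, mttf=100.0, nodes_per_cluster=5, clusters_per_route=4, n_routes=3):
        r_node = node_reliability(t, mttf)
        r_cluster = 1.0 - (1.0 - r_node) ** nodes_per_cluster    # redundant nodes in parallel
        r_route = r_cluster ** clusters_per_route                # clusters along a route in series
        return 1.0 - (1.0 - r_route) ** n_routes                 # alternative routes in parallel

    for t in (10, 50, 100, 200):
        print(f"t = {t:4}   R_WSN = {wsn_reliability(t):.3f}")
    ```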

  13. Stochastic Adaptive Estimation and Control.

    DTIC Science & Technology

    1994-10-26

    Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems ," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems ; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete- time Markov processes. Our exposition covers from finite to Borel state and action spaces and

  14. A novel framework to simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. Hereby, MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment. Hereby, the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs observed by transport models often in form of the long-tailing of travel time and residence time distributions can be efficiently explained by non-stationarity either of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short (event) and longer-term (inter-event) and wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
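
    A minimal sketch of generating data from a two-state Markov Switching Autoregressive model of the kind the paper fits (all parameters are illustrative, not estimates for the Scottish catchment): a hidden two-state Markov chain selects which AR(1) regime drives the observed series, producing persistent "dry" and "wet" episodes.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Hidden-state transition matrix and per-regime AR(1) parameters (illustrative assumptions).
    P = np.array([[0.95, 0.05],     # "dry" regime is persistent
                  [0.10, 0.90]])    # "wet" regime
    phi   = [0.6, 0.8]              # AR(1) coefficients per regime
    mu    = [1.0, 4.0]              # regime means
    sigma = [0.3, 1.0]              # regime noise levels

    def simulate_msar(n=500):
        s = np.zeros(n, dtype=int)
        y = np.zeros(n)
        y[0] = mu[0]
        for t in range(1, n):
            s[t] = rng.choice(2, p=P[s[t - 1]])               # hidden regime switch
            k = s[t]
            y[t] = mu[k] + phi[k] * (y[t - 1] - mu[k]) + rng.normal(0.0, sigma[k])
        return s, y

    states, flow = simulate_msar()
    print("fraction of time in the 'wet' regime:", states.mean())
    ```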

  15. Responsiveness of efficacy endpoints in clinical trials with over the counter analgesics for headache.

    PubMed

    Aicher, Bernhard; Peil, Hubertus; Peil, Barbara; Diener, Hans-Christoph

    2012-10-01

    To quantify and compare the responsiveness, within the meaning of clinical relevance, of efficacy endpoints in a clinical trial with over the counter (OTC) analgesics for headache. Efficacy endpoints and observed differences in clinical trials need to be clinically meaningful and mirror the change in the clinical status of a patient. This must be demonstrated for the specific disease indication and the particular patient population based on the application of treatments with proven efficacy. Patients' global efficacy assessment during two study phases (pre-phase and treatment phase) was used to classify patients as satisfied or non-satisfied with the efficacy of their medication. The analysis is based on 1734 patients included in the efficacy analysis of a randomized, placebo-controlled, double-blind, multi-centre parallel group trial with six treatment arms. Based on this classification and the pain intensity recorded by the patients on a 100 mm visual analogue scale, group differences by assessment categories and receiver operating characteristic (ROC) curve methods were used to quantify responsiveness of the efficacy endpoints 'time to 50% pain relief', 'time until reduction of pain intensity to 10 mm', 'weighted sum of pain intensity difference' (%SPIDweighted), 'pain intensity difference (PID) relative to baseline at 2 hours', and 'pain-free at 2 hours'. Clinically relevant differences between patients satisfied and non-satisfied with the treatment were observed for all efficacy endpoints. Patients with the highest rating of efficacy had the fastest and strongest pain relief. In comparison, patients assessing efficacy as 'less good' reached a 50% pain relief on average nearly an hour later than those scoring efficacy as at least 'good'. Simultaneously, their extent of pain relief was only half as great 2 hours after medication intake. Patients scoring efficacy as 'poor' experienced practically no pain relief within the 4 hour observation interval. ROC curve calculations confirmed an adequate responsiveness for all continuous endpoints. The following cut-off points for differentiating between satisfied and non-satisfied patients were deduced from the data in the pre- and treatment phase, respectively: 'time to 50% pain relief' 1:10 and 1:31 h:min, 'time until reduction of pain intensity to 10 mm' 2:40 and 3:00 h:min, '%SPIDweighted' 68 and 64%, 'PID at 2 hours' 35 and 35 mm. The sensitivity and specificity based on these cut-off points ranged from 70 to 79%. The binary endpoint 'pain-free at 2 hours' showed a clearly higher specificity (80 and 87%) than sensitivity (65 and 61%) in the pre- and treatment phase, respectively. When global assessment of efficacy by the patient was used as external criterion, ROC curve calculations confirmed a high responsiveness for all efficacy endpoints included in this study. Clinically relevant differences between patients satisfied and non-satisfied with the treatment were observed. The endpoint '%SPIDweighted' proved slightly but consistently superior to the other endpoints. However, SPID and %SPIDweighted are not easy to interpret, and the time course of pain reduction is of high importance for patients in the treatment of acute pain, including headache. The endpoint 'pain-free at 2 hours' showed the expected high specificity, but at the cost of a concurrently low sensitivity, and it clearly makes less use of the available information than the endpoint 'time to 50% pain reduction', which combines the highly relevant aspects of time course and extent of pain reduction. Responsiveness, the ability of an outcome measure to detect clinically important changes in a specific condition of a patient, should be added in future revisions of IHS guidelines for clinical trials in headache disorders.
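
    A hedged illustration of how a cut-off separating satisfied from non-satisfied patients can be read off an empirical ROC curve. The endpoint values and labels below are simulated stand-ins, and the Youden index is only one common rule for choosing the operating point; the record does not state which rule was used.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated endpoint values (e.g. %SPIDweighted) for satisfied / non-satisfied patients.
      satisfied = rng.normal(70, 15, 300)
      non_satisfied = rng.normal(50, 15, 200)
      scores = np.concatenate([satisfied, non_satisfied])
      labels = np.concatenate([np.ones(300), np.zeros(200)])   # 1 = satisfied

      # Empirical ROC: sweep candidate cut-offs, record sensitivity and specificity.
      cutoffs = np.unique(scores)
      sens = np.array([(scores[labels == 1] >= c).mean() for c in cutoffs])
      spec = np.array([(scores[labels == 0] < c).mean() for c in cutoffs])

      best = np.argmax(sens + spec - 1)        # Youden's J statistic
      print(f"cut-off {cutoffs[best]:.1f}: sensitivity {sens[best]:.2f}, specificity {spec[best]:.2f}")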

  16. DEVELOPMENT AND PEER REVIEW OF TIME-TO-EFFECT MODELS FOR THE ANALYSIS OF NEUROTOXICITY AND OTHER TIME DEPENDENT DATA

    EPA Science Inventory

    Neurobehavioral studies pose unique challenges for dose-response modeling, including small sample size and relatively large intra-subject variation, repeated measurements over time, multiple endpoints with both continuous and ordinal scales, and time dependence of risk characteri...

  17. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

    Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
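
    A toy sketch of the distinction between component-wise and joint coverage for a correlated bivariate estimator; the CTMC estimation itself is not reproduced here, and the covariance matrix is arbitrary. It shows how marginal intervals can each attain nominal coverage while the event "both parameters covered simultaneously" falls below it when dependence is ignored.

      import numpy as np

      rng = np.random.default_rng(2)
      true = np.array([1.0, 2.0])
      cov = np.array([[0.04, 0.03],          # correlated sampling distribution of the estimator
                      [0.03, 0.04]])
      se = np.sqrt(np.diag(cov))
      L = np.linalg.cholesky(cov)
      z = 1.96                               # nominal 95% marginal intervals

      n_sim = 20000
      comp_hits = np.zeros(2)
      joint_hits = 0
      for _ in range(n_sim):
          est = true + L @ rng.normal(size=2)
          inside = np.abs(est - true) <= z * se
          comp_hits += inside                # each parameter covered by its own interval
          joint_hits += inside.all()         # both covered simultaneously

      print("component-wise coverage:", comp_hits / n_sim)   # close to 0.95 each
      print("joint coverage:", joint_hits / n_sim)           # noticeably below 0.95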

  18. Alteration of neural action potential patterns by axonal stimulation: the importance of stimulus location

    PubMed Central

    Crago, Patrick E; Makowski, Nathan S

    2014-01-01

    Objective Stimulation of peripheral nerves is often superimposed on ongoing motor and sensory activity in the same axons, without a quantitative model of the net action potential train at the axon endpoint. Approach We develop a model of action potential patterns elicited by superimposing constant frequency axonal stimulation on the action potentials arriving from a physiologically activated neural source. The model includes interactions due to collision block, resetting of the neural impulse generator, and the refractory period of the axon at the point of stimulation. Main Results Both the mean endpoint firing rate and the probability distribution of the action potential firing periods depend strongly on the relative firing rates of the two sources and the intersite conduction time between them. When the stimulus rate exceeds the neural rate, neural action potentials do not reach the endpoint and the rate of endpoint action potentials is the same as the stimulus rate, regardless of the intersite conduction time. However, when the stimulus rate is less than the neural rate, and the intersite conduction time is short, the two rates partially sum. Increases in stimulus rate produce non-monotonic increases in endpoint rate and continuously increasing block of neurally generated action potentials. Rate summation is reduced and more neural action potentials are blocked as the intersite conduction time increases. At long intersite conduction times, the endpoint rate simplifies to being the maximum of either the neural or the stimulus rate. Significance This study highlights the potential of increasing the endpoint action potential rate and preserving neural information transmission by low rate stimulation with short intersite conduction times. Intersite conduction times can be decreased with proximal stimulation sites for muscles and distal stimulation sites for sensory endings. The model provides a basis for optimizing experiments and designing neuroprosthetic interventions involving motor or sensory stimulation. PMID:25161163

  19. Alteration of neural action potential patterns by axonal stimulation: the importance of stimulus location

    NASA Astrophysics Data System (ADS)

    Crago, Patrick E.; Makowski, Nathaniel S.

    2014-10-01

    Objective. Stimulation of peripheral nerves is often superimposed on ongoing motor and sensory activity in the same axons, without a quantitative model of the net action potential train at the axon endpoint. Approach. We develop a model of action potential patterns elicited by superimposing constant frequency axonal stimulation on the action potentials arriving from a physiologically activated neural source. The model includes interactions due to collision block, resetting of the neural impulse generator, and the refractory period of the axon at the point of stimulation. Main results. Both the mean endpoint firing rate and the probability distribution of the action potential firing periods depend strongly on the relative firing rates of the two sources and the intersite conduction time between them. When the stimulus rate exceeds the neural rate, neural action potentials do not reach the endpoint and the rate of endpoint action potentials is the same as the stimulus rate, regardless of the intersite conduction time. However, when the stimulus rate is less than the neural rate, and the intersite conduction time is short, the two rates partially sum. Increases in stimulus rate produce non-monotonic increases in endpoint rate and continuously increasing block of neurally generated action potentials. Rate summation is reduced and more neural action potentials are blocked as the intersite conduction time increases. At long intersite conduction times, the endpoint rate simplifies to being the maximum of either the neural or the stimulus rate. Significance. This study highlights the potential of increasing the endpoint action potential rate and preserving neural information transmission by low rate stimulation with short intersite conduction times. Intersite conduction times can be decreased with proximal stimulation sites for muscles and distal stimulation sites for sensory endings. The model provides a basis for optimizing experiments and designing neuroprosthetic interventions involving motor or sensory stimulation.
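
    A deliberately simplified sketch of the interaction described in the two preceding records: a constant-frequency stimulus train is merged with a Poisson neural train, and a neural spike is removed by antidromic collision whenever a stimulus falls within one conduction time of it. Generator resetting and refractory-period interactions, which the full model includes, are omitted here; rates and conduction time are arbitrary.

      import numpy as np

      rng = np.random.default_rng(3)

      rate_neural = 20.0      # Hz, physiologically generated spikes at the proximal site
      rate_stim = 12.0        # Hz, constant-frequency stimulation at the distal site
      tau = 0.004             # s, intersite conduction time
      T = 10.0                # s, simulated duration

      # Neural source modelled as a Poisson train for simplicity; stimulus train is periodic.
      neural = np.cumsum(rng.exponential(1.0 / rate_neural, int(5 * rate_neural * T)))
      neural = neural[neural < T]
      stim = np.arange(0.0, T, 1.0 / rate_stim)

      # Every stimulus pulse propagates orthodromically to the endpoint; a neural spike
      # reaches the endpoint only if no antidromic stimulus spike collides with it,
      # i.e. no stimulus occurs within one conduction time of its departure.
      surviving = np.array([t for t in neural if not np.any(np.abs(stim - t) < tau)])
      endpoint = np.sort(np.concatenate([stim, surviving + tau]))

      print(f"neural {len(neural)/T:.1f} Hz + stimulus {rate_stim:.1f} Hz "
            f"-> endpoint {len(endpoint)/T:.1f} Hz")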

  20. The exit-time problem for a Markov jump process

    NASA Astrophysics Data System (ADS)

    Burch, N.; D'Elia, M.; Lehoucq, R. B.

    2014-12-01

    The purpose of this paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. This calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.
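
    A Monte Carlo sketch of the exit-time problem for a finite-range jump process on an interval: jumps arrive as a Poisson process and each jump length is bounded independently of the particle's location. This complements, rather than reproduces, the nonlocal Kolmogorov-equation formulation of the paper; the jump distribution, rate and domain are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      def exit_time(x0, horizon=0.2, rate=50.0, domain=(0.0, 1.0)):
          """First time a finite-range jump process started at x0 leaves `domain`.

          Jumps arrive with Poisson intensity `rate`; each jump is uniform on
          [-horizon, horizon], so the jump length is bounded independently of
          the particle's location (the finite-range assumption).
          """
          t, x = 0.0, x0
          while domain[0] <= x <= domain[1]:
              t += rng.exponential(1.0 / rate)          # waiting time to the next jump
              x += rng.uniform(-horizon, horizon)       # bounded jump
          return t

      samples = np.array([exit_time(0.5) for _ in range(5000)])
      print(f"mean exit time {samples.mean():.3f}, second moment {np.mean(samples**2):.3f}")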

  1. Assessing the condition of bayous and estuaries: Bayou Chico Gulf of Mexico demonstration study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickson, K.; Acevedo, M.; Waller, T.

    1995-12-31

    A demonstration study was conducted in May 1994 on Bayou Chico to assess the utility of various assessment and measurement endpoints in determining the condition of bayous and estuaries. Bayou Chico has water quality problems attributed to its low flushing rate and urban/industrial land use in its watershed. The sampling scheme assessed the within-sampling station and spatial variability of measurement endpoints. Fourteen sampling stations in Bayou Chico and 3 stations in Pensacola Bay were selected based on an intensified EMAP sampling grid. Time and space coordinated sampling was conducted for: sediment contaminants and properties, sediment toxicity, water quality, benthic infauna, zooplankton and phytoplankton populations. Fish and crabs were also collected and analyzed for a suite of biomarkers and organic chemical residues. Primary productivity was measured via the light bottle dark bottle oxygen method and via diurnal oxygen measurements made with continuous recording data sondes. Stream sites were evaluated for water and sediment quality, water and sediment toxicity, benthic invertebrates and fish. Watershed analyses included assessment of land use/landcover (via SPOT and TM images), soils, pollution sources (point and non-point) and hydrography. These data were coordinated via an Arc/Info GIS system for display and spatial analysis. 1994 survey data were used to parameterize environmental fate models such as SWMM (Storm Water Management Model), DYNHYD5 (WASP5 hydrodynamics model) and WASP5 (Water Quality Analysis Simulation Program) to make predictions about the dynamics and fate of chemical contaminants in Bayou Chico. This paper will present an overview, and report on the results in regards to within-site and spatial variability in Bayou Chico. Conclusions on the efficacy of the assessment and measurement endpoints in evaluating the condition (health) of Bayou Chico will be presented.

  2. Net Surface Flux Budget Over Tropical Oceans Estimated from the Tropical Rainfall Measuring Mission (TRMM)

    NASA Astrophysics Data System (ADS)

    Fan, Tai-Fang

    We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
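
    A small numerical sketch of a non-equilibrium steady state of the kind described in this abstract: an open Markov process on four states, with the probabilities of two 'boundary' states clamped by the environment and the interior probabilities solved from the master equation with those boundary values held fixed. The rate matrix and clamped values are arbitrary, and the clamped probabilities need not sum to one for an open system.

      import numpy as np

      # Infinitesimal generator H for a 4-state continuous-time Markov chain,
      # H[i, j] = transition rate j -> i for i != j, with columns summing to zero,
      # so the master equation reads dp/dt = H p.
      rates = np.array([[0.0, 1.0, 0.0, 0.0],
                        [2.0, 0.0, 1.0, 0.0],
                        [0.0, 3.0, 0.0, 2.0],
                        [0.0, 0.0, 1.0, 0.0]])
      H = rates - np.diag(rates.sum(axis=0))

      boundary = [0, 3]                      # states whose probabilities the environment fixes
      interior = [1, 2]
      p_boundary = np.array([0.4, 0.1])      # clamped boundary probabilities

      # Steady state of the interior: H_II p_I + H_IB p_B = 0.
      H_II = H[np.ix_(interior, interior)]
      H_IB = H[np.ix_(interior, boundary)]
      p_interior = np.linalg.solve(H_II, -H_IB @ p_boundary)

      p = np.zeros(4)
      p[boundary], p[interior] = p_boundary, p_interior
      current_in = (H @ p)[boundary]         # net probability currents through the boundary
      print("steady-state probabilities:", p)
      print("boundary currents:", current_in)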

  14. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  15. Smart Annotation of Cyclic Data Using Hierarchical Hidden Markov Models.

    PubMed

    Martindale, Christine F; Hoenig, Florian; Strohrmann, Christina; Eskofier, Bjoern M

    2017-10-13

    Cyclic signals are an intrinsic part of daily life, such as human motion and heart activity. Their detailed analysis is important for clinical applications such as pathological gait analysis and for sports applications such as performance analysis. Labeled training data for algorithms that analyze these cyclic data come at a high annotation cost, because annotations are either limited to what can be collected under laboratory conditions or require manual segmentation of data recorded under less restricted conditions. This paper presents a smart annotation method that reduces this labeling cost for sensor-based data and is applicable to data collected outside of strict laboratory conditions. The method uses semi-supervised learning on sections of cyclic data with a known cycle number. A hierarchical hidden Markov model (hHMM) is used, achieving a mean absolute error of 0.041 ± 0.020 s relative to a manually-annotated reference. The resulting model was also used to simultaneously segment and classify continuous, 'in the wild' data, demonstrating the applicability of using an hHMM, trained on limited data sections, to label a complete dataset. This technique achieved comparable results to its fully-supervised equivalent. Our semi-supervised method has the significant advantage of reduced annotation cost. Furthermore, it reduces the opportunity for human error in the labeling process normally required for training segmentation algorithms. It also lowers the annotation cost of training a model capable of continuous monitoring of cycle characteristics, such as those employed to analyze the progress of movement disorders or to analyze running technique.

  16. Pyvolve: A Flexible Python Module for Simulating Sequences along Phylogenies.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2015-01-01

    We introduce Pyvolve, a flexible Python module for simulating genetic data along a phylogeny using continuous-time Markov models of sequence evolution. Easily incorporated into Python bioinformatics pipelines, Pyvolve can simulate sequences according to most standard models of nucleotide, amino-acid, and codon sequence evolution. All model parameters are fully customizable. Users can additionally specify custom evolutionary models, with custom rate matrices and/or states to evolve. This flexibility makes Pyvolve a convenient framework not only for simulating sequences under a wide variety of conditions, but also for developing and testing new evolutionary models. Pyvolve is an open-source project under a FreeBSD license, and it is available for download, along with a detailed user-manual and example scripts, from http://github.com/sjspielman/pyvolve.
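
    For readers who want to see the underlying mechanics rather than the Pyvolve interface itself, the following is a bare-bones, NumPy/SciPy-only sketch of evolving nucleotide sequences along branches under a continuous-time Markov model (a Jukes-Cantor rate matrix here). It is not Pyvolve code; tree shape, branch lengths and sequence length are arbitrary.

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(5)
      NUC = np.array(list("ACGT"))

      # Jukes-Cantor rate matrix: equal rates between all nucleotides, rows sum to zero.
      Q = np.full((4, 4), 1.0 / 3.0)
      np.fill_diagonal(Q, -1.0)

      def evolve(seq_idx, branch_length):
          """Evolve an index-encoded sequence along a branch of the given expected length."""
          P = expm(Q * branch_length)                   # transition probabilities P(t) = exp(Qt)
          return np.array([rng.choice(4, p=P[i]) for i in seq_idx])

      root = rng.choice(4, size=30)                     # random ancestral sequence
      child1 = evolve(root, 0.1)
      child2 = evolve(root, 0.4)
      print("root  :", "".join(NUC[root]))
      print("child1:", "".join(NUC[child1]))
      print("child2:", "".join(NUC[child2]))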

  17. A coupled hidden Markov model for disease interactions

    PubMed Central

    Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike

    2013-01-01

    To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436

  18. Stochastic thermodynamics across scales: Emergent inter-attractoral discrete Markov jump process and its underlying continuous diffusion

    NASA Astrophysics Data System (ADS)

    Santillán, Moisés; Qian, Hong

    2013-01-01

    We investigate the internal consistency of a recently developed mathematical thermodynamic structure across scales, between a continuous stochastic nonlinear dynamical system, i.e., a diffusion process with Langevin and Fokker-Planck equations, and its emergent discrete, inter-attractoral Markov jump process. We analyze how the system’s thermodynamic state functions, e.g. free energy F, entropy S, entropy production ep, free energy dissipation Ḟ, etc., are related when the continuous system is described with coarse-grained discrete variables. It is shown that the thermodynamics derived from the underlying, detailed continuous dynamics gives rise to exactly the free-energy representation of Gibbs and Helmholtz. That is, the system’s thermodynamic structure is the same as if one only takes a middle road and starts with the natural discrete description, with the corresponding transition rates empirically determined. By natural we mean in the thermodynamic limit of a large system, with an inherent separation of time scales between inter- and intra-attractoral dynamics. This result generalizes a fundamental idea from chemistry, and the theory of Kramers, by incorporating thermodynamics: while a mechanical description of a molecule is in terms of continuous bond lengths and angles, chemical reactions are phenomenologically described by a discrete representation, in terms of exponential rate laws and a stochastic thermodynamics.
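
    For orientation, the state functions mentioned above can be written in standard master-equation notation; the paper's own conventions may differ in signs, units and the choice of reference distribution, so this is only a reference sketch. For a distribution p_i(t) evolving under rates q_ij (from state i to state j) with stationary distribution pi_i:

      \begin{aligned}
      \frac{dp_i}{dt} &= \sum_{j \neq i}\bigl(p_j q_{ji} - p_i q_{ij}\bigr), \\
      S(t) &= -\sum_i p_i \ln p_i, \qquad
      F(t) = \sum_i p_i \ln \frac{p_i}{\pi_i}, \\
      e_p(t) &= \tfrac{1}{2}\sum_{i \neq j}\bigl(p_i q_{ij} - p_j q_{ji}\bigr)
                \ln \frac{p_i q_{ij}}{p_j q_{ji}} \;\ge\; 0,
      \qquad \dot F(t) \le 0 \ \ \text{(free energy dissipation)}.
      \end{aligned}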

  19. The exit-time problem for a Markov jump process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burch, N.; D'Elia, Marta; Lehoucq, Richard B.

    2014-12-15

    The purpose of our paper is to consider the exit-time problem for a finite-range Markov jump process, i.e., the distance the particle can jump is bounded independent of its location. Such jump diffusions are expedient models for anomalous transport exhibiting super-diffusion or nonstandard normal diffusion. We refer to the associated deterministic equation as a volume-constrained nonlocal diffusion equation. The volume constraint is the nonlocal analogue of a boundary condition necessary to demonstrate that the nonlocal diffusion equation is well-posed and is consistent with the jump process. A critical aspect of the analysis is a variational formulation and a recently developed nonlocal vector calculus. Furthermore, this calculus allows us to pose nonlocal backward and forward Kolmogorov equations, the former equation granting the various moments of the exit-time distribution.

  20. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1995-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(ω_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.

  1. Hidden Markov models for fault detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J. (Inventor)

    1993-01-01

    The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(ω_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
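
    A compact sketch of the hierarchical information flow both patent records describe: an instantaneous classifier produces per-window class probabilities, and a hidden Markov layer integrates them over time with a forward (filtering) recursion. The classifier outputs below are simulated stand-ins, and using them directly as observation weights is a common simplification rather than the patents' exact formulation.

      import numpy as np

      rng = np.random.default_rng(6)
      n_classes = 3                               # e.g. "nominal" plus two fault classes

      # Transition matrix of the hidden fault-state Markov chain (sticky by design).
      A = np.full((n_classes, n_classes), 0.01)
      np.fill_diagonal(A, 0.98)
      prior = np.array([0.98, 0.01, 0.01])

      # Simulated instantaneous posteriors p(class | features) from the pattern
      # recognition stage, for a run that drifts into fault class 1 halfway through.
      T = 200
      inst = np.vstack([rng.dirichlet([8, 1, 1], T // 2),
                        rng.dirichlet([2, 7, 1], T // 2)])

      belief = prior.copy()
      filtered = np.empty_like(inst)
      for t in range(T):
          belief = (A.T @ belief) * inst[t]       # predict with the Markov chain, then weight
          belief /= belief.sum()                  # by the instantaneous class probabilities
          filtered[t] = belief

      print("final class probabilities:", np.round(filtered[-1], 3))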

  2. Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.

    PubMed

    Wang, Xueying; Walton, Jay R; Parshad, Rana D

    2016-01-01

    The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability for wild-type females going extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.
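
    A Gillespie-style sketch of the kind of Markov jump process used in this setting, tracking only the wild-type female count with illustrative birth and death propensities; the actual TYC model has additional types (males, Trojan females, feminized supermales) and different rates, so this is a caricature, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(7)

      def extinction_time(f0=50, birth=0.9, death=1.0, max_time=500.0):
          """Time for a wild-type female birth-death chain to hit zero (illustrative rates).

          With the death rate exceeding the birth rate, as constant Trojan female
          introduction is intended to induce, extinction occurs in finite time.
          """
          t, f = 0.0, f0
          while f > 0 and t < max_time:
              rates = np.array([birth * f, death * f])       # birth and death propensities
              total = rates.sum()
              t += rng.exponential(1.0 / total)              # waiting time to next event
              f += 1 if rng.random() < rates[0] / total else -1
          return t

      times = np.array([extinction_time() for _ in range(200)])
      print(f"mean extinction time {times.mean():.1f}, "
            f"90th percentile {np.percentile(times, 90):.1f}")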

  3. A new approach for biological online testing of stack gas condensate from municipal waste incinerators.

    PubMed

    Elsner, Dorothea; Fomin, Anette

    2002-01-01

    A biological testing system for the monitoring of stack gas condensates of municipal waste incinerators has been developed using Euglena gracilis as a test organism. The motility, velocity and cellular form of the organisms were the endpoints, calculated by an image analysis system. All endpoints showed statistically significant changes in a short time when organisms were exposed to samples collected during combustion situations with increased pollutant concentrations. The velocity of the organisms proved to be the most appropriate endpoint. A semi-continuous system with E. gracilis for monitoring stack gas condensate is proposed, which could result in an online system for testing stack gas condensates in the future.

  4. On Markov parameters in system identification

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
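
    Concretely, for a discrete-time state-space model x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k, the Markov parameters are the pulse-response coefficients h_0 = D and h_k = C A^(k-1) B for k ≥ 1. A short sketch computing them and checking the pulse-response interpretation (the system matrices are arbitrary):

      import numpy as np

      # An arbitrary stable single-input single-output discrete-time system.
      A = np.array([[0.8, 0.1],
                    [0.0, 0.5]])
      B = np.array([[1.0],
                    [0.5]])
      C = np.array([[1.0, 2.0]])
      D = np.array([[0.0]])

      def markov_parameters(A, B, C, D, n):
          """Markov parameters h_0 = D, h_k = C A^(k-1) B for k = 1..n-1."""
          params = [D]
          Ak = np.eye(A.shape[0])
          for _ in range(1, n):
              params.append(C @ Ak @ B)
              Ak = Ak @ A
          return np.array(params).squeeze()

      # The Markov parameters are exactly the response to a unit pulse at k = 0.
      n = 8
      x = np.zeros((2, 1))
      pulse_response = []
      for k in range(n):
          u = np.array([[1.0]]) if k == 0 else np.array([[0.0]])
          pulse_response.append((C @ x + D @ u).item())
          x = A @ x + B @ u

      print(np.allclose(markov_parameters(A, B, C, D, n), pulse_response))  # True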

  5. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. Finally, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, a change that requires a less precise substitution in this subset of participants. A score function that is based on the U-statistic can address these issues of 1) intercurrent SCEs and 2) response variable ascertainments that use different measurements of different precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
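
    A sketch of the pairwise scoring idea behind such a U-statistic: each treated/control pair is compared first on the significant clinical event and, only if that comparison is uninformative, on the continuous change score. The hierarchy, tie handling and simulated data below are illustrative, not the paper's exact score function.

      import numpy as np

      rng = np.random.default_rng(10)

      def pair_score(a, b):
          """Compare two patients: dicts with 'event' (True if an SCE occurred,
          precluding follow-up) and 'delta' (continuous change, None if unmeasured)."""
          if a["event"] != b["event"]:
              return -1 if a["event"] else 1        # the patient without the SCE "wins"
          if a["delta"] is None or b["delta"] is None:
              return 0                              # no usable continuous comparison
          return int(np.sign(a["delta"] - b["delta"]))

      def make_arm(n, event_rate, mean_delta):
          arm = []
          for _ in range(n):
              event = rng.random() < event_rate
              delta = None if event else rng.normal(mean_delta, 1.0)
              arm.append({"event": event, "delta": delta})
          return arm

      treated = make_arm(40, event_rate=0.10, mean_delta=0.6)
      control = make_arm(40, event_rate=0.20, mean_delta=0.0)

      # U-statistic: average pairwise score of treated versus control patients.
      U = np.mean([pair_score(t, c) for t in treated for c in control])
      print(f"U-statistic (positive favours treatment): {U:.3f}")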

  6. Hidden Markov model tracking of continuous gravitational waves from young supernova remnants

    NASA Astrophysics Data System (ADS)

    Sun, L.; Melatos, A.; Suvorova, S.; Moran, W.; Evans, R. J.

    2018-02-01

    Searches for persistent gravitational radiation from nonpulsating neutron stars in young supernova remnants are computationally challenging because of rapid stellar braking. We describe a practical, efficient, semicoherent search based on a hidden Markov model tracking scheme, solved by the Viterbi algorithm, combined with a maximum likelihood matched filter, the F statistic. The scheme is well suited to analyzing data from advanced detectors like the Advanced Laser Interferometer Gravitational Wave Observatory (Advanced LIGO). It can track rapid phase evolution from secular stellar braking and stochastic timing noise torques simultaneously without searching second- and higher-order derivatives of the signal frequency, providing an economical alternative to stack-slide-based semicoherent algorithms. One implementation tracks the signal frequency alone. A second implementation tracks the signal frequency and its first time derivative. It improves the sensitivity by a factor of a few upon the first implementation, but the cost increases by 2 to 3 orders of magnitude.
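
    A toy illustration of hidden Markov model frequency tracking with the Viterbi algorithm: the hidden state is the signal's frequency bin, transitions let the bin wander by at most one bin per step (standing in for spin-down plus timing noise), and the per-bin detection statistic is simulated rather than computed from strain data with the F-statistic. Grid sizes and noise levels are arbitrary.

      import numpy as np

      rng = np.random.default_rng(8)
      n_bins, n_steps = 60, 40

      # Simulated log-likelihoods (stand-ins for the F-statistic): a weak signal
      # wanders downward through the frequency bins on top of noise.
      true_path = np.clip(45 - np.cumsum(rng.choice([0, 1], n_steps)), 0, n_bins - 1)
      loglike = rng.normal(0.0, 1.0, (n_steps, n_bins))
      loglike[np.arange(n_steps), true_path] += 3.0

      # Transition model: move by at most one bin per step, uniform over the
      # allowed moves (boundary normalization neglected for brevity).
      log_trans = np.full((n_bins, n_bins), -np.inf)
      for i in range(n_bins):
          for j in (i - 1, i, i + 1):
              if 0 <= j < n_bins:
                  log_trans[i, j] = np.log(1.0 / 3.0)

      # Viterbi recursion over the time-frequency grid.
      score = loglike[0].copy()
      back = np.zeros((n_steps, n_bins), dtype=int)
      for t in range(1, n_steps):
          cand = score[:, None] + log_trans            # previous bin -> current bin
          back[t] = np.argmax(cand, axis=0)
          score = cand[back[t], np.arange(n_bins)] + loglike[t]

      path = np.empty(n_steps, dtype=int)
      path[-1] = int(np.argmax(score))
      for t in range(n_steps - 1, 0, -1):
          path[t - 1] = back[t, path[t]]

      print("recovered", int((path == true_path).sum()), "of", n_steps, "time steps correctly")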

  7. The algebra of the general Markov model on phylogenetic trees and networks.

    PubMed

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  8. Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks.

    PubMed

    Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue

    2014-11-01

    Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011), doi:10.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011), doi:10.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
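
    For concreteness, a minimal NumPy sketch of a discrete-time microscopic Markov-chain (MMCA-style) SIS iteration on a random contact network: each node carries an infection probability p_i, q_i is the probability of not being infected by any neighbour in a step, and recovery happens with probability mu. The update below omits the same-step recovery-and-reinfection term that appears in some formulations; the network and parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(9)
      N, beta, mu = 200, 0.08, 0.2

      # Random undirected contact network (Erdos-Renyi, illustrative).
      A = (rng.random((N, N)) < 0.05).astype(float)
      A = np.triu(A, 1)
      A = A + A.T

      p = np.full(N, 0.05)                       # initial infection probabilities
      for _ in range(300):
          # Probability that node i is NOT infected by any neighbour this step.
          q = np.prod(1.0 - beta * A * p[None, :], axis=1)
          p = (1.0 - p) * (1.0 - q) + p * (1.0 - mu)

      print(f"stationary epidemic prevalence (MMCA-style estimate): {p.mean():.3f}")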

  9. Intelligent data analysis to model and understand live cell time-lapse sequences.

    PubMed

    Paterson, Allan; Ashtari, M; Ribé, D; Stenbeck, G; Tucker, A

    2012-01-01

    One important aspect of cellular function, which is at the basis of tissue homeostasis, is the delivery of proteins to their correct destinations. Significant advances in live cell microscopy have allowed tracking of these pathways by following the dynamics of fluorescently labelled proteins in living cells. This paper explores intelligent data analysis techniques to model the dynamic behavior of proteins in living cells as well as to classify different experimental conditions. We use a combination of decision tree classification and hidden Markov models. In particular, we introduce a novel approach to "align" hidden Markov models so that hidden states from different models can be cross-compared. Our models capture the dynamics of two experimental conditions accurately with a stable hidden state for control data and multiple (less stable) states for the experimental data recapitulating the behaviour of particle trajectories within live cell time-lapse data. In addition to having successfully developed an automated framework for the classification of protein transport dynamics from live cell time-lapse data our model allows us to understand the dynamics of a complex trafficking pathway in living cells in culture.

  10. Discrete Latent Markov Models for Normally Distributed Response Data

    ERIC Educational Resources Information Center

    Schmittmann, Verena D.; Dolan, Conor V.; van der Maas, Han L. J.; Neale, Michael C.

    2005-01-01

    Van de Pol and Langeheine (1990) presented a general framework for Markov modeling of repeatedly measured discrete data. We discuss analogous single indicator models for normally distributed responses. In contrast to discrete models, which have been studied extensively, analogous continuous response models have hardly been considered. These…

  11. Approving cancer treatments based on endpoints other than overall survival: an analysis of historical data using the PACE Continuous Innovation Indicators™ (CII).

    PubMed

    Brooks, Neon; Campone, Mario; Paddock, Silvia; Shortenhaus, Scott; Grainger, David; Zummo, Jacqueline; Thomas, Samuel; Li, Rose

    2017-01-01

    There is an active debate about the role that endpoints other than overall survival (OS) should play in the drug approval process. Yet the term 'surrogate endpoint' implies that OS is the only critical metric for regulatory approval of cancer treatments. We systematically analyzed the relationship between U.S. Food and Drug Administration (FDA) approval and publication of OS evidence to better understand the risks and benefits of delaying approval until OS evidence is available. Using the PACE Continuous Innovation Indicators (CII) platform, we analyzed the effects of cancer type, treatment goal, and year of approval on the lag time between FDA approval and publication of the first significant OS finding for 53 treatments approved between 1952 and 2016 for 10 cancer types (n = 71 approved indications). More than 59% of treatments were approved before significant OS data for the approved indication were published. Of the drugs in the sample, 31% had lags between approval and first published OS evidence of 4 years or longer. The average number of years between approval and first OS evidence varied by cancer type and did not reliably predict the eventual amount of OS evidence accumulated. Striking the right balance between early access and minimizing risk is a central challenge for regulators worldwide. We illustrate that endpoints other than OS have long helped to provide timely access to new medicines, including many current standards of care. We found that many critical drugs are approved many years before OS data are published, and that OS may not be the most appropriate endpoint in some treatment contexts. Our examination of approved treatments without significant OS data suggests contexts where OS may not be the most relevant endpoint and highlights the importance of using a wide variety of fit-for-purpose evidence types in the approval process.

  12. Policy Transfer via Markov Logic Networks

    NASA Astrophysics Data System (ADS)

    Torrey, Lisa; Shavlik, Jude

    We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.

  13. Identifying and correcting non-Markov states in peptide conformational dynamics

    NASA Astrophysics Data System (ADS)

    Nerukh, Dmitry; Jensen, Christian H.; Glen, Robert C.

    2010-02-01

    Conformational transitions in proteins define their biological activity and can be investigated in detail using the Markov state model. The fundamental assumption on the transitions between the states, their Markov property, is critical in this framework. We test this assumption by analyzing the transitions obtained directly from the dynamics of a molecular dynamics simulated peptide valine-proline-alanine-leucine and states defined phenomenologically using clustering in dihedral space. We find that the transitions are Markovian at the time scale of ≈50 ps and longer. However, at the time scale of 30-40 ps the dynamics loses its Markov property. Our methodology reveals the mechanism that leads to non-Markov behavior. It also provides a way of regrouping the conformations into new states that now possess the required Markov property of their dynamics.
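
    The Markov assumption tested above is commonly checked with a Chapman-Kolmogorov style comparison: estimate the transition matrix at lag tau and compare its k-th power with the matrix estimated directly at lag k*tau. The sketch below is a generic illustration of that idea (not the authors' specific procedure) and assumes the trajectory has already been discretised into integer state labels.

      import numpy as np

      def transition_matrix(traj, n_states, lag):
          """Row-normalised transition-count matrix estimated at a given lag."""
          C = np.zeros((n_states, n_states))
          for a, b in zip(traj[:-lag], traj[lag:]):
              C[a, b] += 1
          rows = C.sum(axis=1, keepdims=True)
          rows[rows == 0] = 1.0          # avoid division by zero for unvisited states
          return C / rows

      def chapman_kolmogorov_error(traj, n_states, lag, k=2):
          """Largest deviation between T(lag)^k and the directly estimated T(k*lag)."""
          T1 = transition_matrix(traj, n_states, lag)
          Tk = transition_matrix(traj, n_states, k * lag)
          return np.abs(np.linalg.matrix_power(T1, k) - Tk).max()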

  14. The spectral method and the central limit theorem for general Markov chains

    NASA Astrophysics Data System (ADS)

    Nagaev, S. V.

    2017-12-01

    We consider Markov chains with an arbitrary phase space and develop a modification of the spectral method that enables us to prove the central limit theorem (CLT) for non-uniformly ergodic Markov chains. The conditions imposed on the transition function are more general than those by Athreya-Ney and Nummelin. Our proof of the CLT is purely analytical.

  15. SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere

    NASA Astrophysics Data System (ADS)

    Creasey, Peter; Lang, Annika

    2018-04-01

    SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and fast Fourier Transforms in 1d that generates samples on an n × n grid in O(n² log n) and efficiently derives the necessary conditional covariance matrices.

  16. Evaluation of spin freezing versus conventional freezing as part of a continuous pharmaceutical freeze-drying concept for unit doses.

    PubMed

    De Meyer, L; Van Bockstal, P-J; Corver, J; Vervaet, C; Remon, J P; De Beer, T

    2015-12-30

    Spin freezing was evaluated as an alternative freezing approach, as part of an innovative continuous pharmaceutical freeze-drying concept for unit doses. The aim of this paper was to compare the sublimation rate of spin-frozen vials versus traditionally frozen vials in a batch freeze-dryer, and its impact on total drying time. Five different formulations, each having a different dry cake resistance, were tested. After freezing, the traditionally frozen vials were placed on the shelves while the spin-frozen vials were placed in aluminum vial holders providing radial energy supply during drying. Different primary drying conditions and chamber pressures were evaluated. After 2 h of primary drying, the amount of sublimed ice was determined in each vial. Each formulation was monitored in-line using NIR spectroscopy during drying to determine the sublimation endpoint and the influence of drying conditions upon total drying time. For all tested formulations and applied freeze-drying conditions, there was a significantly higher sublimation rate in the spin-frozen vials. This can be explained by the larger product surface and the lower importance of product resistance because of the much thinner product layers in the spin-frozen vials. The in-line NIR measurements allowed evaluation of the influence of the applied drying conditions on the drying trajectories. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Time-Course Determination of Cellular Stress Responses Elicited by Engineered Nanomaterials

    EPA Science Inventory

    Engineered nanomaterials are being incorporated continuously into consumer products, resulting in increased human exposures. The study of engineered nanomaterials has focused largely on oxidative stress and inflammation endpoints without further investigating potential pathways. ...

  18. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261

  19. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.
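
    For intuition about the processes whose rates are being estimated (this is not the EM machinery of the paper), a general birth-death process with arbitrary state-dependent rates can be simulated exactly with a Gillespie-style loop; the rate functions and parameter values below are purely illustrative.

      import numpy as np

      def simulate_bdp(birth_rate, death_rate, x0, t_max, rng=None):
          """Simulate one path of a general birth-death process.
          birth_rate(x) and death_rate(x) give the state-dependent jump rates."""
          rng = np.random.default_rng() if rng is None else rng
          t, x = 0.0, x0
          times, states = [0.0], [x0]
          while t < t_max:
              lam, mu = birth_rate(x), death_rate(x)
              total = lam + mu
              if total == 0.0:               # absorbed, e.g. extinction at x = 0
                  break
              t += rng.exponential(1.0 / total)
              if t >= t_max:
                  break
              x += 1 if rng.random() < lam / total else -1
              times.append(t)
              states.append(x)
          return np.array(times), np.array(states)

      # illustrative linear birth-death process with per-particle rates 0.2 and 0.25
      ts, xs = simulate_bdp(lambda x: 0.2 * x, lambda x: 0.25 * x, x0=10, t_max=50.0)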

  20. Utilization of two web-based continuing education courses evaluated by Markov chain model.

    PubMed

    Tian, Hao; Lin, Jin-Mann S; Reeves, William C

    2012-01-01

    To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists.
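
    The kind of first-order Markov chain used in this evaluation can be estimated from logged navigation paths by counting page-to-page transitions and row-normalising; the sketch below is a generic illustration with made-up page names and sessions, not the authors' implementation.

      import numpy as np

      def estimate_transitions(sessions, pages):
          """Maximum-likelihood first-order transition matrix from click paths."""
          index = {p: i for i, p in enumerate(pages)}
          C = np.zeros((len(pages), len(pages)))
          for path in sessions:
              for a, b in zip(path[:-1], path[1:]):
                  C[index[a], index[b]] += 1
          rows = C.sum(axis=1, keepdims=True)
          rows[rows == 0] = 1.0
          return C / rows

      # illustrative sessions over three course pages plus an exit state
      pages = ["intro", "module1", "module2", "exit"]
      sessions = [["intro", "module1", "module2", "exit"],
                  ["intro", "module1", "exit"],
                  ["intro", "module2", "exit"]]
      P = estimate_transitions(sessions, pages)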

  1. Utilization of two web-based continuing education courses evaluated by Markov chain model

    PubMed Central

    Lin, Jin-Mann S; Reeves, William C

    2011-01-01

    Objectives To evaluate the web structure of two web-based continuing education courses, identify problems and assess the effects of web site modifications. Design Markov chain models were built from 2008 web usage data to evaluate the courses' web structure and navigation patterns. The web site was then modified to resolve identified design issues and the improvement in user activity over the subsequent 12 months was quantitatively evaluated. Measurements Web navigation paths were collected between 2008 and 2010. The probability of navigating from one web page to another was analyzed. Results The continuing education courses' sequential structure design was clearly reflected in the resulting actual web usage models, and none of the skip transitions provided was heavily used. The web navigation patterns of the two different continuing education courses were similar. Two possible design flaws were identified and fixed in only one of the two courses. Over the following 12 months, the drop-out rate in the modified course significantly decreased from 41% to 35%, but remained unchanged in the unmodified course. The web improvement effects were further verified via a second-order Markov chain model. Conclusions The results imply that differences in web content have less impact than web structure design on how learners navigate through continuing education courses. Evaluation of user navigation can help identify web design flaws and guide modifications. This study showed that Markov chain models provide a valuable tool to evaluate web-based education courses. Both the results and techniques in this study would be very useful for public health education and research specialists. PMID:21976027

  2. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating to both the El Niño Southern Oscillation and the Indian Ocean Dipole influence local hydro-meteorological processes; statistically significant lag correlations have already been established. Simulation of the derived operating policies, which are benchmarked against standard policies conditioned on the reservoir storage and the antecedent inflow, demonstrates the potential of the proposed approach. Future research will further develop the model for sensitivity analysis and regional studies examining the economic value of incorporating long range forecasts into reservoir operation.

  3. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
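
    The decoding step described above, recovering the most likely hidden state sequence with the Viterbi algorithm, can be sketched as follows for a Poisson hidden Markov model; the transition matrix P, initial distribution pi0 and state means lambdas are assumed to have been estimated already (for example by EM), and the function is a generic illustration rather than the authors' code.

      import numpy as np
      from scipy.stats import poisson

      def viterbi_poisson(counts, pi0, P, lambdas):
          """Most likely hidden state sequence for a Poisson HMM (log-space Viterbi)."""
          n, m = len(counts), len(lambdas)
          logP = np.log(P)
          logB = poisson.logpmf(np.asarray(counts)[:, None], np.asarray(lambdas)[None, :])
          delta = np.log(pi0) + logB[0]
          back = np.zeros((n, m), dtype=int)
          for t in range(1, n):
              scores = delta[:, None] + logP      # scores[i, j]: best path ending with i -> j
              back[t] = scores.argmax(axis=0)
              delta = scores.max(axis=0) + logB[t]
          path = np.empty(n, dtype=int)
          path[-1] = delta.argmax()
          for t in range(n - 1, 0, -1):
              path[t - 1] = back[t, path[t]]
          return path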

  4. Developing a statistically powerful measure for quartet tree inference using phylogenetic identities and Markov invariants.

    PubMed

    Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D

    2017-12-01

    Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as, in this case only, the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for models with more than two states (for example, DNA sequence alignments with four-state models), we find that methods which rely on phylogenetic invariants are incapable of satisfying all three of the stated statistical properties. This is because in these cases the relevant Markov invariants belong to a class of polynomials independent from the phylogenetic invariants.

  5. High-Performance Nanocomposites Designed for Radiation Shielding in Space and an Application of GIS for Analyzing Nanopowder Dispersion in Polymer Matrixes

    NASA Astrophysics Data System (ADS)

    Auslander, Joseph Simcha

    We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.

  6. Use of Remote Sensing to Identify Essential Habitat for Aeschynomene virginica (L.) BSP, a Threatened Tidal Freshwater Wetland Plant

    NASA Astrophysics Data System (ADS)

    Mountz, Elizabeth M.


  7. Silver-Polyimide Nanocomposite Films: Single-Stage Synthesis and Analysis of Metalized Partially-Fluorinated Polyimide BTDA/4-BDAF Prepared from Silver(I) Complexes

    NASA Astrophysics Data System (ADS)

    Abelard, Joshua Erold Robert


  8. Multifunctional Polymer Synthesis and Incorporation of Gadolinium Compounds and Modified Tungsten Nanoparticles for Improvement of Radiation Shielding for use in Outer Space

    NASA Astrophysics Data System (ADS)

    Harbert, Emily Grace


  9. Cumulative Reports and Publications through December 31, 1992

    DTIC Science & Technology

    1993-04-01

    periods of time, and by consultants. Members of NASA's research staff also may be resident at ICASE for limited periods. The major categories of the...To appear in Theoretical and Computational Fluid Dynamics. Nicol, David M.: Optimistic Barrier Synchronization. ICASE Report No. 92-34, July 27, 1992...continuous time Markov chains. ICASE Report No. 92-60, November 18, 1992, 23 pages. Submitted to the 7th Annual Workshop on Parallel and Distributed

  10. Le modèle stochastique SIS pour une épidémie dans un environnement aléatoire.

    PubMed

    Bacaër, Nicolas

    2016-10-01

    The stochastic SIS epidemic model in a random environment. In a random environment that is a two-state continuous-time Markov chain, the mean time to extinction of the stochastic SIS epidemic model grows in the supercritical case exponentially with respect to the population size if the two states are favorable, and like a power law if one state is favorable while the other is unfavorable.

  11. Stability of the Markov operator and synchronization of Markovian random products

    NASA Astrophysics Data System (ADS)

    Díaz, Lorenzo J.; Matias, Edgar

    2018-05-01

    We study Markovian random products on a large class of ‘m-dimensional’ connected compact metric spaces (including products of closed intervals and trees). We introduce a splitting condition, generalizing the classical one by Dubins and Freedman, and prove that this condition implies the asymptotic stability of the corresponding Markov operator and (exponentially fast) synchronization.

  12. Eternal non-Markovianity: from random unitary to Markov chain realisations.

    PubMed

    Megier, Nina; Chruściński, Dariusz; Piilo, Jyrki; Strunz, Walter T

    2017-07-25

    The theoretical description of quantum dynamics in an intriguing way does not necessarily imply the underlying dynamics is indeed intriguing. Here we show how a known very interesting master equation with an always negative decay rate [eternal non-Markovianity (ENM)] arises from simple stochastic Schrödinger dynamics (random unitary dynamics). Equivalently, it may be seen as arising from a mixture of Markov (semi-group) open system dynamics. Both these approaches lead to a more general family of CPT maps, characterized by a point within a parameter triangle. Our results show how ENM quantum dynamics can be realised easily in the laboratory. Moreover, we find a quantum time-continuously measured (quantum trajectory) realisation of the dynamics of the ENM master equation based on unitary transformations and projective measurements in an extended Hilbert space, guided by a classical Markov process. Furthermore, a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) representation of the dynamics in an extended Hilbert space can be found, with a remarkable property: there is no dynamics in the ancilla state. Finally, analogous constructions for two qubits extend these results from non-CP-divisible to non-P-divisible dynamics.

  13. The application of Markov decision process in restaurant delivery robot

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Hu, Zhen; Wang, Ying

    2017-05-01

    A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going, so traditional path planning algorithms are far from ideal. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. First, the robot uses MDR to plan a global path and then navigates along this path. When the sensor detects no obstruction ahead, the immediate reward of the current state is increased; when the sensor detects an obstacle ahead, a new global path that avoids the obstacle is planned from the current position and the immediate reward of the state is reduced. This continues until the target is reached. After the robot has learned for a period of time, it can avoid, when planning a path, those places where obstacles are often present. Analysis of the simulation experiments shows that the algorithm achieves good results for global path planning in a dynamic environment.

  14. Markov stochasticity coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  15. Stochastic Model of Seasonal Runoff Forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman; Watada, Leslie M.

    1986-03-01

    Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.

  16. Spatial estimation from remotely sensed data via empirical Bayes models

    NASA Technical Reports Server (NTRS)

    Hill, J. R.; Hinkley, D. V.; Kostal, H.; Morris, C. N.

    1984-01-01

    Multichannel satellite image data, available as LANDSAT imagery, are recorded as a multivariate time series (four channels, multiple passovers) in two spatial dimensions. The application of parametric empirical Bayes theory to classification of, and estimation of the probability of, each crop type at each of a large number of pixels is considered. This theory involves both the probability distribution of imagery data, conditional on crop types, and the prior spatial distribution of crop types. For the latter, Markov models indexed by estimable parameters are used. A broad outline of the general theory reveals several questions for further research. Some detailed results are given for the special case of two crop types when only a line transect is analyzed. Finally, the estimation of an underlying continuous process on the lattice is discussed, which would be applicable to such quantities as crop yield.

  17. Markov chains and semi-Markov models in time-to-event analysis.

    PubMed

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
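
    As a small worked illustration of the kind of multi-state Markov chain described above (it is not taken from the paper), the sketch below propagates state-occupancy probabilities through an illness-death model with made-up monthly transition probabilities and reads off a survival curve that accounts for the competing risk of death.

      import numpy as np

      # states: 0 = healthy, 1 = ill, 2 = dead (absorbing); monthly cycle length
      P = np.array([[0.92, 0.05, 0.03],
                    [0.00, 0.85, 0.15],
                    [0.00, 0.00, 1.00]])

      occupancy = [np.array([1.0, 0.0, 0.0])]      # everyone starts healthy
      for _ in range(24):                          # 24 monthly cycles
          occupancy.append(occupancy[-1] @ P)
      occupancy = np.array(occupancy)

      survival = 1.0 - occupancy[:, 2]             # probability of being alive at each cycle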

  18. Markov chains and semi-Markov models in time-to-event analysis

    PubMed Central

    Abner, Erin L.; Charnigo, Richard J.; Kryscio, Richard J.

    2014-01-01

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields. PMID:24818062

  19. Identification of market trends with string and D2-brane maps

    NASA Astrophysics Data System (ADS)

    Bartoš, Erik; Pinčák, Richard

    2017-08-01

    The multidimensional string objects are introduced as a new alternative for the application of string models to time series forecasting in trading on financial markets. The objects are represented by an open string with two endpoints and a D2-brane, which are continuous enhancements of the one-endpoint open string model. We show how the new object properties can change the statistics of the predictors, which makes them candidates for modeling a wide range of time series systems. String angular momentum is proposed as a further tool, besides the historical volatility, to analyze the stability of currency rates. To show the reliability of our approach, we present the results of real demo simulations for four currency exchange pairs.

  20. Quantitative risk stratification in Markov chains with limiting conditional distributions.

    PubMed

    Chan, David C; Pollett, Philip K; Weinstein, Milton C

    2009-01-01

    Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
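
    Concretely, for a discrete-time chain with an absorbing death state, the limiting conditional (quasi-stationary) distribution is the normalised left Perron eigenvector of the sub-stochastic matrix of transitions among the surviving states. The sketch below computes it for a made-up three-state severity model and illustrates the general idea only, not the authors' clinical application.

      import numpy as np

      def limiting_conditional_distribution(Q):
          """Quasi-stationary distribution of a Markov chain with absorption.
          Q is the sub-stochastic transition matrix among transient (alive) states."""
          vals, vecs = np.linalg.eig(Q.T)          # columns are left eigenvectors of Q
          v = np.abs(vecs[:, np.argmax(vals.real)].real)
          return v / v.sum()

      # illustrative severity states (low, medium, high); row deficits are death probabilities
      Q = np.array([[0.90, 0.07, 0.01],
                    [0.02, 0.85, 0.08],
                    [0.00, 0.03, 0.80]])
      print(limiting_conditional_distribution(Q))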

  1. Open Quantum Systems and Classical Trajectories

    NASA Astrophysics Data System (ADS)

    Rebolledo, Rolando

    2004-09-01

    A Quantum Markov Semigroup consists of a family 𝒯 = (𝒯_t)_{t ∈ ℝ₊} of normal, ω*-continuous, completely positive maps on a von Neumann algebra 𝔐 which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a 𝒯-invariant abelian subalgebra 𝔄 of 𝔐. When this happens, the restriction of 𝒯_t to 𝔄 defines a classical Markov semigroup T = (T_t)_{t ∈ ℝ₊}, associated to a classical Markov process X = (X_t)_{t ∈ ℝ₊}. The structure (𝔄, T, X) unravels the quantum Markov semigroup 𝒯, providing a bridge between open quantum systems and classical stochastic processes.

  2. Operational Markov Condition for Quantum Processes

    NASA Astrophysics Data System (ADS)

    Pollock, Felix A.; Rodríguez-Rosario, César; Frauenheim, Thomas; Paternostro, Mauro; Modi, Kavan

    2018-01-01

    We derive a necessary and sufficient condition for a quantum process to be Markovian which coincides with the classical one in the relevant limit. Our condition unifies all previously known definitions for quantum Markov processes by accounting for all potentially detectable memory effects. We then derive a family of measures of non-Markovianity with clear operational interpretations, such as the size of the memory required to simulate a process or the experimental falsifiability of a Markovian hypothesis.

  3. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  4. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  5. Positive contraction mappings for classical and quantum Schrödinger systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Tryphon T.; Pavon, Michele

    2015-03-01

    The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
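
    In the classical discrete setting, the fixed point of the Schrödinger system can be computed by a Fortet/Sinkhorn-style iteration that alternately rescales the prior joint law of the endpoints until it matches the two prescribed marginals. The sketch below is a minimal illustration under that discrete, two-marginal setup with a made-up prior kernel; it is not the operator-theoretic construction of the paper.

      import numpy as np

      def schroedinger_bridge(R, mu0, mu1, iters=500):
          """Fixed-point (Sinkhorn/IPF) solution of a two-marginal Schroedinger system.
          R is the prior joint distribution over endpoint pairs; mu0, mu1 are the
          target marginals.  Returns the bridged joint distribution."""
          a = np.ones(R.shape[0])
          b = np.ones(R.shape[1])
          for _ in range(iters):
              a = mu0 / (R @ b)
              b = mu1 / (R.T @ a)
          return a[:, None] * R * b[None, :]

      # illustrative prior: uniform start pushed through a sticky two-state kernel
      K = np.array([[0.7, 0.3], [0.4, 0.6]])
      R = np.array([0.5, 0.5])[:, None] * K
      pi = schroedinger_bridge(R, mu0=np.array([0.5, 0.5]), mu1=np.array([0.2, 0.8]))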

  6. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
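
    For a concrete sense of the two quantities being linked, the Kolmogorov-Sinai entropy rate of a stationary Markov chain is h = -sum_i pi_i sum_j P_ij log P_ij, while the mixing time is governed by the second-largest eigenvalue modulus of P. The sketch below evaluates both for a small made-up chain; it illustrates the definitions only, not the optimisation studied in the paper.

      import numpy as np

      def stationary(P):
          """Stationary distribution: left eigenvector of P for eigenvalue 1."""
          vals, vecs = np.linalg.eig(P.T)
          v = np.abs(vecs[:, np.argmin(np.abs(vals - 1.0))].real)
          return v / v.sum()

      def ks_entropy(P):
          """Kolmogorov-Sinai entropy rate of a stationary Markov chain."""
          pi = stationary(P)
          term = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
          return -np.sum(pi[:, None] * term)

      def slem(P):
          """Second-largest eigenvalue modulus, which controls the mixing time."""
          return np.sort(np.abs(np.linalg.eigvals(P)))[-2]

      P = np.array([[0.9, 0.1], [0.2, 0.8]])
      print(ks_entropy(P), slem(P))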

  7. Conditional rate derivation in the presence of intervening variables using a Markov chain.

    PubMed

    Shachtman, R H; Schoenfelder, J R; Hogue, C J

    1982-01-01

    When conducting inferential and epidemiologic studies, researchers are often interested in the distribution of time until the occurrence of some specified event, a form of incidence calculation. Furthermore, this interest often extends to the effects of intervening factors on this distribution. In this paper we impose the assumption that the phenomena being investigated are governed by a stationary Markov chain and review how one may estimate the above distribution. We then introduce and relate two different methods of investigating the effects of intervening factors. In particular, we show how an investigator may evaluate the effect of potential intervention programs. Finally, we demonstrate the proposed methodology using data from a population study.
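
    The distribution of time until the specified event can be computed directly once the chain's transition matrix is known, by restricting attention to the non-event states and accumulating one-step hitting probabilities (a phase-type calculation). The sketch below is a generic illustration of that idea with a made-up chain, not the estimator developed in the paper.

      import numpy as np

      def time_to_event_pmf(P, event_states, start, n_max):
          """P(T = n) for the first entry into any event state, for n = 1..n_max."""
          trans = np.setdiff1d(np.arange(P.shape[0]), event_states)    # non-event states
          Q = P[np.ix_(trans, trans)]                      # transitions that avoid the event
          r = P[np.ix_(trans, event_states)].sum(axis=1)   # one-step event probabilities
          alpha = np.zeros(len(trans))
          alpha[np.where(trans == start)[0][0]] = 1.0
          pmf = []
          for _ in range(n_max):
              pmf.append(alpha @ r)        # probability the event first occurs at this step
              alpha = alpha @ Q
          return np.array(pmf)

      # illustrative three-state chain in which state 2 is the event of interest
      P = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.7, 0.2],
                    [0.0, 0.0, 1.0]])
      pmf = time_to_event_pmf(P, event_states=np.array([2]), start=0, n_max=36)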

  8. A multifractal analysis of equilibrium measures for conformal expanding maps and Moran-like geometric constructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pesin, Y.; Weiss, H.

    1997-01-01

    In this paper we establish the complete multifractal formalism for equilibrium measures for Hölder continuous conformal expanding maps and expanding Markov Moran-like geometric constructions. Examples include Markov maps of an interval, beta transformations of an interval, rational maps with hyperbolic Julia sets, and conformal toral endomorphisms. We also construct a Hölder continuous homeomorphism of a compact metric space with an ergodic invariant measure of positive entropy for which the dimension spectrum is not convex, and hence the multifractal formalism fails.

  9. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.

  10. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    PubMed Central

    Dai, Yonghui; Han, Dongmei; Dai, Weihui

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been a great deal of research on forecasting the stock index. However, traditional methods are limited in achieving ideal precision in a dynamic market because of the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted renewed attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, as well as its modeling and computing technology. The method comprises initial forecasting by the improved BP neural network, division of the Markov state regions, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market. PMID:24782659

  11. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh Q.

    1992-01-01

    Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
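
    For context on the realization step mentioned above, a minimal single-input single-output version of the Eigensystem Realization Algorithm stacks pulse-response (Markov parameter) samples into block Hankel matrices, takes a singular value decomposition, and reads off a state-space model. The sketch below is a textbook-style illustration with a user-chosen model order, not the backward-time identification procedure of the paper.

      import numpy as np

      def era_siso(markov, order, rows=10, cols=10):
          """Minimal SISO Eigensystem Realization Algorithm.
          markov[k] ~ C A^(k-1) B for k >= 1 (markov[0] is the feedthrough term);
          requires len(markov) > rows + cols.  Returns (A, B, C)."""
          H0 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
          H1 = np.array([[markov[i + j + 2] for j in range(cols)] for i in range(rows)])
          U, s, Vt = np.linalg.svd(H0)
          U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
          sqrt_s = np.sqrt(s)
          A = (U / sqrt_s).T @ H1 @ (Vt.T / sqrt_s)   # S^(-1/2) U' H1 V S^(-1/2)
          B = (sqrt_s[:, None] * Vt)[:, :1]           # first column of S^(1/2) V'
          C = (U * sqrt_s)[:1, :]                     # first row of U S^(1/2)
          return A, B, C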

  12. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.

  13. Long-Term Follow-up to a Randomized Controlled Trial Comparing Peroneal Nerve Functional Electrical Stimulation to an Ankle Foot Orthosis for Patients With Chronic Stroke.

    PubMed

    Bethoux, Francois; Rogers, Helen L; Nolan, Karen J; Abrams, Gary M; Annaswamy, Thiru; Brandstater, Murray; Browne, Barbara; Burnfield, Judith M; Feng, Wuwei; Freed, Mitchell J; Geis, Carolyn; Greenberg, Jason; Gudesblatt, Mark; Ikramuddin, Farha; Jayaraman, Arun; Kautz, Steven A; Lutsep, Helmi L; Madhavan, Sangeetha; Meilahn, Jill; Pease, William S; Rao, Noel; Seetharama, Subramani; Sethi, Pramod; Turk, Margaret A; Wallis, Roi Ann; Kufta, Conrad

    2015-01-01

    Evidence supports peroneal nerve functional electrical stimulation (FES) as an effective alternative to ankle foot orthoses (AFO) for treatment of foot drop poststroke, but few long-term, randomized controlled comparisons exist. Compare changes in gait quality and function between FES and AFOs in individuals with foot drop poststroke over a 12-month period. Follow-up analysis of an unblinded randomized controlled trial (ClinicalTrials.gov #NCT01087957) conducted at 30 rehabilitation centers comparing FES to AFOs over 6 months. Subjects continued to wear their randomized device for another 6 months to final 12-month assessments. Subjects used study devices for all home and community ambulation. Multiply imputed intention-to-treat analyses were utilized; primary endpoints were tested for noninferiority and secondary endpoints for superiority. Primary endpoints: 10 Meter Walk Test (10MWT) and device-related serious adverse event rate. Secondary endpoints: 6-Minute Walk Test (6MWT), GaitRite Functional Ambulation Profile, and Modified Emory Functional Ambulation Profile (mEFAP). A total of 495 subjects were randomized, and 384 completed the 12-month follow-up. FES proved noninferior to AFOs for all primary endpoints. Both FES and AFO groups showed statistically and clinically significant improvement for 10MWT compared with initial measurement. No statistically significant between-group differences were found for primary or secondary endpoints. The FES group demonstrated statistically significant improvements for 6MWT and mEFAP Stair-time subscore. At 12 months, both FES and AFOs continue to demonstrate equivalent gains in gait speed. Results suggest that long-term FES use may lead to additional improvements in walking endurance and functional ambulation; further research is needed to confirm these findings. © The Author(s) 2015.

  14. The explicit form of the rate function for semi-Markov processes and its contractions

    NASA Astrophysics Data System (ADS)

    Sughiyama, Yuki; Kobayashi, Tetsuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by applying the contraction principle of large deviation theory to this explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
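
    For context, the level 2.5 rate function for continuous-time Markov jump processes that the paper extends is commonly written as follows (a standard form from the large-deviation literature, quoted for orientation rather than from the paper itself; w denotes the transition rates, p the empirical occupation measure and C the empirical jump frequencies):

      I_{2.5}(p, C) = \sum_{x \neq y} \left[ C_{xy} \ln \frac{C_{xy}}{p_x w_{xy}} - C_{xy} + p_x w_{xy} \right],

    subject to the stationarity constraint \sum_y (C_{xy} - C_{yx}) = 0 for every state x.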

  15. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease condition for months to years, meaning that the population of tumor cells remains nearly unchanged for quite a long time while fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. The extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
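
    To make the CTMC ingredient concrete, the sketch below estimates an extinction probability for a simple linear birth-death chain by direct Gillespie-type simulation (a hedged Python sketch; the per-cell birth and death rates are placeholders and do not represent the paper's tumor-immune-chemotherapy model).

      import numpy as np

      rng = np.random.default_rng(1)

      def extinction_probability(birth, death, n0, t_max, n_paths=2000, cap=5000):
          """Fraction of simulated CTMC paths in which the population hits zero
          before time t_max (or before reaching the cap)."""
          extinct = 0
          for _ in range(n_paths):
              n, t = n0, 0.0
              while 0 < n < cap and t < t_max:
                  total_rate = n * (birth + death)
                  t += rng.exponential(1.0 / total_rate)
                  n += 1 if rng.random() < birth / (birth + death) else -1
              extinct += (n == 0)
          return extinct / n_paths

      print(extinction_probability(birth=0.4, death=0.5, n0=10, t_max=50.0))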

  16. Reduced-order dynamic output feedback control of uncertain discrete-time Markov jump linear systems

    NASA Astrophysics Data System (ADS)

    Morais, Cecília F.; Braga, Márcio F.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.

    2017-11-01

    This paper deals with the problem of designing reduced-order robust dynamic output feedback controllers for discrete-time Markov jump linear systems (MJLS) with polytopic state space matrices and uncertain transition probabilities. Starting from a full order, mode-dependent and polynomially parameter-dependent dynamic output feedback controller, sufficient linear matrix inequality based conditions are provided for the existence of a robust reduced-order dynamic output feedback stabilising controller with complete, partial or no mode dependency, assuring an upper bound to the H2 or the H∞ norm of the closed-loop system. The main advantage of the proposed method when compared to the existing approaches is the fact that the dynamic controllers are exclusively expressed in terms of the decision variables of the problem. In other words, the matrices that define the controller realisation do not depend explicitly on the state space matrices associated with the modes of the MJLS. As a consequence, the method is specially suitable to handle order reduction or cluster availability constraints in the context of H2 or H∞ dynamic output feedback control of discrete-time MJLS. Additionally, as illustrated by means of numerical examples, the proposed approach can provide less conservative results than other conditions in the literature.

  17. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including the durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov process is used: rather than joining individual weather states, each from a single time step, which is the common solution found in the literature, the method joins small pieces of random-length time series. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
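
    A hedged Python sketch of the piecewise idea (all names and parameters are placeholders, not the paper's implementation): rather than sampling one weather state per time step, random-length pieces of the historical record are concatenated, each new piece chosen to start in the discrete weather state in which the previous piece ended.

      import numpy as np

      rng = np.random.default_rng(2)

      def generate_piecewise(series, bins, n_steps, min_len=6, max_len=48):
          """Concatenate random-length pieces of the historical series; each new
          piece starts in the same discrete weather state as the last value of
          the synthetic series so far."""
          states = np.digitize(series, bins)
          out = list(series[:min_len])                        # seed block
          while len(out) < n_steps:
              current = np.digitize([out[-1]], bins)[0]
              candidates = np.where(states[:-max_len] == current)[0]
              start = int(rng.choice(candidates))
              length = int(rng.integers(min_len, max_len + 1))
              out.extend(series[start:start + length])
          return np.array(out[:n_steps])

      # Hypothetical hourly significant-wave-height record and state boundaries.
      history = 1.5 + 0.5 * np.sin(np.arange(2000) / 24) + rng.normal(0, 0.2, 2000)
      print(generate_piecewise(history, bins=[1.0, 1.5, 2.0], n_steps=500)[:10])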

  18. A systems approach for analysis of high content screening assay data with topic modeling.

    PubMed

    Bisgin, Halil; Chen, Minjun; Wang, Yuping; Kelly, Reagan; Fang, Hong; Xu, Xiaowei; Tong, Weida

    2013-01-01

    High Content Screening (HCS) has become an important tool for toxicity assessment, partly due to its advantage of handling multiple measurements simultaneously. This approach has provided insight into, and contributed to the understanding of, systems biology at the cellular level. To fully realize this potential, the simultaneously measured multiple endpoints from a live cell should be considered in a probabilistic relationship to assess the cell's condition in response to stress from a treatment, which poses a great challenge for extracting hidden knowledge and relationships from these measurements. In this work, we applied a text mining method, Latent Dirichlet Allocation (LDA), to analyze cellular endpoints from in vitro HCS assays and related the findings to in vivo histopathological observations. We measured multiple HCS assay endpoints for 122 drugs. Since LDA requires the data to be represented in document-term format, we first converted the continuous values of the measurements to word frequencies that can be processed by the text mining tool. For each of the drugs, we generated a document for each of 4 time points. Thus, we ended up with 488 documents (drug-hour), each having different values for the 10 endpoints, which are treated as words. We extracted three topics using LDA and examined these to identify diagnostic topics for 45 drugs in common with the in vivo experiments from the Japanese Toxicogenomics Project (TGP), observing their necrosis findings at 6 and 24 hours after treatment. We found that the assay endpoints assigned to particular topics were in concordance with the histopathology observed. Drugs showing necrosis at 6 hours were linked to severe damage events such as Steatosis, DNA Fragmentation, Mitochondrial Potential, and Lysosome Mass. DNA Damage and Apoptosis were associated with drugs causing necrosis at 24 hours, suggesting an interplay of the two pathways in these drugs. Drugs with no sign of necrosis were related to the Cell Loss and Nuclear Size assays, which is suggestive of hepatocyte regeneration. The evidence from this study suggests that topic modeling with LDA can enable us to interpret relationships between endpoints of in vitro assays and an in vivo histological finding, necrosis. The effectiveness of this approach may add substantially to our understanding of systems biology.
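
    A hedged Python sketch of the preprocessing-plus-topic-model step described above, using scikit-learn's LatentDirichletAllocation (the endpoint names, the random data and the binning rule are placeholders; the paper's actual conversion of continuous values to word frequencies may differ).

      import numpy as np
      from sklearn.decomposition import LatentDirichletAllocation

      rng = np.random.default_rng(3)

      endpoints = ["DNA_Damage", "Apoptosis", "Steatosis", "Nuclear_Size", "Cell_Loss",
                   "Mito_Potential", "Lysosome_Mass", "DNA_Frag", "ROS", "Membrane_Perm"]

      # Hypothetical continuous HCS measurements: 488 documents (drug-hour) x 10 endpoints.
      raw = rng.gamma(shape=2.0, scale=1.0, size=(488, len(endpoints)))

      # One simple way to obtain pseudo word counts: scale each endpoint to [0, 10]
      # and round, so the matrix can be fed to a bag-of-words topic model.
      counts = np.rint(10 * raw / raw.max(axis=0)).astype(int)

      lda = LatentDirichletAllocation(n_components=3, random_state=0)
      doc_topics = lda.fit_transform(counts)                  # 488 x 3 topic weights

      for k, comp in enumerate(lda.components_):
          top = np.argsort(comp)[::-1][:3]
          print(f"topic {k}:", [endpoints[i] for i in top])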

  19. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
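
    A hedged Python sketch of the basic estimation step: the Markov transition matrix and stage-duration statistics computed from an epoch-by-epoch hypnogram (the hypnogram here is random placeholder data; real analyses would use the scored patient hypnograms).

      import numpy as np

      rng = np.random.default_rng(4)
      stages = ["Wake", "REM", "Light", "Deep"]

      # Placeholder 30-s epoch hypnogram encoded as indices into `stages`.
      hyp = rng.choice(4, size=960, p=[0.15, 0.20, 0.45, 0.20])

      # Empirical transition matrix between consecutive epochs.
      P = np.zeros((4, 4))
      for a, b in zip(hyp[:-1], hyp[1:]):
          P[a, b] += 1
      P /= P.sum(axis=1, keepdims=True)

      # Durations of uninterrupted runs of each stage (to inspect their distribution).
      durations = {s: [] for s in range(4)}
      run_state, run_len = int(hyp[0]), 1
      for s in hyp[1:]:
          if s == run_state:
              run_len += 1
          else:
              durations[run_state].append(run_len)
              run_state, run_len = int(s), 1
      durations[run_state].append(run_len)

      print(np.round(P, 2))
      print({stages[s]: float(np.mean(d)) for s, d in durations.items()})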

  20. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, which makes determining the zero velocity interval play a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect walking gaits with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of an analysis of the characteristics of the pedestrian walk, the output of a single-axis angular rate gyro is used to classify gait features. The angular rate data are modeled with a univariate Gaussian mixture model with three components, and a four-state left–right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum–Welch algorithm, and the sliding window Viterbi algorithm is then used to decode the gait. Walking data are collected from eight subjects walking along the same route at three different speeds; the leave-one-subject-out cross validation method is used to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity interval for different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% when compared to the angular rate threshold method.
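
    A hedged numpy sketch of the decoding step only: Viterbi decoding of a four-state left-right HMM with scalar Gaussian emissions on the angular-rate signal (the transition matrix, emission parameters and test signal are placeholders; in the paper these come from Baum-Welch training on real gyro data, with Gaussian-mixture emissions and a sliding window).

      import numpy as np
      from scipy.stats import norm

      # Placeholder left-right gait HMM: four states cycling once per stride.
      A = np.array([[0.90, 0.10, 0.00, 0.00],
                    [0.00, 0.90, 0.10, 0.00],
                    [0.00, 0.00, 0.90, 0.10],
                    [0.10, 0.00, 0.00, 0.90]])
      pi = np.array([1.0, 0.0, 0.0, 0.0])
      means  = np.array([0.0, 2.0, -1.5, 0.2])   # state 0 ~ zero-velocity (stance)
      sigmas = np.array([0.3, 0.8,  0.6, 0.3])

      def viterbi(obs):
          """Most likely state sequence (log domain) for a scalar observation series."""
          T, N = len(obs), len(pi)
          logA = np.log(A + 1e-12)
          logB = norm.logpdf(obs[:, None], means[None, :], sigmas[None, :])
          delta = np.log(pi + 1e-12) + logB[0]
          back = np.zeros((T, N), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + logA
              back[t] = scores.argmax(axis=0)
              delta = scores.max(axis=0) + logB[t]
          path = [int(delta.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      gyro = np.concatenate([np.random.normal(0.0, 0.3, 50),   # stance-like segment
                             np.random.normal(2.0, 0.8, 30)])  # swing-like segment
      print(viterbi(gyro))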

  1. Studies of regional-scale climate variability and change. Hidden Markov models and coupled ocean-atmosphere modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghil, M.; Kravtsov, S.; Robertson, A. W.

    2008-10-14

    This project was a continuation of previous work under DOE CCPP funding, in which we had developed a twin approach of probabilistic network (PN) models (sometimes called dynamic Bayesian networks) and intermediate-complexity coupled ocean-atmosphere models (ICMs) to identify the predictable modes of climate variability and to investigate their impacts on the regional scale. We had developed a family of PNs (similar to Hidden Markov Models) to simulate historical records of daily rainfall, and used them to downscale GCM seasonal predictions. Using an idealized atmospheric model, we had established a novel mechanism through which ocean-induced sea-surface temperature (SST) anomalies might influence large-scale atmospheric circulation patterns on interannual and longer time scales; we had found similar patterns in a hybrid coupled ocean-atmosphere-sea-ice model. The goal of this continuation project was to build on these ICM results and PN model development to address prediction of rainfall and temperature statistics at the local scale, associated with global climate variability and change, and to investigate the impact of the latter on coupled ocean-atmosphere modes. Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling, together with the development of associated software; new intermediate coupled models; a new methodology of inverse modeling for linking ICMs with observations and GCM results; and observational studies of decadal and multi-decadal natural climate variability, informed by ICM results.

  2. Conditional power and predictive power based on right censored data with supplementary auxiliary information.

    PubMed

    Sun, Libo; Wan, Ying

    2018-04-22

    Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information available at the interim analysis. The observed value of a time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution for the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide a framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, a smaller inflation of the type II error, and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Irreversible Markov chains in spin models: Topological excitations

    NASA Astrophysics Data System (ADS)

    Lei, Ze; Krauth, Werner

    2018-01-01

    We analyze the convergence of the irreversible event-chain Monte Carlo algorithm for continuous spin models in the presence of topological excitations. In the two-dimensional XY model, we show that the local nature of the Markov-chain dynamics leads to slow decay of vortex-antivortex correlations while spin waves decorrelate very quickly. Using a Fréchet description of the maximum vortex-antivortex distance, we quantify the contributions of topological excitations to the equilibrium correlations, and show that they vary from a dynamical critical exponent z∼ 2 at the critical temperature to z∼ 0 in the limit of zero temperature. We confirm the event-chain algorithm's fast relaxation (corresponding to z = 0) of spin waves in the harmonic approximation to the XY model. Mixing times (describing the approach towards equilibrium from the least favorable initial state) however remain much larger than equilibrium correlation times at low temperatures. We also describe the respective influence of topological monopole-antimonopole excitations and of spin waves on the event-chain dynamics in the three-dimensional Heisenberg model.

  4. Continuous-time discrete-space models for animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.

    2015-01-01

    The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.

  5. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
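
    To make the model class concrete, here is a hedged Python sketch of a discrete-time, discrete-state epidemic model with both process and observation error, in the chain-binomial style (parameter values and the binomial reporting model are placeholders, not the paper's exact specification).

      import numpy as np

      rng = np.random.default_rng(5)

      def simulate_sir(beta=0.3, gamma=0.1, report_prob=0.4, N=10000, I0=10, T=120):
          """Chain-binomial SIR: binomial process noise in infections/recoveries,
          binomial observation noise in reported cases."""
          S, I = N - I0, I0
          observed = []
          for _ in range(T):
              p_inf = 1.0 - np.exp(-beta * I / N)          # per-susceptible infection prob.
              new_inf = rng.binomial(S, p_inf)              # process error
              new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
              S -= new_inf
              I += new_inf - new_rec
              observed.append(rng.binomial(new_inf, report_prob))  # observation error
          return np.array(observed)

      print(simulate_sir()[:20])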

  6. Using Markov Chains and Multi-Objective Optimization for Energy-Efficient Context Recognition.

    PubMed

    Janko, Vito; Luštrek, Mitja

    2017-12-29

    The recognition of the user's context with wearable sensing systems is a common problem in ubiquitous computing. However, the typically small battery of such systems often makes continuous recognition impractical. The strain on the battery can be reduced if the sensor setting is adapted to each context. We propose a method that efficiently finds near-optimal sensor settings for each context. It uses Markov chains to simulate the behavior of the system in different configurations and the multi-objective genetic algorithm to find a set of good non-dominated configurations. The method was evaluated on three real-life datasets and found good trade-offs between the system's energy expenditure and the system's accuracy. One of the solutions, for example, consumed five-times less energy than the default one, while sacrificing only two percentage points of accuracy.
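
    A hedged Python sketch of the evaluation idea (placeholder numbers; the paper additionally searches over sensor settings with a multi-objective genetic algorithm): given a Markov transition matrix over user contexts and, for each context, the energy cost and accuracy of one candidate sensor setting, the long-run trade-off can be scored from the chain's stationary distribution.

      import numpy as np

      # Placeholder transition matrix over three contexts (e.g. resting, walking,
      # cycling) and per-context cost/accuracy of one candidate sensor setting.
      P = np.array([[0.95, 0.04, 0.01],
                    [0.05, 0.90, 0.05],
                    [0.02, 0.08, 0.90]])
      energy_mw = np.array([5.0, 20.0, 35.0])
      accuracy  = np.array([0.99, 0.92, 0.88])

      # Stationary distribution: left eigenvector of P for eigenvalue 1.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
      pi /= pi.sum()

      print("expected power (mW):", pi @ energy_mw)    # objective to minimise
      print("expected accuracy  :", pi @ accuracy)     # objective to maximise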

  7. Search for gravitational waves from Scorpius X-1 in the first Advanced LIGO observing run with a hidden Markov model

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allen, G.; Allocca, A.; Almoubayyed, H.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Etienne, Z. B.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Deelman, E.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. 
C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. 
D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mayani, R.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. 
E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Rynge, M.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, J. A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahi, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y.-F.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, K. W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. 
J.; Yu, Hang; Yu, Haocun; Yvert, M.; ZadroŻny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; Suvorova, S.; Moran, W.; Evans, R. J.; LIGO Scientific Collaboration; Virgo Collaboration

    2017-06-01

    Results are presented from a semicoherent search for continuous gravitational waves from the brightest low-mass X-ray binary, Scorpius X-1, using data collected during the first Advanced LIGO observing run. The search combines a frequency domain matched filter (Bessel-weighted F-statistic) with a hidden Markov model to track wandering of the neutron star spin frequency. No evidence of gravitational waves is found in the frequency range 60-650 Hz. Frequentist 95% confidence strain upper limits of h0(95%) = 4.0 × 10^-25, 8.3 × 10^-25, and 3.0 × 10^-25 for electromagnetically restricted source orientation, unknown polarization, and circular polarization, respectively, are reported at 106 Hz. They are ≤ 10 times higher than the theoretical torque-balance limit at 106 Hz.

  8. Entanglement revival can occur only when the system-environment state is not a Markov state

    NASA Astrophysics Data System (ADS)

    Sargolzahi, Iman

    2018-06-01

    Markov states have been defined for tripartite quantum systems. In this paper, we generalize the definition of the Markov states to arbitrary multipartite case and find the general structure of an important subset of them, which we will call strong Markov states. In addition, we focus on an important property of the Markov states: If the initial state of the whole system-environment is a Markov state, then each localized dynamics of the whole system-environment reduces to a localized subdynamics of the system. This provides us a necessary condition for entanglement revival in an open quantum system: Entanglement revival can occur only when the system-environment state is not a Markov state. To illustrate (a part of) our results, we consider the case that the environment is modeled as classical. In this case, though the correlation between the system and the environment remains classical during the evolution, the change of the state of the system-environment, from its initial Markov state to a state which is not a Markov one, leads to the entanglement revival in the system. This shows that the non-Markovianity of a state is not equivalent to the existence of non-classical correlation in it, in general.
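
    For reference, the standard characterisation of a tripartite Markov state that the paper generalizes can be stated as follows (a textbook formulation; the paper's notation may differ): a state \rho_{ABC} is a Markov state for the chain A - B - C exactly when its conditional mutual information vanishes,

      I(A : C \mid B)_\rho = S(\rho_{AB}) + S(\rho_{BC}) - S(\rho_{B}) - S(\rho_{ABC}) = 0,

    where S denotes the von Neumann entropy; equivalently, \rho_{ABC} can be reconstructed from \rho_{AB} by a quantum channel acting on subsystem B alone.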

  9. Multivariate generalized hidden Markov regression models with random covariates: Physical exercise in an elderly population.

    PubMed

    Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello

    2018-04-22

    A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which are the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent of the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data with respect to the fixed covariates paradigm. The class of hidden Markov regression models with random covariates is defined with a focus on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Linear system identification via backward-time observer models

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1993-01-01

    This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
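
    A hedged Python sketch of the realization step: the standard SISO Eigensystem Realization Algorithm applied to a sequence of Markov parameters (the test system is a placeholder, and the backward-time variant in the paper applies the same machinery to backward-time Markov parameters).

      import numpy as np

      def era(markov_params, n_states, p=20, q=20):
          """ERA: realize (A, B, C) from pulse response samples Y_1 = CB, Y_2 = CAB, ..."""
          Y = np.asarray(markov_params, dtype=float)
          H0 = np.array([[Y[i + j]     for j in range(q)] for i in range(p)])  # Hankel matrix
          H1 = np.array([[Y[i + j + 1] for j in range(q)] for i in range(p)])  # shifted Hankel
          U, s, Vt = np.linalg.svd(H0)
          Ur, Vr = U[:, :n_states], Vt[:n_states, :]
          S_half = np.diag(np.sqrt(s[:n_states]))
          S_half_inv = np.diag(1.0 / np.sqrt(s[:n_states]))
          A = S_half_inv @ Ur.T @ H1 @ Vr.T @ S_half_inv
          B = (S_half @ Vr)[:, :1]        # first column of the controllability factor
          C = (Ur @ S_half)[:1, :]        # first row of the observability factor
          return A, B, C

      # Placeholder pulse response of a damped second-order system.
      At = np.array([[0.9, 0.3], [0.0, 0.7]])
      Bt = np.array([[1.0], [0.5]])
      Ct = np.array([[1.0, 0.0]])
      Y = [(Ct @ np.linalg.matrix_power(At, k) @ Bt).item() for k in range(60)]

      A, B, C = era(Y, n_states=2)
      print(np.sort(np.abs(np.linalg.eigvals(A))))    # recovers approximately 0.7 and 0.9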

  11. Invariant graphs of a family of non-uniformly expanding skew products over Markov maps

    NASA Astrophysics Data System (ADS)

    Walkden, C. P.; Withers, T.

    2018-06-01

    We consider a family of skew-products of the form F(x, y) = (T(x), g_x(y)), where T is a continuous, expanding, locally eventually onto Markov map and {g_x} is a family of homeomorphisms of the fibre. A function u is said to be an invariant graph if its graph {(x, u(x))} is an invariant set for the skew-product; equivalently, u(T(x)) = g_x(u(x)). A well-studied problem is to consider the existence, regularity and dimension-theoretic properties of such functions, usually under strong contraction or expansion conditions (in terms of Lyapunov exponents or partial hyperbolicity) in the fibre direction. Here we consider such problems in a setting where the Lyapunov exponent in the fibre direction is zero on a set of periodic orbits but expands except on a neighbourhood of these periodic orbits. We prove that u either has the structure of a ‘quasi-graph’ (or ‘bony graph’) or is as smooth as the dynamics, and we give a criterion for this to happen.

  12. Improvement in latent variable indirect response joint modeling of a continuous and a categorical clinical endpoint in rheumatoid arthritis.

    PubMed

    Hu, Chuanpu; Zhou, Honghui

    2016-02-01

    Improving the quality of exposure-response modeling is important in clinical drug development. The general joint modeling of multiple endpoints is made possible in part by recent progress on latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate, when modeling a continuous and a categorical clinical endpoint, the level of improvement achievable by joint modeling in the latent variable IDR framework through the sharing of model parameters between the individual endpoints, guided by an appropriate representation of the drug and placebo mechanisms. This was illustrated with data from two phase III clinical trials of intravenously administered mAb X for the treatment of rheumatoid arthritis, in which the 28-joint disease activity score (DAS28) and 20%, 50%, and 70% improvement in the American College of Rheumatology disease severity criteria (ACR20, ACR50, and ACR70) were used as efficacy endpoints. The joint modeling framework led to a parsimonious final model with reasonable performance, evaluated by visual predictive check. The results showed that, compared with the more common approach of separately modeling the endpoints, it is possible for the joint model to be more parsimonious and yet better describe the individual endpoints. In particular, the joint model may better describe one endpoint through subject-specific random effects that would not have been estimable from data of that endpoint alone.

  13. Shape and Steepness of Toxicological Dose-Response Relationships of Continuous Endpoints

    EPA Science Inventory

    A re-analysis of a large number of historical dose-response data for continuous endpoints indicates that an exponential or a Hill model with four parameters both adequately describe toxicological dose-responses. The four parameters relate to the background response, the potency o...
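
    For orientation, the two four-parameter families commonly used for continuous dose-response data (written here in one common parameterisation; the exact form used in the re-analysis may differ) are the Hill model

      f(d) = a \left[ 1 + (c - 1) \frac{d^{n}}{b^{n} + d^{n}} \right]

    and the exponential model

      f(d) = a \left[ c - (c - 1) e^{-(d/b)^{n}} \right],

    where a is the background response, b a potency (dose-scaling) parameter, c the maximum fold change relative to background, and n the parameter governing steepness.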

  14. Indexed semi-Markov process for wind speed modeling.

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    The increasing interest in renewable energy leads scientific research to look for better ways to recover as much of the available energy as possible. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (Betz law), at a specific pitch angle and when the ratio between the output and input wind speed is equal to 1/3. The pitch angle is the angle formed between the airfoil of the blade of the wind turbine and the wind direction. Old turbines, and many of those currently on the market, have a fixed airfoil geometry, so they work with an efficiency lower than 59.3%. New generation wind turbines, instead, have a system to vary the pitch angle by rotating the blades. This system enables the wind turbines to recover the maximum energy at different wind speeds, working at the Betz limit at different speed ratios. A powerful pitch angle control system allows the wind turbine to recover energy more effectively in the transient regime. A good stochastic model for wind speed is therefore needed both to help optimize turbine design and to assist the control system in predicting the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful instrument for assisting designers in verifying the structures of wind turbines or estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order Markov chain and a second-order Markov chain. A similar work, but only for the first-order Markov chain, is conducted by [2], presenting the probability transition matrix and comparing the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model wind speed and direction is presented in [3], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. In a previous work we proposed different semi-Markov models, showing their ability to reproduce the autocorrelation structure of wind speed data. In that paper we also showed that the autocorrelation is higher than for the Markov model. Unfortunately, this autocorrelation was still too small compared to the empirical one. In order to overcome the problem of low autocorrelation, in this paper we propose an indexed semi-Markov model. More precisely, we assume that wind speed is described by a discrete time homogeneous semi-Markov process. We introduce a memory index which takes into account the periods of different wind activity. With this model the statistical characteristics of wind speed are faithfully reproduced. The wind is a very unstable phenomenon characterized by a sequence of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the predictive semi-Markovian model, the persistence of the synthetic winds was calculated and averaged.
The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated through a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802.
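
    The sketch below illustrates the plain (non-indexed) semi-Markov generator underlying this family of models: an embedded Markov chain over discretized wind speed states combined with a state-dependent waiting-time distribution (a hedged Python sketch; the speed levels, transition matrix and Weibull waiting-time parameters are placeholders, and the indexed model of the paper additionally conditions on a memory index).

      import numpy as np

      rng = np.random.default_rng(6)

      speed_levels = np.array([2.0, 5.0, 8.0, 12.0])   # m/s, one value per state

      # Placeholder embedded chain (zero diagonal) and Weibull waiting-time
      # parameters (shape, scale in time steps) per state; both would normally
      # be estimated from observed wind speed records.
      P = np.array([[0.0, 0.6, 0.3, 0.1],
                    [0.4, 0.0, 0.4, 0.2],
                    [0.2, 0.5, 0.0, 0.3],
                    [0.1, 0.3, 0.6, 0.0]])
      weib_shape = np.array([1.2, 1.5, 1.8, 1.3])
      weib_scale = np.array([8.0, 6.0, 5.0, 4.0])

      def semi_markov_series(n_steps, state=0):
          """Stay in each state for a Weibull-distributed number of steps, then
          jump according to the embedded Markov chain."""
          out = []
          while len(out) < n_steps:
              stay = max(1, int(round(weib_scale[state] * rng.weibull(weib_shape[state]))))
              out.extend([speed_levels[state]] * stay)
              state = int(rng.choice(len(P), p=P[state]))
          return np.array(out[:n_steps])

      print(semi_markov_series(40))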

  15. On the road to somewhere: Brain potentials reflect language effects on motion event perception.

    PubMed

    Flecken, Monique; Athanasopoulos, Panos; Kuipers, Jan Rouke; Thierry, Guillaume

    2015-08-01

    Recent studies have identified neural correlates of language effects on perception in static domains of experience such as colour and objects. The generalization of such effects to dynamic domains like motion events remains elusive. Here, we focus on grammatical differences between languages relevant for the description of motion events and their impact on visual scene perception. Two groups of native speakers of German or English were presented with animated videos featuring a dot travelling along a trajectory towards a geometrical shape (endpoint). English is a language with grammatical aspect in which attention is drawn to trajectory and endpoint of motion events equally. German, in contrast, is a non-aspect language which highlights endpoints. We tested the comparative perceptual saliency of trajectory and endpoint of motion events by presenting motion event animations (primes) followed by a picture symbolising the event (target): In 75% of trials, the animation was followed by a mismatching picture (both trajectory and endpoint were different); in 10% of trials, only the trajectory depicted in the picture matched the prime; in 10% of trials, only the endpoint matched the prime; and in 5% of trials both trajectory and endpoint were matching, which was the condition requiring a response from the participant. In Experiment 1 we recorded event-related brain potentials elicited by the picture in native speakers of German and native speakers of English. German participants exhibited a larger P3 wave in the endpoint match than the trajectory match condition, whereas English speakers showed no P3 amplitude difference between conditions. In Experiment 2 participants performed a behavioural motion matching task using the same stimuli as those used in Experiment 1. German and English participants did not differ in response times showing that motion event verbalisation cannot readily account for the difference in P3 amplitude found in the first experiment. We argue that, even in a non-verbal context, the grammatical properties of the native language and associated sentence-level patterns of event encoding influence motion event perception, such that attention is automatically drawn towards aspects highlighted by the grammar. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  16. H2-control and the separation principle for discrete-time jump systems with the Markov chain in a general state space

    NASA Astrophysics Data System (ADS)

    Figueiredo, Danilo Zucolli; Costa, Oswaldo Luiz do Valle

    2017-10-01

    This paper deals with the H2 optimal control problem of discrete-time Markov jump linear systems (MJLS), considering the case in which the Markov chain takes values in a general Borel space. It is assumed that the controller has access only to an output variable and to the jump parameter. The goal, in this case, is to design a dynamic Markov jump controller such that the H2-norm of the closed-loop system is minimised. It is shown that the H2-norm can be written as the sum of two H2-norms, such that one of them does not depend on the control, and the other one is obtained from the optimal filter for an infinite-horizon filtering problem. This result can be seen as a separation principle for MJLS with the Markov chain in a general Borel space, considering the infinite time horizon case.

  17. Improved longitudinal gray and white matter atrophy assessment via application of a 4-dimensional hidden Markov random field model.

    PubMed

    Dwyer, Michael G; Bergsland, Niels; Zivadinov, Robert

    2014-04-15

    SIENA and similar techniques have demonstrated the utility of performing "direct" measurements as opposed to post-hoc comparison of cross-sectional data for the measurement of whole brain (WB) atrophy over time. However, gray matter (GM) and white matter (WM) atrophy are now widely recognized as important components of neurological disease progression, and are being actively evaluated as secondary endpoints in clinical trials. Direct measures of GM/WM change with advantages similar to SIENA have been lacking. We created a robust and easily-implemented method for direct longitudinal analysis of GM/WM atrophy, SIENAX multi-time-point (SIENAX-MTP). We built on the basic halfway-registration and mask composition components of SIENA to improve the raw output of FMRIB's FAST tissue segmentation tool. In addition, we created LFAST, a modified version of FAST incorporating a 4th dimension in its hidden Markov random field model in order to directly represent time. The method was validated by scan-rescan, simulation, comparison with SIENA, and two clinical effect size comparisons. All validation approaches demonstrated improved longitudinal precision with the proposed SIENAX-MTP method compared to SIENAX. For GM, simulation showed better correlation with experimental volume changes (r=0.992 vs. 0.941), scan-rescan showed lower standard deviations (3.8% vs. 8.4%), correlation with SIENA was more robust (r=0.70 vs. 0.53), and effect sizes were improved by up to 68%. Statistical power estimates indicated a potential drop of 55% in the number of subjects required to detect the same treatment effect with SIENAX-MTP vs. SIENAX. The proposed direct GM/WM method significantly improves on the standard SIENAX technique by trading a small amount of bias for a large reduction in variance, and may provide more precise data and additional statistical power in longitudinal studies. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. [Clinical end-points and surrogate markers of pulmonary arterial hypertension in the light of evidence-based treatment].

    PubMed

    Can, Mehmet Mustafa; Kaymaz, Cihangir

    2010-08-01

    Pulmonary arterial hypertension (PAH) is a rare, fatal and progressive disease. New therapies are emerging at an accelerating pace, in parallel with the growing knowledge of the etiology and pathogenesis of PAH. Therefore, to optimize the goals of PAH-specific treatment and to determine the time to shift from monotherapy to combination therapy, simple, objective and reproducible end-points that can predict disease severity, progression rate and life expectancy are needed. The history of end-points in PAH started with the six-minute walk distance and functional capacity, and continues with newer parameters (biochemical markers, time to clinical worsening, echocardiography, magnetic resonance imaging, etc.) that may better reflect clinical outcome.

  19. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
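
    A hedged Python sketch of one concrete instance of this kind of update, for a stationary-average constraint: the prior transition matrix is exponentially tilted by the constrained observable and renormalized with the dominant eigenvector of the tilted matrix (a Doob-transform construction); the example matrix, the observable and the crude bisection over the tilting parameter are placeholders, not the paper's algorithm.

      import numpy as np

      def tilted_model(P, f, gamma):
          """Tilt P by exp(gamma * f) column-wise and renormalize with the dominant
          right eigenvector so that rows again sum to one."""
          W = P * np.exp(gamma * f)[None, :]
          w, v = np.linalg.eig(W)
          k = np.argmax(w.real)
          rho, u = w[k].real, np.abs(v[:, k].real)
          Q = W * u[None, :] / (rho * u[:, None])
          return Q / Q.sum(axis=1, keepdims=True)    # guard against round-off

      def stationary(P):
          w, v = np.linalg.eig(P.T)
          pi = np.abs(v[:, np.argmax(w.real)].real)
          return pi / pi.sum()

      # Prior Markov model and an observable whose stationary mean is constrained.
      P = np.array([[0.8, 0.2, 0.0],
                    [0.1, 0.8, 0.1],
                    [0.0, 0.3, 0.7]])
      f = np.array([0.0, 1.0, 2.0])
      target = 1.2                                   # prior stationary mean is about 0.91

      lo, hi = -5.0, 5.0                             # bisection on the tilting parameter
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if stationary(tilted_model(P, f, mid)) @ f < target else (lo, mid)

      Q = tilted_model(P, f, 0.5 * (lo + hi))
      print(np.round(Q, 3), stationary(Q) @ f)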

  20. Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets

    NASA Astrophysics Data System (ADS)

    Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola

    2017-11-01

    This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset is described as a function of a homogeneous discrete time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated and the mean time to the default is expressed in a closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model on real data and their analysis is discussed.
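
    A hedged Python sketch of the closed-form quantity involved: for a discrete-time chain with an absorbing default (catastrophe) state, the mean time to default from each transient state is (I - Q)^(-1) 1, where Q is the transient-to-transient block of the transition matrix (the rating states and probabilities below are placeholders, not estimates from the paper's data).

      import numpy as np

      # Placeholder one-step transition matrix over rating states A, B, C and an
      # absorbing default state D (last row and column).
      P = np.array([[0.90, 0.07, 0.02, 0.01],
                    [0.05, 0.85, 0.07, 0.03],
                    [0.01, 0.10, 0.79, 0.10],
                    [0.00, 0.00, 0.00, 1.00]])

      Q = P[:3, :3]                                   # transient-to-transient block
      mean_time = np.linalg.solve(np.eye(3) - Q, np.ones(3))

      for name, m in zip("ABC", mean_time):
          print(f"mean time to default from {name}: {m:.1f} periods")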

  1. An Analytical Framework for Runtime of a Class of Continuous Evolutionary Algorithms.

    PubMed

    Zhang, Yushan; Hu, Guiwu

    2015-01-01

    Although there have been many studies on the runtime of evolutionary algorithms in discrete optimization, relatively few theoretical results are available for continuous optimization, such as evolutionary programming (EP). This paper proposes an analysis of the runtime of two EP algorithms based on Gaussian and Cauchy mutations, using an absorbing Markov chain. Given a constant variation, we calculate upper bounds on the runtime of special Gaussian mutation EP and Cauchy mutation EP. Our analysis reveals that the upper bounds are affected by the number of individuals, the problem dimension n, the search range, and the Lebesgue measure of the optimal neighborhood. Furthermore, we provide conditions under which the average runtime of the considered EP is no more than a polynomial in n; the condition is that the Lebesgue measure of the optimal neighborhood is larger than a combination of an exponential term and the given polynomial in n.

  2. Estimation of sojourn time in chronic disease screening without data on interval cases.

    PubMed

    Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W

    2000-03-01

    Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (cases diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate the relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.
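
    A hedged Python sketch of the simplest of the three models, viewed as a progressive three-state CTMC (disease-free -> preclinical screen-detectable -> clinical): with exponential sojourn times the MST equals the reciprocal of the preclinical-to-clinical rate, and inter-screen transition probabilities follow from the matrix exponential of the intensity matrix (the rates below are placeholders, not the estimates reported above).

      import numpy as np
      from scipy.linalg import expm

      # Placeholder intensity matrix: disease-free (0) -> preclinical (1) -> clinical (2).
      lam1, lam2 = 0.004, 0.5                    # transitions per person-year
      Q = np.array([[-lam1,  lam1,   0.0],
                    [  0.0, -lam2,  lam2],
                    [  0.0,   0.0,   0.0]])

      mst = 1.0 / lam2                           # mean sojourn time in the preclinical state
      print("MST (years):", mst)

      # Transition probabilities over a 2-year inter-screening interval.
      P2 = expm(2.0 * Q)
      print("P(disease-free -> preclinical in 2 years):", round(P2[0, 1], 4))
      print("P(disease-free -> clinical in 2 years):   ", round(P2[0, 2], 5))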

  3. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge, and therefore are accessible to a wide range of researchers. PMID:24662996

  4. Phasic Triplet Markov Chains.

    PubMed

    El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar

    2014-11-01

    Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameter estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.

  5. First and second order semi-Markov chains for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Prattico, F.; Petroni, F.; D'Amico, G.

    2012-04-01

    The increasing interest in renewable energy leads scientific research to seek better ways to recover as much of the available energy as possible. In particular, the maximum energy recoverable from wind equals 59.3% of the available energy (Betz law), reached at a specific pitch angle and when the ratio between the output and input wind speeds is 1/3. The pitch angle is the angle between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently on the market, have a fixed airfoil geometry, so they operate at an efficiency lower than 59.3%. New-generation wind turbines instead have a system that varies the pitch angle by rotating the blades, enabling them to recover the maximum energy at different wind speeds, i.e., to work at the Betz limit over a range of speed ratios. A powerful pitch-angle control system also allows the turbine to recover energy more effectively in the transient regime. A good stochastic model for wind speed is therefore needed, both to support turbine design optimization and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful tool for verifying turbine structures or estimating the energy recoverable at a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [3] presents a comparison between a first-order Markov chain and a second-order Markov chain. A similar study, restricted to the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model wind speed and direction is presented in [1], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. To address this issue, we apply new models that generalize Markov models; more precisely, we apply semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMP) are a wide class of stochastic processes that generalize both Markov chains and renewal processes. Their main advantage is the ability to use any waiting-time distribution to model the time until a transition from one state to another. This greater flexibility comes at a price: more data are needed to estimate the larger number of model parameters. Data availability is not an issue in wind speed studies, so semi-Markov models can be used in a statistically efficient way.
In this work we present three different semi-Markov chain models. The first is a first-order SMP in which the transition probability between two speed states (at times Tn-1 and Tn) depends on the initial state (the state at Tn-1), the final state (the state at Tn), and the waiting time t = Tn - Tn-1. The second is a second-order SMP in which the transition probabilities also depend on the state occupied before the initial state (the state at Tn-2). The third is again a second-order SMP in which the transition probabilities depend on the three states at Tn-2, Tn-1 and Tn and on the waiting times t_1 = Tn-1 - Tn-2 and t_2 = Tn - Tn-1. The three models are used to generate synthetic wind speed time series by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare the statistical properties of the proposed models with those of real data and with a time series generated by a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy, 28/2003 1787-1802. [2] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30/2005 693-708. [3] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29/2004, 1407-1418.
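
    A minimal sketch of the baseline against which the semi-Markov models are compared: estimating a first-order Markov transition matrix from a discretized wind-speed series and generating a synthetic series by Monte Carlo simulation; the ten-class discretization and the toy data are illustrative.

```python
import numpy as np

def fit_markov(states, n_states):
    """Estimate a first-order transition matrix from a discretized wind-speed series."""
    P = np.ones((n_states, n_states)) * 1e-6           # small prior to avoid empty rows
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

def simulate(P, start, length, seed=0):
    rng = np.random.default_rng(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return np.array(out)

rng = np.random.default_rng(42)
speeds = np.abs(rng.normal(7, 3, size=5000))            # toy hourly wind speeds (m/s)
edges = np.linspace(0, speeds.max() + 1e-9, 11)         # 10 speed classes
states = np.clip(np.digitize(speeds, edges) - 1, 0, 9)

P = fit_markov(states, 10)
synthetic_states = simulate(P, states[0], 5000)
synthetic_speeds = 0.5 * (edges[synthetic_states] + edges[synthetic_states + 1])
print(P.round(2))
```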

  6. Stochastic differential equation model for linear growth birth and death processes with immigration and emigration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granita, E-mail: granitafc@gmail.com; Bahar, A.

    This paper discusses the approximation of the linear birth and death process with immigration and emigration (BIDE) by a stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), combined with a central-difference approximation, is used to obtain the Fokker-Planck equation of a diffusion process whose stochastic differential equation corresponds to the BIDE process. The exact solution and the mean and variance functions of the BIDE process are derived.
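
    A hedged sketch of simulating such a diffusion approximation with the Euler-Maruyama scheme; the drift and diffusion terms below follow the usual Fokker-Planck approximation of a linear birth-death process with immigration and emigration (birth rate lam*X + nu, death rate mu*X + eps) and are illustrative rather than the paper's exact derivation.

```python
import numpy as np

def simulate_bide_sde(x0, lam, mu, nu, eps, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama for dX = ((lam-mu)X + nu - eps) dt + sqrt((lam+mu)X + nu + eps) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = (lam - mu) * x[k] + nu - eps
        var = max((lam + mu) * x[k] + nu + eps, 0.0)    # keep the diffusion term non-negative
        x[k + 1] = max(x[k] + drift * dt + np.sqrt(var * dt) * rng.normal(), 0.0)
    return x

path = simulate_bide_sde(x0=50, lam=0.6, mu=0.5, nu=2.0, eps=1.0)
print(path[-1])
```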

  7. Markov chains of infinite order and asymptotic satisfaction of balance: application to the adaptive integration method.

    PubMed

    Earl, David J; Deem, Michael W

    2005-04-14

    Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.

  8. A Langevin equation for the rates of currency exchange based on the Markov analysis

    NASA Astrophysics Data System (ADS)

    Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2007-11-01

    We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process, and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D(1) and D(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic time series that represent the daily rates of exchange of various currencies versus the U.S. dollar.
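
    A minimal sketch of estimating state-dependent drift and diffusion coefficients, D(1)(x) and D(2)(x), directly from conditional moments of the increments, demonstrated on synthetic Ornstein-Uhlenbeck data; the binning scheme and the test process are illustrative.

```python
import numpy as np

def estimate_km_coefficients(x, dt, n_bins=30):
    """Bin-conditioned estimates of drift D1(x) and diffusion D2(x) from a time series."""
    dx = np.diff(x)
    bins = np.digitize(x[:-1], np.linspace(x.min(), x.max(), n_bins + 1)) - 1
    bins = np.clip(bins, 0, n_bins - 1)
    D1 = np.full(n_bins, np.nan)
    D2 = np.full(n_bins, np.nan)
    for b in range(n_bins):
        inc = dx[bins == b]
        if len(inc) > 10:
            D1[b] = inc.mean() / dt                 # drift: conditional mean increment
            D2[b] = (inc**2).mean() / (2 * dt)      # diffusion: conditional second moment
    return D1, D2

# toy data: Ornstein-Uhlenbeck process, dX = -theta*X dt + sigma dW
rng = np.random.default_rng(0)
dt, theta, sigma = 0.01, 1.0, 0.5
x = np.zeros(100_000)
for k in range(len(x) - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()

D1, D2 = estimate_km_coefficients(x, dt)
print(np.nanmean(D2))   # should be close to sigma**2 / 2 = 0.125
```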

  9. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    NASA Astrophysics Data System (ADS)

    Demirer, Nazli

    The control of systems with autonomous mobile agents has been a point of interest recently, with many applications like surveillance, coverage, searching over an area with probabilistic target locations or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems with a hierarchical control structure in which whole-swarm coordination is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution leads to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms that achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain based method to control the density distribution of the whole system; the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined by the mission requirements; for example, in area search the desired distribution should closely match the probabilistic target locations. The proposed method is applicable both to systems with a single agent and to systems with a large number of agents because of its probabilistic nature, in which the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem takes the form of a Linear Matrix Inequality (LMI) problem, with an LMI formulation of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way using existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions; that is, there exists a set of actions for the ON mode and another set for the OFF mode.
We formulate a new Markov chain synthesis problem, with additional measurements of the state transitions, in which a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
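
    A minimal sketch of the underlying idea of density control with a column-stochastic Markov matrix whose stationary distribution equals the desired density; the Metropolis-Hastings construction below is a standard stand-in for the LMI/QP synthesis described above, and the bins, adjacency and desired density are illustrative.

```python
import numpy as np

def metropolis_matrix(desired, adjacency):
    """Column-stochastic Markov matrix M with M @ desired == desired,
    allowing transitions only along the given adjacency (motion constraints)."""
    n = len(desired)
    deg = adjacency.sum(axis=1)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                # probability of an agent moving from bin j to bin i
                M[i, j] = min(1.0, desired[i] * deg[j] / (desired[j] * deg[i])) / deg[j]
    M[np.diag_indices(n)] = 1.0 - M.sum(axis=0)         # remaining mass stays put
    return M

# 5 bins on a line graph; desired steady-state density
desired = np.array([0.1, 0.15, 0.5, 0.15, 0.1])
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)

M = metropolis_matrix(desired, A)
x = np.full(5, 0.2)                      # start from a uniform agent density
for _ in range(200):
    x = M @ x
print(np.round(x, 3))                    # converges toward `desired`
```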

  10. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
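
    A sketch of the coupled Riccati-like backward recursions for the finite-horizon Markov jump linear-quadratic problem, written in the usual textbook notation; the two-mode scalar system and its parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, Pi, horizon):
    """Backward recursion for discrete-time Markov jump LQ control.
    A, B, Q, R: lists indexed by mode; Pi[i, j] = transition prob. from mode i to j."""
    n_modes = len(A)
    P = [Q[i].copy() for i in range(n_modes)]            # terminal cost P_i(N) = Q_i
    gains = []
    for _ in range(horizon):
        # mode-coupled expectation E_i = sum_j Pi[i, j] * P_j
        E = [sum(Pi[i, j] * P[j] for j in range(n_modes)) for i in range(n_modes)]
        K, P_new = [], []
        for i in range(n_modes):
            S = R[i] + B[i].T @ E[i] @ B[i]
            Ki = np.linalg.solve(S, B[i].T @ E[i] @ A[i])
            K.append(Ki)
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ Ki))
        gains.append(K)
        P = P_new
    return P, gains[::-1]                                # gains ordered forward in time

# two-mode scalar example (illustrative numbers)
A = [np.array([[1.1]]), np.array([[0.7]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pi = np.array([[0.9, 0.1], [0.2, 0.8]])

P, gains = coupled_riccati(A, B, Q, R, Pi, horizon=50)
print(P[0], gains[0][0])
```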

  11. Markovian prediction of future values for food grains in the economic survey

    NASA Astrophysics Data System (ADS)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating the future and current values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. The economic performance of different levels of artificial fertilization timing in estrus detection is also evaluated using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.

  12. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

    A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
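
    A minimal sketch of Monte Carlo simulation of a semi-Markov process in which each transition has its own sojourn-time distribution; the three-state reliability example, the Weibull/exponential sojourn laws and the parameters are illustrative and do not reproduce the study's four-element model.

```python
import numpy as np

def simulate_semi_markov(P, sojourn_samplers, start, t_max, rng):
    """Simulate one path of a semi-Markov process until time t_max.
    P[i] is the embedded-chain transition distribution out of state i,
    sojourn_samplers[(i, j)] draws the holding time for a transition i -> j."""
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        nxt = rng.choice(len(P), p=P[state])
        dt = sojourn_samplers[(state, nxt)](rng)
        if t + dt > t_max:
            return path
        t, state = t + dt, nxt
        path.append((t, state))

# 3-state example: 0 = operating, 1 = degraded, 2 = failed
P = np.array([[0.0, 0.9, 0.1],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])           # repair returns the system to state 0
samplers = {
    (0, 1): lambda r: r.weibull(1.5) * 100.0,
    (0, 2): lambda r: r.weibull(0.8) * 200.0,
    (1, 0): lambda r: r.exponential(20.0),
    (1, 2): lambda r: r.weibull(2.0) * 50.0,
    (2, 0): lambda r: r.exponential(10.0),
}
rng = np.random.default_rng(3)
failure_times = []
for _ in range(1000):
    path = simulate_semi_markov(P, samplers, start=0, t_max=1000.0, rng=rng)
    fails = [t for t, s in path if s == 2]
    if fails:
        failure_times.append(fails[0])
print(np.mean(failure_times))            # mean time to first failure over the runs
```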

  13. Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.

    PubMed

    Difilippo, Erica L; Brusseau, Mark L

    2008-05-26

    The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship, end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, both for which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass-transfer and displacement). Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass removal was inefficient (non-ideal mass-transfer and displacement). The mass-flux-reduction/mass-removal relationship for the latter site exhibited a multi-step behavior, which cannot be predicted using some of the available simple estimation functions.

  14. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models

    PubMed Central

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful. PMID:28626348

  15. GPU-powered Shotgun Stochastic Search for Dirichlet process mixtures of Gaussian Graphical Models.

    PubMed

    Mukherjee, Chiranjit; Rodriguez, Abel

    2016-01-01

    Gaussian graphical models are popular for modeling high-dimensional multivariate data with sparse conditional dependencies. A mixture of Gaussian graphical models extends this model to the more realistic scenario where observations come from a heterogeneous population composed of a small number of homogeneous sub-groups. In this paper we present a novel stochastic search algorithm for finding the posterior mode of high-dimensional Dirichlet process mixtures of decomposable Gaussian graphical models. Further, we investigate how to harness the massive thread-parallelization capabilities of graphical processing units to accelerate computation. The computational advantages of our algorithms are demonstrated with various simulated data examples in which we compare our stochastic search with a Markov chain Monte Carlo algorithm in moderate dimensional data examples. These experiments show that our stochastic search largely outperforms the Markov chain Monte Carlo algorithm in terms of computing times and in terms of the quality of the posterior mode discovered. Finally, we analyze a gene expression dataset in which Markov chain Monte Carlo algorithms are too slow to be practically useful.

  16. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely-used local polynomial method, and has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model, and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  17. Biomarkers and Surrogate Endpoints in Drug Development: A European Regulatory View.

    PubMed

    Wickström, Kerstin; Moseley, Jane

    2017-05-01

    To give a European regulatory overview of the requirements on and the use of biomarkers or surrogate endpoints in the development of drugs for ocular disease. Definitions, methods to validate new markers, and circumstances where surrogate endpoints can be appropriate are summarized. The key endpoints that have been used in registration studies so far are based on visual acuity, signs, and symptoms, or on surrogate endpoints. In some ocular conditions, established outcome measures such as those based on visual acuity or visual field are not feasible (as with slowly progressing diseases), or lack relevance (e.g., when central visual acuity may be preserved even though the patient is legally blind owing to a severely restricted visual field, or vice versa). There are several ocular conditions for which there is an unmet medical need. In some of these conditions, surrogate endpoints as well as new clinical endpoints are needed to help speed up patient access to new medicines. Interaction with European regulators through the pathway specific for the development of biomarkers or novel methods is encouraged.

  18. Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.

    PubMed

    Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng

    2018-01-01

    In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovations, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model, which identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
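
    A hedged sketch of only the regime-switching GJR-GARCH piece of such a pipeline, with plain normal innovations and an empirical quantile standing in for the copula/EVT machinery; all parameter values are illustrative.

```python
import numpy as np

def simulate_ms_gjr_garch(T, P, params, seed=0):
    """Simulate returns from a 2-regime Markov-switching GJR-GARCH(1,1) with normal innovations."""
    rng = np.random.default_rng(seed)
    s = 0                                                     # current regime
    h = params[s]["omega"] / (1 - params[s]["alpha"] - params[s]["beta"])  # rough initial variance
    r = np.empty(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])                             # regime transition
        omega, alpha, gamma, beta = (params[s][k] for k in ("omega", "alpha", "gamma", "beta"))
        z = rng.normal()
        r[t] = np.sqrt(h) * z
        # GJR-GARCH update: extra gamma term when the return is negative
        h = omega + (alpha + gamma * (r[t] < 0)) * r[t] ** 2 + beta * h
    return r

P = np.array([[0.98, 0.02], [0.05, 0.95]])                    # calm vs. crisis regime persistence
params = [dict(omega=1e-6, alpha=0.03, gamma=0.05, beta=0.90),   # calm regime
          dict(omega=5e-6, alpha=0.08, gamma=0.12, beta=0.85)]   # crisis regime

r = simulate_ms_gjr_garch(50_000, P, params)
var_99 = -np.quantile(r, 0.01)                                # one-period 99% Value-at-Risk
print(round(var_99, 4))
```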

  19. Oncology Modeling for Fun and Profit! Key Steps for Busy Analysts in Health Technology Assessment.

    PubMed

    Beca, Jaclyn; Husereau, Don; Chan, Kelvin K W; Hawkins, Neil; Hoch, Jeffrey S

    2018-01-01

    In evaluating new oncology medicines, two common modeling approaches are state transition (e.g., Markov and semi-Markov) and partitioned survival. Partitioned survival models have become more prominent in oncology health technology assessment processes in recent years. Our experience in conducting and evaluating models for economic evaluation has highlighted many important and practical pitfalls. As there is little guidance available on best practices for those who wish to conduct them, we provide guidance in the form of 'Key steps for busy analysts,' who may have very little time and require highly favorable results. Our guidance highlights the continued need for rigorous conduct and transparent reporting of economic evaluations regardless of the modeling approach taken, and the importance of modeling that better reflects reality, which includes better approaches to considering plausibility, estimating relative treatment effects, dealing with post-progression effects, and appropriate characterization of the uncertainty from modeling itself.

  20. Using Markov Chains and Multi-Objective Optimization for Energy-Efficient Context Recognition †

    PubMed Central

    Janko, Vito

    2017-01-01

    The recognition of the user’s context with wearable sensing systems is a common problem in ubiquitous computing. However, the typically small battery of such systems often makes continuous recognition impractical. The strain on the battery can be reduced if the sensor setting is adapted to each context. We propose a method that efficiently finds near-optimal sensor settings for each context. It uses Markov chains to simulate the behavior of the system in different configurations and the multi-objective genetic algorithm to find a set of good non-dominated configurations. The method was evaluated on three real-life datasets and found good trade-offs between the system’s energy expenditure and the system’s accuracy. One of the solutions, for example, consumed five-times less energy than the default one, while sacrificing only two percentage points of accuracy. PMID:29286301

  1. Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating continuous-time Markov chains (CTMCs), and then use Markov Reward Models (MRMs) to compute performance measures for embedded systems. This framework is employed to analyze two low-cost, high-performance embedded controllers, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. Numerical results show that, in terms of both power consumption and interrupt overhead, Cortex-M3 performs better than ARM7.
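
    A minimal sketch of the CTMC/Markov-reward step of this kind of framework: solving pi Q = 0 for the steady-state distribution of a generator matrix and weighting it by per-state rewards (e.g., power draw); the three-state generator and reward values are illustrative and are not the PIPE2 models of the paper.

```python
import numpy as np

def ctmc_steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for an irreducible generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # append the normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# toy generator: states = (idle, interrupt service, nested interrupt); rows sum to zero
Q = np.array([[-2.0,  2.0,  0.0],
              [ 8.0, -9.0,  1.0],
              [ 0.0, 15.0, -15.0]])
power_mw = np.array([1.0, 12.0, 15.0])        # illustrative per-state power draw

pi = ctmc_steady_state(Q)
print(pi, "expected power:", pi @ power_mw)   # steady-state reward rate
```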

  2. Markov chain-incorporated and synthetic data-supported conditional artificial neural network models for forecasting monthly precipitation in arid regions

    NASA Astrophysics Data System (ADS)

    Aksoy, Hafzullah; Dahamsheh, Ahmad

    2018-07-01

    For forecasting monthly precipitation in an arid region, the feed-forward back-propagation, radial basis function and generalized regression artificial neural networks (ANNs) are used in this study. The ANN models are improved by incorporating a Markov chain-based algorithm (MC-ANNs), with which the percentage of dry months is forecast perfectly, thus eliminating the generation of non-physical negative precipitation. Because recorded precipitation time series are usually shorter than the length needed for a proper calibration of ANN models, synthetic monthly precipitation data are generated by the Thomas-Fiering model to further improve the forecasting performance. For case studies from Jordan, it is seen that only a slightly better performance is achieved with the use of MC and synthetic data alone. A conditional statement is therefore established and embedded into the ANN models, after the incorporation of MC and the support of synthetic data, to substantially improve the ability of the models to forecast monthly precipitation in arid regions.

  3. When to stop managing or surveying cryptic threatened species

    PubMed Central

    Chadès, Iadine; McDonald-Madden, Eve; McCarthy, Michael A.; Wintle, Brendan; Linkie, Matthew; Possingham, Hugh P.

    2008-01-01

    Threatened species become increasingly difficult to detect as their populations decline. Managers of such cryptic threatened species face several dilemmas: if they are not sure the species is present, should they continue to manage for that species or invest the limited resources in surveying? We find optimal solutions to this problem using a Partially Observable Markov Decision Process and rules of thumb derived from an analytical approximation. We discover that managing a protected area for a cryptic threatened species can be optimal even if we are not sure the species is present. The more threatened and valuable the species is, relative to the costs of management, the more likely we are to manage this species without determining its continued persistence by using surveys. If a species remains unseen, our belief in the persistence of the species declines to a point where the optimal strategy is to shift resources from saving the species to surveying for it. Finally, when surveys lead to a sufficiently low belief that the species is extant, we surrender resources to other conservation actions. We illustrate our findings with a case study using parameters based on the critically endangered Sumatran tiger (Panthera tigris sumatrae), and we generate rules of thumb on how to allocate conservation effort for any cryptic species. Using Partially Observable Markov Decision Processes in conservation science, we determine the conditions under which it is better to abandon management for that species because our belief that it continues to exist is too low. PMID:18779594

  4. When to stop managing or surveying cryptic threatened species.

    PubMed

    Chadès, Iadine; McDonald-Madden, Eve; McCarthy, Michael A; Wintle, Brendan; Linkie, Matthew; Possingham, Hugh P

    2008-09-16

    Threatened species become increasingly difficult to detect as their populations decline. Managers of such cryptic threatened species face several dilemmas: if they are not sure the species is present, should they continue to manage for that species or invest the limited resources in surveying? We find optimal solutions to this problem using a Partially Observable Markov Decision Process and rules of thumb derived from an analytical approximation. We discover that managing a protected area for a cryptic threatened species can be optimal even if we are not sure the species is present. The more threatened and valuable the species is, relative to the costs of management, the more likely we are to manage this species without determining its continued persistence by using surveys. If a species remains unseen, our belief in the persistence of the species declines to a point where the optimal strategy is to shift resources from saving the species to surveying for it. Finally, when surveys lead to a sufficiently low belief that the species is extant, we surrender resources to other conservation actions. We illustrate our findings with a case study using parameters based on the critically endangered Sumatran tiger (Panthera tigris sumatrae), and we generate rules of thumb on how to allocate conservation effort for any cryptic species. Using Partially Observable Markov Decision Processes in conservation science, we determine the conditions under which it is better to abandon management for that species because our belief that it continues to exist is too low.
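
    A minimal sketch of the belief dynamics behind these rules of thumb: the belief that the species is extant, updated each year after an unsuccessful survey under an assumed annual local-extinction probability and an imperfect per-survey detection probability; the parameters and decision thresholds are illustrative, not the optimal POMDP policy.

```python
def update_belief(b, p_extinct=0.05, p_detect=0.4):
    """Belief that the species is extant after one more year with a survey and no detection."""
    b_survives = b * (1.0 - p_extinct)                 # persistence through the year
    # Bayes update on a non-detection: P(no det | extant) = 1 - p_detect, P(no det | extinct) = 1
    return b_survives * (1.0 - p_detect) / (b_survives * (1.0 - p_detect) + (1.0 - b_survives))

b = 0.95
for year in range(1, 16):
    b = update_belief(b)
    action = "manage" if b > 0.5 else ("survey" if b > 0.1 else "stop")
    print(year, round(b, 3), action)
```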

  5. Differential transfer processes in incremental visuomotor adaptation.

    PubMed

    Seidler, Rachel D

    2005-01-01

    Visuomotor adaptive processes were examined by testing transfer of adaptation between similar conditions. Participants made manual aiming movements with a joystick to hit targets on a computer screen, with real-time feedback display of their movement. They adapted to three different rotations of the display in a sequential fashion, with a return to baseline display conditions between rotations. Adaptation was better when participants had prior adaptive experiences. When performance was assessed using direction error (calculated at the time of peak velocity) and initial endpoint error (error before any overt corrective actions), transfer was greater when the final rotation reflected an addition of previously experienced rotations (adaptation order 30 degrees rotation, 15 degrees, 45 degrees) than when it was a subtraction of previously experienced conditions (adaptation order 45 degrees rotation, 15 degrees, 30 degrees). Transfer was equal regardless of adaptation order when performance was assessed with final endpoint error (error following any discrete, corrective actions). These results imply the existence of multiple independent processes in visuomotor adaptation.

  6. Modeling carbachol-induced hippocampal network synchronization using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Dragomir, Andrei; Akay, Yasemin M.; Akay, Metin

    2010-10-01

    In this work we studied the neural state transitions undergone by the hippocampal neural network using a hidden Markov model (HMM) framework. We first employed a measure based on the Lempel-Ziv (LZ) estimator to characterize the changes in the hippocampal oscillation patterns in terms of their complexity. These oscillations correspond to different modes of hippocampal network synchronization induced by the cholinergic agonist carbachol in the CA1 region of the mouse hippocampus. HMMs are then used to model the dynamics of the LZ-derived complexity signals as first-order Markov chains. Consequently, the signals corresponding to our oscillation recordings can be segmented into a sequence of statistically discriminated hidden states. The segmentation is used for detecting transitions in neural synchronization modes in data recorded from wild-type and triple transgenic mouse models (3xTG) of Alzheimer's disease (AD). Our data suggest that the transition from a low-frequency (delta range) continuous oscillation mode into a high-frequency (theta range) oscillation mode, exhibiting repeated burst-type patterns, always occurs through a mode resembling a mixture of the two patterns, continuous with burst. The relatively random patterns of oscillation during this mode may reflect the fact that the neuronal network undergoes re-organization. Further insight into the time durations of these modes (retrieved via the HMM segmentation of the LZ-derived signals) reveals that the mixed mode lasts significantly longer (p < 10^-4) in 3xTG AD mice. These findings, coupled with the documented cholinergic neurotransmission deficits in the 3xTG mouse model, may be highly relevant for the case of AD.

  7. IEEE 802.15.4 MAC with GTS transmission for heterogeneous devices with application to wheelchair body-area sensor networks.

    PubMed

    Shrestha, Bharat; Hossain, Ekram; Camorlinga, Sergio

    2011-09-01

    In wireless personal area networks, such as wireless body-area sensor networks, stations or devices have different bandwidth requirements and, thus, create heterogeneous traffics. For such networks, the IEEE 802.15.4 medium access control (MAC) can be used in the beacon-enabled mode, which supports guaranteed time slot (GTS) allocation for time-critical data transmissions. This paper presents a general discrete-time Markov chain model for the IEEE 802.15.4-based networks taking into account the slotted carrier sense multiple access with collision avoidance and GTS transmission phenomena together in the heterogeneous traffic scenario and under nonsaturated condition. For this purpose, the standard GTS allocation scheme is modified. For each non-identical device, the Markov model is solved and the average service time and the service utilization factor are analyzed in the non-saturated mode. The analysis is validated by simulations using network simulator version 2.33. Also, the model is enhanced with a wireless propagation model and the performance of the MAC is evaluated in a wheelchair body-area sensor network scenario.

  8. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    ERIC Educational Resources Information Center

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  9. Driving style recognition method using braking characteristics based on hidden Markov model

    PubMed Central

    Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen

    2017-01-01

    Given the advantages of the hidden Markov model in dealing with time-series data, and for the purpose of identifying driving style, three driving styles (aggressive, moderate and mild) are modeled with hidden Markov models based on driver braking characteristics. First, the braking impulse and the maximum braking unit area of the vacuum booster within a certain time window are collected from braking operations, and general braking and emergency braking characteristics are extracted to code the braking behavior. Second, the braking-behavior observation sequences are used to set the initial parameters of the hidden Markov models, and the models are trained so that an observation sequence can be judged against each driving style. Third, the log-likelihood of an observation sequence under each style model is computed from the observable parameters, and the style with the maximum likelihood is selected. The recognition accuracy of the algorithm is verified through experiments and compared with two common pattern recognition algorithms. The results show that driving style discrimination based on the hidden Markov model algorithm achieves effective discrimination of driving style. PMID:28837580
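
    A minimal sketch of the classification step, assuming discrete (coded) braking observations: the scaled forward algorithm scores an observation sequence under each style's HMM and the style with the maximum log-likelihood is selected; the model parameters below are illustrative placeholders rather than trained values.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM with start pi, transitions A, emissions B)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# three illustrative style models over 3 coded braking symbols
# (0 = gentle braking, 1 = firm braking, 2 = emergency braking)
styles = {
    "mild":       (np.array([0.7, 0.3]), np.array([[0.9, 0.1], [0.3, 0.7]]),
                   np.array([[0.8, 0.18, 0.02], [0.5, 0.4, 0.1]])),
    "moderate":   (np.array([0.5, 0.5]), np.array([[0.8, 0.2], [0.4, 0.6]]),
                   np.array([[0.6, 0.35, 0.05], [0.3, 0.5, 0.2]])),
    "aggressive": (np.array([0.3, 0.7]), np.array([[0.6, 0.4], [0.2, 0.8]]),
                   np.array([[0.3, 0.5, 0.2], [0.1, 0.4, 0.5]])),
}

observed = [0, 1, 1, 2, 1, 2, 2, 1, 2]       # one driver's coded braking sequence
scores = {name: forward_loglik(observed, *params) for name, params in styles.items()}
print(max(scores, key=scores.get), scores)   # pick the style with the highest log-likelihood
```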

  10. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  11. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.

  12. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  13. A probabilistic model for analysing the effect of performance levels on visual behaviour patterns of young sailors in simulated navigation.

    PubMed

    Manzanares, Aarón; Menayo, Ruperto; Segado, Francisco; Salmerón, Diego; Cano, Juan Antonio

    2015-01-01

    Visual behaviour is a determining factor in sailing because of the influence of environmental conditions. The aim of this research was to determine the visual behaviour pattern of sailors with different practice time in one start race, applying a probabilistic model based on Markov chains. The sample of this study consisted of 20 sailors, distributed in two groups, top ranking (n = 10) and bottom ranking (n = 10), all of whom competed in the Optimist Class. An automated measurement system, which integrates the VSail-Trainer sail simulator and the Eye Tracking System(TM), was used. The variables under consideration were the sequence of fixations and the fixation recurrence time performed on each location by the sailors. The event consisted of one simulated regatta start, with stable wind, competitor and sea conditions. Results show that top ranking sailors have a low recurrence time on relevant locations and a higher one on irrelevant locations, while bottom ranking sailors have a low recurrence time in most of the locations. The visual pattern of the bottom ranking sailors is focused around two visual pivots, which is not the case for the top ranking sailors' pattern. In conclusion, the Markov chain analysis made it possible to characterize and compare the visual behaviour patterns of the top and bottom ranking sailors.

  14. Hybrid stochastic simplifications for multiscale gene networks.

    PubMed

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-09-07

    Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.
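
    A minimal sketch of the kind of pure jump (exact) simulation that such hybrid schemes simplify: a Gillespie simulation of a two-reaction birth-death gene expression model; the reaction rates are illustrative.

```python
import numpy as np

def gillespie_birth_death(k_prod, k_deg, x0, t_max, rng):
    """Exact stochastic simulation of production (rate k_prod) and degradation (rate k_deg * x)."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_max:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)              # waiting time to the next jump
        x += 1 if rng.random() < rates[0] / total else -1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(7)
t, x = gillespie_birth_death(k_prod=10.0, k_deg=0.5, x0=0, t_max=100.0, rng=rng)
print(x[-1], "stationary mean should be near", 10.0 / 0.5)
```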

  15. Detection of cough signals in continuous audio recordings using hidden Markov models.

    PubMed

    Matos, Sergio; Birring, Surinder S; Pavord, Ian D; Evans, David H

    2006-06-01

    Cough is a common symptom of many respiratory diseases. The evaluation of its intensity and frequency of occurrence could provide valuable clinical information in the assessment of patients with chronic cough. In this paper we propose the use of hidden Markov models (HMMs) to automatically detect cough sounds from continuous ambulatory recordings. The recording system consists of a digital sound recorder and a microphone attached to the patient's chest. The recognition algorithm follows a keyword-spotting approach, with cough sounds representing the keywords. It was trained on 821 min selected from 10 ambulatory recordings, including 2473 manually labeled cough events, and tested on a database of nine recordings from separate patients with a total recording time of 3060 min and comprising 2155 cough events. The average detection rate was 82% at a false alarm rate of seven events/h, when considering only events above an energy threshold relative to each recording's average energy. These results suggest that HMMs can be applied to the detection of cough sounds from ambulatory patients. A postprocessing stage to perform a more detailed analysis on the detected events is under development, and could allow the rejection of some of the incorrectly detected events.

  16. Bayesian Analysis of Biogeography when the Number of Areas is Large

    PubMed Central

    Landis, Michael J.; Matzke, Nicholas J.; Moore, Brian R.; Huelsenbeck, John P.

    2013-01-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a “data-augmentation” approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea. [ancestral area analysis; Bayesian biogeographic inference; data augmentation; historical biogeography; Markov chain Monte Carlo.] PMID:23736102
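
    A minimal sketch of the basic CTMC quantity behind such analyses: finite-time transition probabilities P(t) = exp(Qt) obtained from an instantaneous-rate matrix with scipy; the three-state rate matrix is illustrative only.

```python
import numpy as np
from scipy.linalg import expm

# illustrative instantaneous-rate matrix over three discrete range states (rows sum to zero)
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.05, -0.15,  0.10],
              [ 0.02,  0.08, -0.10]])

for t in (0.5, 2.0, 10.0):
    P = expm(Q * t)                      # P[i, j] = Pr(state j at time t | state i at time 0)
    print(f"t = {t}:")
    print(P.round(3))

# expected waiting time in state i before any event is 1 / (-Q[i, i])
print("mean waiting times:", (-1.0 / np.diag(Q)).round(2))
```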

  17. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several magnitudes larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  18. Markovian Interpretations of Dual Retrieval Processes

    PubMed Central

    Gomes, C. F. A.; Nakamura, K.; Reyna, V. F.

    2013-01-01

    A half-century ago, at the dawn of the all-or-none learning era, Estes showed that finite Markov chains supply a tractable, comprehensive framework for discrete-change data of the sort that he envisioned for shifts in conditioning states in stimulus sampling theory. Shortly thereafter, such data rapidly accumulated in many spheres of human learning and animal conditioning, and Estes’ work stimulated vigorous development of Markov models to handle them. A key outcome was that the data of the workhorse paradigms of episodic memory, recognition and recall, proved to be one- and two-stage Markovian, respectively, to close approximations. Subsequently, Markov modeling of recognition and recall all but disappeared from the literature, but it is now reemerging in the wake of dual-process conceptions of episodic memory. In recall, in particular, Markov models are being used to measure two retrieval operations (direct access and reconstruction) and a slave familiarity operation. In the present paper, we develop this family of models and present the requisite machinery for fit evaluation and significance testing. Results are reviewed from selected experiments in which the recall models were used to understand dual memory processes. PMID:24948840

  19. Technical manual for basic version of the Markov chain nest productivity model (MCnest)

    EPA Science Inventory

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  20. User’s manual for basic version of MCnest Markov chain nest productivity model

    EPA Science Inventory

    The Markov Chain Nest Productivity Model (or MCnest) integrates existing toxicity information from three standardized avian toxicity tests with information on species life history and the timing of pesticide applications relative to the timing of avian breeding seasons to quantit...

  1. Modelling past land use using archaeological and pollen data

    NASA Astrophysics Data System (ADS)

    Pirzamanbein, Behnaz; Lindström, Johan; Poska, Anneli; Gaillard-Lemdahl, Marie-José

    2016-04-01

    Accurate maps of past land use are necessary for studying the impact of anthropogenic land-cover changes on climate and biodiversity. We develop a Bayesian hierarchical model to reconstruct land use using Gaussian Markov random fields. The model uses two observation sets: 1) archaeological data, representing human settlements, urbanization and agricultural findings; and 2) pollen-based estimates of the three land-cover types coniferous forest, broadleaved forest and unforested/open land. The pollen-based estimates are obtained from the REVEALS model, based on pollen counts from lakes and bogs. The model uses the sparse pollen-based estimates to reconstruct spatially continuous cover for the three land-cover types. Using the open-land component and the archaeological data, the extent of land use is reconstructed. The model is applied to three time periods over Sweden, centred around 1900 CE, and 1000 and 4000 BCE, for which both pollen-based estimates and archaeological data are available. To estimate the model parameters and land use, a block-updated Markov chain Monte Carlo (MCMC) algorithm is applied. Using the MCMC posterior samples, uncertainties in the land-use predictions are computed. Due to the lack of good historical land-use data, model results are evaluated by cross-validation. Keywords. Spatial reconstruction, Gaussian Markov random field, Fossil pollen records, Archaeological data, Human land-use, Prediction uncertainty

  2. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required for example in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space leads to adverse learning results. Therefore, we mathematically investigate the condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data and derive the necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. A comparison between MS-VECM and MS-VECMX on economic time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Wai; Ismail, Mohd Tahir; Sek, Siok-Kun

    2014-07-01

    Multivariate Markov switching models are able to provide useful information for the study of structural change, since the regime-switching model can analyze time-varying data and capture the mean and variance of the series' dependence structure. This paper investigates the effects of oil and gold prices on the stock market returns of Malaysia, Singapore, Thailand and Indonesia. Two forms of multivariate Markov switching models are used, namely the mean-adjusted heteroskedastic Markov Switching Vector Error Correction Model (MSMH-VECM) and the same model with an exogenous variable (MSMH-VECMX). These two models are used to capture the transition probabilities of the data, since real financial time series often exhibit nonlinear properties such as regime switching, cointegrating relations, and jumps or breaks over time. A comparison between the two models indicates that the MSMH-VECM model fits the time series data better than the MSMH-VECMX model. In addition, oil and gold prices were found to affect stock market changes in the four selected countries.

  4. Modelling of land use change in Indramayu District, West Java Province

    NASA Astrophysics Data System (ADS)

    Handayani, L. D. W.; Tejaningrum, M. A.; Damrah, F.

    2017-01-01

    Indramayu District has become a strategic stopover and transit area to and from the East Java region because it is crossed by the main north-coast route, a primary economic lifeline of Java Island. As part of this main economic corridor, the district has experienced rapid physical development, growing population density and intensifying community activity. This accelerated growth has raised the rate of land use change. Land use change and population activities in the coastal area reduce its carrying capacity and affect environmental quality. This research aims to analyse land use change between 2000 and 2011 in Indramayu District. Using this land use change map, we predict the land use situation in 2022 in Indramayu District. The Cellular Automata Markov (CA-Markov) method is used to create a spatial model of land use change. The results of this study are a prediction of land use in 2022 and its suitability with the Spatial Plan (RTRW). Settlement areas are predicted to continue increasing in the future, so the land designations according to the spatial plan should be maintained.
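
    As a hedged sketch of the Markov component of such a CA-Markov analysis (the cellular-automata spatial allocation step is omitted), the code below cross-tabulates two classified land-use rasters to estimate a transition matrix and projects class proportions one period ahead; the rasters and class labels are random stand-ins, not Indramayu data.

```python
# Markov step of a CA-Markov land-use analysis (sketch): estimate a transition
# matrix from two classified maps (e.g. 2000 and 2011) and project class
# proportions one more period (e.g. towards 2022). Synthetic rasters only.
import numpy as np

n_classes = 4                                     # hypothetical class labels
rng = np.random.default_rng(0)
map_t0 = rng.integers(0, n_classes, size=(100, 100))
map_t1 = rng.integers(0, n_classes, size=(100, 100))

# Cross-tabulation: counts[i, j] = number of pixels moving from class i to j.
counts = np.zeros((n_classes, n_classes))
np.add.at(counts, (map_t0.ravel(), map_t1.ravel()), 1)
P = counts / counts.sum(axis=1, keepdims=True)    # row-stochastic transitions

p_t1 = np.bincount(map_t1.ravel(), minlength=n_classes) / map_t1.size
p_t2 = p_t1 @ P                                   # projected class proportions
print("projected class proportions:", np.round(p_t2, 3))
```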

  5. Assessment of variations in control of asthma over time.

    PubMed

    Combescure, C; Chanez, P; Saint-Pierre, P; Daurès, J P; Proudhon, H; Godard, P

    2003-08-01

    Control and severity of asthma are two different but complementary concepts. The severity of asthma could influence the control over time. The aim of this study was to demonstrate this relationship. A total of 365 patients with persistent asthma (severity) were enrolled and followed-up prospectively. Data were analysed using a continuous time homogeneous Markov model of the natural history of asthma. Control of asthma was defined according to three health states, qualified as optimal, suboptimal and unacceptable control (states 1, 2 and 3). Transition forces (denoted lambda(ij) from state i to state j) and transition probabilities between control states were assessed, and the results stratified by asthma severity were compared. Models were validated by comparing expected and observed numbers of patients in the different states. Transition probabilities stabilised between 100 and 250 days, and more rapidly in patients with mild-to-moderate asthma. Patients with mild-to-moderate asthma in suboptimal or unacceptable control had a high probability of transition directly to optimal control. Patients with severe asthma had a tendency to remain in unacceptable control. A Markov model is a useful tool to model the control of asthma over time. Severity clearly modified the health states. It could be used to compare the performance of different approaches to asthma management.
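
    For a time-homogeneous continuous-time Markov model like the one described, the transition probabilities over an interval t follow from the transition intensities via the matrix exponential, P(t) = exp(Qt). The sketch below uses a made-up 3-state generator (optimal / suboptimal / unacceptable control), not the intensities fitted in the study.

```python
# Transition probabilities of a time-homogeneous CTMC: P(t) = expm(Q * t).
# The 3-state generator below (per day) is hypothetical, not the fitted
# asthma-control model; rows of Q sum to zero.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.020,  0.015,  0.005],
              [ 0.030, -0.050,  0.020],
              [ 0.010,  0.040, -0.050]])

for t in (30, 100, 250):
    P = expm(Q * t)      # P[i, j] = probability of being in state j after t days
    print(f"t = {t} days:\n{np.round(P, 3)}")
```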

  6. Swimming speed alteration in the early developmental stages of Paracentrotus lividus sea urchin as ecotoxicological endpoint.

    PubMed

    Morgana, Silvia; Gambardella, Chiara; Falugi, Carla; Pronzato, Roberto; Garaventa, Francesca; Faimali, Marco

    2016-04-01

    Behavioral endpoints have been used for decades to assess chemical impacts at concentrations unlikely to cause mortality. With recently developed techniques, it is possible to investigate the swimming behavior of several organisms under laboratory conditions. The aims of this study were: i) to assess for the first time the feasibility of swimming speed analysis of early developmental stages of the sea urchin Paracentrotus lividus with an automatic recording system; ii) to investigate any Swimming Speed Alteration (SSA) in P. lividus early stages exposed to a reference chemical; and iii) to identify the most suitable stage for the SSA test. Results show that the swimming speed of all the developmental stages was easily recorded. Swimming speed was inhibited as a function of toxicant concentration. The pluteus was the most appropriate stage for evaluating SSA in P. lividus as an ecotoxicological endpoint. Finally, the swimming of sea urchin early stages represents a sensitive endpoint to be considered in ecotoxicological investigations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Transition probabilities for general birth-death processes with applications in ecology, genetics, and evolution

    PubMed Central

    Crawford, Forrest W.; Suchard, Marc A.

    2011-01-01

    A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
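
    As a simple baseline for comparison (explicitly not the continued-fraction method developed in the paper), finite-time transition probabilities of a birth-death process can be approximated by truncating the state space and exponentiating the generator; the linear rates and the truncation level below are arbitrary.

```python
# Naive baseline (not the paper's algorithm): truncate the birth-death state
# space at n_max and compute P(t) = expm(Q * t). Rates follow a simple linear
# birth-death example, lambda_n = n*b and mu_n = n*d, chosen for illustration.
import numpy as np
from scipy.linalg import expm

def bd_transition_matrix(b, d, t, n_max=200):
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            Q[n, n + 1] = n * b        # birth rate lambda_n
        if n > 0:
            Q[n, n - 1] = n * d        # death rate mu_n
        Q[n, n] = -Q[n].sum()
    return expm(Q * t)

P = bd_transition_matrix(b=0.5, d=0.3, t=2.0)
print("P(X(2)=k | X(0)=5), k=0..10:", np.round(P[5, :11], 4))
```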

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  9. Attention switching during scene perception: how goals influence the time course of eye movements across advertisements.

    PubMed

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-06-01

    Eye movements across advertisements express a temporal pattern of bursts of, respectively, relatively short and relatively long saccades, and this pattern is systematically influenced by activated scene-perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a free-viewing condition, and a scene-learning goal (ad memorization), a scene-evaluation goal (ad appreciation), a target-learning goal (product learning), or a target-evaluation goal (product evaluation). The model reflects how attention switches between two states--local and global--expressed in saccades of shorter and longer amplitude on a spatial grid with 48 cells overlaid on the ads. During the 5- to 6-s duration of self-controlled exposure to ads in the magazine context, attention predominantly started in the local state and ended in the global state, and rapidly switched about 5 times between states. The duration of the local attention state was much longer than the duration of the global state. Goals affected the frequency of switching between attention states and the duration of the local, but not of the global, state. (c) 2008 APA, all rights reserved
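
    A minimal sketch of the kind of two-state continuous-time switching process described (local vs. global attention) is given below; dwell times are exponential, and the switching rates and exposure duration are placeholders, not the values estimated from the eye-movement data.

```python
# Two-state continuous-time Markov chain ("local" vs "global" attention) with
# exponential dwell times. Rates and the 5.5 s exposure are assumed values,
# not estimates from the study.
import numpy as np

rng = np.random.default_rng(2)
rate_out = {"local": 0.8, "global": 2.0}    # 1 / mean dwell time (per second)
other = {"local": "global", "global": "local"}

t, state, t_end, path = 0.0, "local", 5.5, []
while t < t_end:
    dwell = rng.exponential(1.0 / rate_out[state])
    path.append((state, t, min(t + dwell, t_end)))
    t += dwell
    state = other[state]

for s, t0, t1 in path:
    print(f"{s:6s} from {t0:5.2f} s to {t1:5.2f} s")
```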

  10. Finding exact constants in a Markov model of Zipf's law generation

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu.; Nikiforov, A. A.; Pismenskiy, A. A.

    2017-12-01

    According to the classical Zipf's law, word frequency is a power function of word rank with exponent -1. The objective of this work is to find the multiplicative constant in a Markov model of word generation. The case of independent letters was previously investigated with full mathematical rigour in [Bochkarev V V and Lerner E Yu 2017 International Journal of Mathematics and Mathematical Sciences Article ID 914374]. Unfortunately, the methods used in that paper cannot be generalized to the case of Markov chains. The search for the correct formulation of the Markov generalization of these results was performed using experiments with different ergodic transition probability matrices P. A combinatorial technique allowed us to take into account all words with probability greater than e^-300 in the case of 2 by 2 matrices. It was experimentally shown that the required constant equals, in the limit, the reciprocal of the conditional entropy of the rows of P, weighted by the elements of the vector π of the stationary distribution of the Markov chain.
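
    The constant described, the reciprocal of the π-weighted conditional entropy of the rows of P (i.e., of the chain's entropy rate), is easy to compute for a concrete matrix; the sketch below uses an arbitrary 2 by 2 ergodic transition matrix, not one from the paper's experiments.

```python
# Stationary distribution pi of a 2x2 transition matrix P and the pi-weighted
# row (conditional) entropy, i.e. the chain's entropy rate; the constant in
# the abstract is its reciprocal. The matrix P is arbitrary.
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

w, v = np.linalg.eig(P.T)                         # left eigenvectors of P
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()                                # stationary distribution

H = -np.sum(pi[:, None] * P * np.log(P))          # entropy rate (nats)
print("pi =", np.round(pi, 4), " entropy rate =", round(H, 4),
      " constant 1/H =", round(1.0 / H, 4))
```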

  11. Comparative study on novel test systems to determine disintegration time of orodispersible films.

    PubMed

    Preis, Maren; Gronkowsky, Dorothee; Grytzan, Dominik; Breitkreutz, Jörg

    2014-08-01

    Orodispersible films (ODFs) are a promising innovative dosage form enabling drug administration without the need for water and minimizing danger of aspiration due to their fast disintegration in small amounts of liquid. This study focuses on the development of a disintegration test system for ODFs. Two systems were developed and investigated: one provides an electronic end-point, and the other shows a transferable setup of the existing disintegration tester for orodispersible tablets. Different ODF preparations were investigated to determine the suitability of the disintegration test systems. The use of different test media and the impact of different storage conditions of ODFs on their disintegration time were additionally investigated. The experiments showed acceptable reproducibility (low deviations within sample replicates due to a clear determination of the measurement end-point). High temperatures and high humidity affected some of the investigated ODFs, resulting in higher disintegration time or even no disintegration within the tested time period. The methods provided clear end-point detection and were applicable for different types of ODFs. By the modification of a conventional test system to enable application for films, a standard method could be presented to ensure uniformity in current quality control settings. © 2014 Royal Pharmaceutical Society.

  12. Discrete time Markov chains (DTMC) susceptible infected susceptible (SIS) epidemic model with two pathogens in two patches

    NASA Astrophysics Data System (ADS)

    Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami

    2017-06-01

    The SIS epidemic model describes the pattern of disease spread with the characteristic that recovered individuals can be infected more than once. The numbers of susceptible and infected individuals over time follow a discrete time Markov process, which can be represented by a discrete time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented in terms of transition probabilities. Application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.
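
    A minimal sketch of a DTMC SIS chain for a single pathogen in a single patch is shown below (the paper's model couples two pathogens and two patches); in each small time step at most one infection or recovery occurs, and all parameter values are arbitrary.

```python
# DTMC SIS sketch, one pathogen in one patch. Per step of length dt, one new
# infection occurs with probability beta*S*I/N*dt and one recovery with
# probability gamma*I*dt (dt chosen small enough that the two probabilities
# sum to less than one). Parameters are arbitrary, not from the paper.
import numpy as np

rng = np.random.default_rng(3)
N, I = 100, 2                      # population size, initial infecteds
beta, gamma, dt = 0.3, 0.1, 0.05

trajectory = [I]
for _ in range(2000):
    S = N - I
    p_inf = beta * S * I / N * dt
    p_rec = gamma * I * dt
    u = rng.random()
    if u < p_inf:
        I += 1
    elif u < p_inf + p_rec:
        I -= 1
    trajectory.append(I)

print("final number infected:", trajectory[-1])
```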

  13. Comparison of design strategies for a three-arm clinical trial with time-to-event endpoint: Power, time-to-analysis, and operational aspects.

    PubMed

    Asikanius, Elina; Rufibach, Kaspar; Bahlo, Jasmin; Bieska, Gabriele; Burger, Hans Ulrich

    2016-11-01

    To optimize resources, randomized clinical trials with multiple arms can be an attractive option to simultaneously test various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three-arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil in patients with chronic lymphocytic lymphoma and coexisting conditions is superior to chlorambucil alone based on a time-to-event endpoint. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives to run a three-arm clinical trial with a time-to-event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time it takes until the first analysis and thus potential approval of a new drug, number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.

  15. Markov models in dentistry: application to resin-bonded bridges and review of the literature.

    PubMed

    Mahl, Dominik; Marinello, Carlo P; Sendi, Pedram

    2012-10-01

    Markov models are mathematical models that can be used to describe disease progression and evaluate the cost-effectiveness of medical interventions. Markov models allow projecting clinical and economic outcomes into the future and are therefore frequently used to estimate long-term outcomes of medical interventions. The purpose of this paper is to demonstrate their use in dentistry, using the example of resin-bonded bridges to replace missing teeth, and to review the literature. We used literature data and a four-state Markov model to project long-term outcomes of resin-bonded bridges over a time horizon of 60 years. In addition, the literature was searched in PubMed Medline for research articles on the application of Markov models in dentistry.
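
    To show the general mechanics of such a cohort-style Markov projection (with hypothetical states and transition probabilities, not the published values), the sketch below propagates a cohort through a four-state transition matrix over yearly cycles for a 60-year horizon.

```python
# Cohort Markov projection over a 60-year horizon with yearly cycles. The four
# states and the transition probabilities are placeholders for illustration,
# not the values used in the dental paper.
import numpy as np

states = ["bridge in place", "rebonded", "replaced", "tooth lost"]
P = np.array([[0.93, 0.04, 0.02, 0.01],
              [0.00, 0.90, 0.07, 0.03],
              [0.00, 0.00, 0.96, 0.04],
              [0.00, 0.00, 0.00, 1.00]])     # last state is absorbing

dist = np.array([1.0, 0.0, 0.0, 0.0])        # whole cohort starts in state 0
for _ in range(60):
    dist = dist @ P

for s, p in zip(states, dist):
    print(f"P(in state '{s}' after 60 years) = {p:.3f}")
```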

  16. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
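
    One building block mentioned in the abstract, permutation entropy, is sketched below in its single-scale form (the multi-scale version applies the same computation to coarse-grained copies of the signal); the test signals are synthetic, not bearing vibration data, and the embedding order is an arbitrary choice.

```python
# Permutation entropy of order m: count ordinal patterns of length m in the
# signal and take the normalised Shannon entropy of their frequencies. The
# multi-scale variant repeats this on coarse-grained signals. Synthetic data.
import numpy as np
from collections import Counter
from math import factorial, log

def permutation_entropy(x, m=3, delay=1):
    patterns = Counter()
    for i in range(len(x) - (m - 1) * delay):
        window = x[i:i + m * delay:delay]
        patterns[tuple(np.argsort(window))] += 1
    total = sum(patterns.values())
    probs = np.array([c / total for c in patterns.values()])
    return -np.sum(probs * np.log(probs)) / log(factorial(m))   # in [0, 1]

rng = np.random.default_rng(4)
noise = rng.standard_normal(2000)
tone = np.sin(0.2 * np.arange(2000))
print("PE(noise) =", round(permutation_entropy(noise), 3))   # close to 1
print("PE(tone)  =", round(permutation_entropy(tone), 3))    # much smaller
```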

  17. Mechanistic modelling of infrared mediated energy transfer during the primary drying step of a continuous freeze-drying process.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; De Meyer, Laurens; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas

    2017-05-01

    Conventional pharmaceutical freeze-drying is an inefficient and expensive batch-wise process, associated with several disadvantages leading to an uncontrolled end product variability. The proposed continuous alternative, based on spinning the vials during freezing and on optimal energy supply during drying, strongly increases process efficiency and improves product quality (uniformity). The heat transfer during continuous drying of the spin frozen vials is provided via non-contact infrared (IR) radiation. The energy transfer to the spin frozen vials should be optimised to maximise the drying efficiency while avoiding cake collapse. Therefore, a mechanistic model was developed which allows computing the optimal, dynamic IR heater temperature as a function of the primary drying progress and which, hence, also allows predicting the primary drying endpoint based on the applied dynamic IR heater temperature. The model was validated by drying spin frozen vials containing the model formulation (3.9 mL in 10R vials) according to the computed IR heater temperature profile. In total, 6 validation experiments were conducted. The primary drying endpoint was experimentally determined via in-line near-infrared (NIR) spectroscopy and compared with the endpoint predicted by the model (50 min). The mean ratio of the experimental drying time to the predicted value was 0.91, indicating a good agreement between the model predictions and the experimental data. The end product had an elegant product appearance (visual inspection) and an acceptable residual moisture content (Karl Fischer). Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Online kinematic regulation by visual feedback for grasp versus transport during reach-to-pinch

    PubMed Central

    Nataraj, Raviraj; Pasluosta, Cristian; Li, Zong-Ming

    2014-01-01

    Purpose This study investigated novel kinematic performance parameters to understand regulation by visual feedback (VF) of the reaching hand on the grasp and transport components during the reach-to-pinch maneuver. Conventional metrics often signify discrete movement features to postulate sensory-based control effects (e.g., time for maximum velocity to signify feedback delay). The presented metrics of this study were devised to characterize relative vision-based control of the sub-movements across the entire maneuver. Methods Movement performance was assessed according to reduced variability and increased efficiency of kinematic trajectories. Variability was calculated as the standard deviation about the observed mean trajectory for a given subject and VF condition across kinematic derivatives for sub-movements of inter-pad grasp (distance between thumb and index finger-pads; relative orientation of finger-pads) and transport (distance traversed by wrist). A Markov analysis then examined the probabilistic effect of VF on which movement component exhibited higher variability over phases of the complete maneuver. Jerk-based metrics of smoothness (minimal jerk) and energy (integrated jerk-squared) were applied to indicate total movement efficiency with VF. Results/Discussion The reductions in grasp variability metrics with VF were significantly greater (p<0.05) compared to transport for velocity, acceleration, and jerk, suggesting separate control pathways for each component. The Markov analysis indicated that VF preferentially regulates grasp over transport when continuous control is modeled probabilistically during the movement. Efficiency measures demonstrated VF to be more integral for early motor planning of grasp than transport in producing greater increases in smoothness and trajectory adjustments (i.e., jerk-energy) early compared to late in the movement cycle. Conclusions These findings demonstrate the greater regulation by VF on kinematic performance of grasp compared to transport and how particular features of this relativistic control occur continually over the maneuver. Utilizing the advanced performance metrics presented in this study facilitated characterization of VF effects continuously across the entire movement in corroborating the notion of separate control pathways for each component. PMID:24968371

  19. Multiscale models and stochastic simulation methods for computing rare but key binding events in cell biology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerrier, C.; Holcman, D., E-mail: david.holcman@ens.fr; Mathematical Institute, Oxford OX2 6GG, Newton Institute

    The main difficulty in simulating diffusion processes at a molecular level in cell microdomains is due to the multiple scales involving nano- to micrometers. Few to many particles have to be simulated and simultaneously tracked while they are exploring a large portion of the space for binding small targets, such as buffers or active sites. Bridging the small and large spatial scales is achieved by rare events representing Brownian particles finding small targets and characterized by long-time distribution. These rare events are the bottleneck of numerical simulations. A naive stochastic simulation requires running many Brownian particles together, which is computationally greedy and inefficient. Solving the associated partial differential equations is also difficult due to the time dependent boundary conditions, narrow passages and mixed boundary conditions at small windows. We present here two reduced modeling approaches for a fast computation of diffusing fluxes in microdomains. The first approach is based on Markov mass-action law equations coupled to a Markov chain. The second is a Gillespie method based on narrow escape theory for coarse-graining the geometry of the domain into Poissonian rates. The main application concerns diffusion in cellular biology, where we compute as an example the distribution of arrival times of calcium ions to small hidden targets to trigger vesicular release.
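
    To make the Gillespie ingredient concrete, here is a minimal direct-method sketch for a single bimolecular binding reaction with a Poissonian rate; in the coarse-grained approach described above, such rates stand in for the domain geometry, and the species counts and rate constant below are arbitrary.

```python
# Minimal Gillespie (direct method) sketch for one binding reaction
# A + B -> C with stochastic rate constant k. Numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(5)
A, B, C, k = 50, 30, 0, 0.01
t, t_end = 0.0, 50.0

times, counts = [0.0], [C]
while t < t_end and A > 0 and B > 0:
    a = k * A * B                      # total propensity
    t += rng.exponential(1.0 / a)      # waiting time to the next reaction
    A, B, C = A - 1, B - 1, C + 1      # fire the (only) reaction
    times.append(t)
    counts.append(C)

print(f"{counts[-1]} binding events by t = {times[-1]:.2f}")
```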

  20. Considerations in choosing a primary endpoint that measures durability of virological suppression in an antiretroviral trial.

    PubMed

    Gilbert, P B; Ribaudo, H J; Greenberg, L; Yu, G; Bosch, R J; Tierney, C; Kuritzkes, D R

    2000-09-08

    At present, many clinical trials of anti-HIV-1 therapies compare treatments by a primary endpoint that measures the durability of suppression of HIV-1 replication. Several durability endpoints are compared. Endpoints are compared by their implicit assumptions regarding surrogacy for clinical outcomes, sample size requirements, and accommodations for inter-patient differences in baseline plasma HIV-1-RNA levels and in initial treatment response. Virological failure is defined by the non-suppression of virus levels at a prespecified follow-up time T (early virological failure), or by relapse. A binary virological failure endpoint is compared with three time-to-virological failure endpoints: time from (i) randomization that assigns early failures a failure time of T weeks; (ii) randomization that extends the early failure time T for slowly responding subjects; and (iii) virological response that assigns non-responders a failure time of 0 weeks. Endpoint differences are illustrated with Agouron's trial 511. In comparing high- with low-dose nelfinavir (NFV) regimens in Agouron 511, the difference in Kaplan-Meier estimates of the proportion not failing by 24 weeks is 16.7% (P = 0.048), 6.5% (P = 0.29) and 22.9% (P = 0.0030) for endpoints (i), (ii) and (iii), respectively. The results differ because NFV suppresses virus more quickly at the higher dose, and the endpoints weigh this treatment difference differently. This illustrates that careful consideration needs to be given to choosing a primary endpoint that will detect treatment differences of interest. A time from randomization endpoint is usually recommended because of its advantages in flexibility and sample size, especially at interim analyses, and for its interpretation for patient management.

  1. Bootstrapping Least Squares Estimates in Biochemical Reaction Networks

    PubMed Central

    Linder, Daniel F.

    2015-01-01

    The paper proposes new computational methods for computing confidence bounds for the least squares estimates (LSEs) of rate constants in mass-action biochemical reaction network and stochastic epidemic models. Such LSEs are obtained by fitting the set of deterministic ordinary differential equations (ODEs), corresponding to the large volume limit of a reaction network, to the network's partially observed trajectory, treated as a continuous-time, pure jump Markov process. In the large volume limit the LSEs are asymptotically Gaussian, but their limiting covariance structure is complicated since it is described by a set of nonlinear ODEs which are often ill-conditioned and numerically unstable. The current paper considers two bootstrap Monte-Carlo procedures, based on the diffusion and linear noise approximations for pure jump processes, which allow one to avoid solving the limiting covariance ODEs. The results are illustrated with both in-silico and real data examples from the LINE 1 gene retrotranscription model and compared with those obtained using other methods. PMID:25898769

  2. Effectiveness and cost-effectiveness of knowledge transfer and behavior modification interventions in type 2 diabetes mellitus patients--the INDICA study: a cluster randomized controlled trial.

    PubMed

    Ramallo-Fariña, Yolanda; García-Pérez, Lidia; Castilla-Rodríguez, Iván; Perestelo-Pérez, Lilisbeth; Wägner, Ana María; de Pablos-Velasco, Pedro; Domínguez, Armando Carrillo; Cortés, Mauro Boronat; Vallejo-Torres, Laura; Ramírez, Marcos Estupiñán; Martín, Pablo Pedrianes; García-Puente, Ignacio; Salinero-Fort, Miguel Ángel; Serrano-Aguilar, Pedro Guillermo

    2015-04-09

    Type 2 diabetes mellitus is a chronic disease whose health outcomes are related to patients' and healthcare professionals' decision-making. The Diabetes Intervention study in the Canary Islands (INDICA study) aims to evaluate the effectiveness and cost-effectiveness of educational interventions supported by new technology decision tools for type 2 diabetes patients and primary care professionals in the Canary Islands. The INDICA study is an open, community-based, multicenter, clinical controlled trial with random allocation by clusters to one of three interventions or to usual care. The setting is primary care, where physicians and nurses are invited to participate. Patients with a diabetes diagnosis, 18-65 years of age, and regular users of mobile phones were randomly selected. Patients with severe comorbidities were excluded. The clusters are primary healthcare practices with enough professionals and available places to provide the intervention. The calculated sample size was 2,300 patients. Patients in group 1 are receiving an educational group program of eight sessions every 3 months led by trained nurses and monitored by means of logs and a web-based platform and tailored semi-automated SMS for continuous support. Primary care professionals in group 2 are receiving a short educational program to update their diabetes knowledge, which includes a decision support tool embedded into the electronic clinical record and a monthly feedback report of patients' results. Group 3 is receiving a combination of the interventions for patients and professionals. The primary endpoint is the change in HbA1c in 2 years. Secondary endpoints are cardiovascular risk factors, macrovascular and microvascular diabetes complications, quality of life, psychological outcomes, diabetes knowledge, and healthcare utilization. Data are being collected from interviews, questionnaires, clinical examinations, and records. Generalized linear mixed models with repeated time measurements will be used to analyze changes in outcomes. The cost-effectiveness analysis, from the healthcare services perspective, involves direct medical costs per quality-adjusted life year gained and two periods, a 'within-trial' period and a lifetime Markov model. Deterministic and probabilistic sensitivity analyses are planned. This ongoing trial aims to set up the implementation of evidence-based programs in the clinical setting for chronic patients. ClinicalTrials.gov NCT01657227.

  3. Markov-switching multifractal models as another class of random-energy-like models in one-dimensional space

    NASA Astrophysics Data System (ADS)

    Saakian, David B.

    2012-03-01

    We map the Markov-switching multifractal model (MSM) onto the random energy model (REM). The MSM is, like the REM, an exactly solvable model in one-dimensional space with nontrivial correlation functions. According to our results, four different statistical physics phases are possible in random walks with multifractal behavior. We also introduce the continuous branching version of the model, calculate the moments, and prove multiscaling behavior. Different phases have different multiscaling properties.

  4. Modelling Faculty Replacement Strategies Using a Time-Dependent Finite Markov-Chain Process.

    ERIC Educational Resources Information Center

    Hackett, E. Raymond; Magg, Alexander A.; Carrigan, Sarah D.

    1999-01-01

    Describes the use of a time-dependent Markov-chain model to develop faculty-replacement strategies within a college at a research university. The study suggests that a stochastic modelling approach can provide valuable insight when planning for personnel needs in the immediate (five-to-ten year) future. (MSE)

  5. Slow diffusion by Markov random flights

    NASA Astrophysics Data System (ADS)

    Kolesnik, Alexander D.

    2018-06-01

    We present a conception of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, on long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some low-dimensional Euclidean spaces are presented.

  6. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    PubMed

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of a wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by a Weibull distribution, and its cumulative distribution function is used to determine the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to build the statistical fault detection algorithm in the time domain, and the method by which the input sequence for the HMM is extracted, considering the threshold levels and the correlation between the signals, is also introduced. Finally, it was demonstrated that the proposed HMM algorithm achieved a detection success rate greater than 95% on the long-term signals.
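
    A sketch of the Weibull-based thresholding step is given below: fit a Weibull distribution to vibration-amplitude samples and read warning/alarm levels off high quantiles of the fitted CDF. The samples are simulated and the quantile levels (99% and 99.9%) are assumptions, not the study's settings.

```python
# Fit a Weibull distribution to vibration amplitudes and derive threshold
# levels as high quantiles of the fitted CDF. Simulated data; the 99% / 99.9%
# quantiles are illustrative choices, not the CMS configuration.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(6)
amplitudes = weibull_min.rvs(c=2.0, scale=1.5, size=5000, random_state=rng)

c, loc, scale = weibull_min.fit(amplitudes, floc=0)   # keep location at zero
warning = weibull_min.ppf(0.99, c, loc=loc, scale=scale)
alarm = weibull_min.ppf(0.999, c, loc=loc, scale=scale)
print(f"shape={c:.2f} scale={scale:.2f} warning={warning:.2f} alarm={alarm:.2f}")
```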

  7. Hybrid stochastic simplifications for multiscale gene networks

    PubMed Central

    Crudu, Alina; Debussche, Arnaud; Radulescu, Ovidiu

    2009-01-01

    Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene networks dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion [1-3] which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach. PMID:19735554

  8. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component) whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent-in the corresponding growth phase-both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  9. Hybrid Markov-mass action law model for cell activation by rare binding events: Application to calcium induced vesicular release at neuronal synapses.

    PubMed

    Guerrier, Claire; Holcman, David

    2016-10-18

    Binding of molecules, ions or proteins to small target sites is a generic step of cell activation. This process relies on rare stochastic events where a particle located in a large bulk has to find small and often hidden targets. We present here a hybrid discrete-continuum model that takes into account a stochastic regime governed by rare events and a continuous regime in the bulk. The rare discrete binding events are modeled by a Markov chain for the encounter of small targets by few Brownian particles, for which the arrival time is Poissonian. The large ensemble of particles is described by mass action laws. We use this novel model to predict the time distribution of vesicular release at neuronal synapses. Vesicular release is triggered by the binding of few calcium ions that can originate either from the synaptic bulk or from the entry through calcium channels. We report here that the distribution of release time is bimodal although it is triggered by a single fast action potential. While the first peak follows a stimulation, the second corresponds to the random arrival over much longer time of ions located in the synaptic terminal to small binding vesicular targets. To conclude, the present multiscale stochastic modeling approach allows studying cellular events based on integrating discrete molecular events over several time scales.

  10. Inference of epidemiological parameters from household stratified data

    PubMed Central

    Walker, James N.; Ross, Joshua V.

    2017-01-01

    We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456

  11. Integrating Decision Tree and Hidden Markov Model (HMM) for Subtype Prediction of Human Influenza A Virus

    NASA Astrophysics Data System (ADS)

    Attaluri, Pavan K.; Chen, Zhengxin; Weerakoon, Aruna M.; Lu, Guoqing

    Multiple criteria decision making (MCDM) has a significant impact in bioinformatics. In the research reported here, we explore the integration of decision trees (DT) and Hidden Markov Models (HMM) for subtype prediction of human influenza A virus. Infection with influenza viruses continues to be an important public health problem. Viral strains of subtypes H3N2 and H1N1 circulate in humans at least twice annually. Subtype detection depends mainly on the antigenic assay, which is time-consuming and not fully accurate. We have developed a Web system for accurate subtype detection of human influenza virus sequences. The preliminary experiment showed that this system is easy to use and powerful in identifying human influenza subtypes. Our next step is to examine the informative positions at the protein level and extend its current functionality to detect more subtypes. The web functions can be accessed at http://glee.ist.unomaha.edu/.

  12. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  13. Modelisation de l'historique d'operation de groupes turbine-alternateur (Modelling the operating history of turbine-generator units)

    NASA Astrophysics Data System (ADS)

    Szczota, Mickael

    Because of their ageing fleet, utility managers are increasingly in need of tools that can help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to forecast the degradation of its hydroelectric runners and to use that information to classify the generating units. This classification will help identify which generating units are most at risk of undergoing a major failure. Fatigue cracking is a predominant degradation mode, and the loading sequence applied to the runner is a parameter affecting crack growth. The aim of this thesis is therefore to build a generator able to produce synthetic loading sequences that are statistically equivalent to the observed operating history. These simulated sequences will be used as input to a life-assessment model. We first describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modelling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then compare three kinds of model: discrete-time Markov chains, discrete-time semi-Markov chains and the moving block bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov chain is not suited to our case and that semi-Markov chains perform better when they include the non-stationarity. The final choice between semi-Markov chains and the moving block bootstrap is left to the user, but with a long-term view we recommend semi-Markov chains for their flexibility. Keywords: Stochastic models, Model validation, Reliability, Semi-Markov chains, Markov chains, Bootstrap

  14. HIPPI: highly accurate protein family classification with ensembles of HMMs.

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  15. A rank test for bivariate time-to-event outcomes when one event is a surrogate

    PubMed Central

    Shaw, Pamela A.; Fay, Michael P.

    2016-01-01

    In many clinical settings, improving patient survival is of interest but a practical surrogate, such as time to disease progression, is instead used as a clinical trial’s primary endpoint. A time-to-first endpoint (e.g. death or disease progression) is commonly analyzed but may not be adequate to summarize patient outcomes if a subsequent event contains important additional information. We consider a surrogate outcome very generally, as one correlated with the true endpoint of interest. Settings of interest include those where the surrogate indicates a beneficial outcome so that the usual time-to-first endpoint of death or surrogate event is nonsensical. We present a new two-sample test for bivariate, interval-censored time-to-event data, where one endpoint is a surrogate for the second, less frequently observed endpoint of true interest. This test examines whether patient groups have equal clinical severity. If the true endpoint rarely occurs, the proposed test acts like a weighted logrank test on the surrogate; if it occurs for most individuals, then our test acts like a weighted logrank test on the true endpoint. If the surrogate is a useful statistical surrogate, our test can have better power than tests based on the surrogate that naively handle the true endpoint. In settings where the surrogate is not valid (treatment affects the surrogate but not the true endpoint), our test incorporates the information regarding the lack of treatment effect from the observed true endpoints and hence is expected to have a dampened treatment effect compared to tests based on the surrogate alone. PMID:27059817

  16. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute-force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
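
    The central idea, writing a rare-event probability as a product of more frequent conditional probabilities, can be illustrated on a toy Gaussian tail where each conditional stage can be sampled exactly; the real subset simulation method replaces the exact truncated-normal sampling below with modified Metropolis-Hastings MCMC, and the intermediate levels are arbitrary.

```python
# Toy version of the subset-simulation idea: P(X > 4) for X ~ N(0,1) written
# as a product of conditional probabilities P(X > b_{k+1} | X > b_k), each
# estimated by Monte Carlo. Conditional sampling is done exactly here with a
# truncated normal; the actual method uses modified Metropolis-Hastings MCMC.
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(7)
levels = [-np.inf, 1.0, 2.0, 3.0, 4.0]
n = 100_000

estimate = 1.0
for b_low, b_high in zip(levels[:-1], levels[1:]):
    if np.isinf(b_low):
        x = rng.standard_normal(n)                          # unconditional stage
    else:
        x = truncnorm.rvs(a=b_low, b=np.inf, size=n,
                          random_state=rng)                 # X given X > b_low
    estimate *= np.mean(x > b_high)                         # conditional factor

print("product estimate:", estimate, " exact P(X>4):", norm.sf(4.0))
```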

  17. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model structure is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to have transitions following a Markov chain process which is continuous and discoverable. This hybrid model can be used to illustrate certain macroeconomic conditions, for example the changing phases of business stages. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.

  18. Metastable Distributions of Markov Chains with Rare Transitions

    NASA Astrophysics Data System (ADS)

    Freidlin, M.; Koralov, L.

    2017-06-01

    In this paper we consider Markov chains X^ε_t with transition rates that depend on a small parameter ε. We are interested in the long-time behavior of X^ε_t at various ε-dependent time scales t = t(ε). The asymptotic behavior depends on how the point (1/ε, t(ε)) approaches infinity. We introduce a general notion of complete asymptotic regularity (a certain asymptotic relation between the ratios of transition rates), which ensures the existence of the metastable distribution for each initial point and a given time scale t(ε). The technique of i-graphs allows one to describe the metastable distribution explicitly. The result may be viewed as a generalization of the ergodic theorem to the case of parameter-dependent Markov chains.

  19. Filtering Using Nonlinear Expectations

    DTIC Science & Technology

    2016-04-16

    gives a solution to estimating a Markov chain observed in Gaussian noise when the variance of the noise is unknown. This paper is accepted for the IEEE...Optimization, an A* journal. A short third paper discusses how to estimate a change in the transition dynamics of a noisily observed Markov chain...The change point time is hidden in a hidden Markov chain, so a second level of discovery is involved. This paper is accepted for Communications in

  20. A nonstationary Markov transition model for computing the relative risk of dementia before death

    PubMed Central

    Yu, Lei; Griffith, William S.; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.

    2010-01-01

    This paper investigates the long-term behavior of the k-step transition probability matrix for a nonstationary discrete time Markov chain in the context of modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. The authors derive formulas for the following absorption statistics: (1) the relative risk of absorption between competing absorbing states, and (2) the mean and variance of the number of visits among the transient states before absorption. Since absorption is not guaranteed, sufficient conditions are discussed to ensure that the substochastic matrix associated with transitions among transient states converges to zero in limit. Results are illustrated with an application to the Nun Study, a cohort of 678 participants, 75 to 107 years of age, followed longitudinally with up to ten cognitive assessments over a fifteen-year period. PMID:20087848
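
    For a stationary absorbing chain, the quantities discussed have closed forms via the fundamental matrix: with transient-to-transient block Q and transient-to-absorbing block R, N = (I − Q)⁻¹ gives expected visit counts and B = NR the absorption probabilities. The sketch below uses hypothetical transition probabilities (and ignores the nonstationarity handled in the paper), not the Nun Study estimates.

```python
# Absorption statistics of a stationary absorbing Markov chain via the
# fundamental matrix N = (I - Q)^(-1) and B = N @ R. Transition probabilities
# are hypothetical; the paper's chain is nonstationary and uses its own data.
import numpy as np

# Transient states: 0 intact, 1 MCI, 2 global impairment.
# Absorbing states (columns of R): dementia, death.
Q = np.array([[0.80, 0.10, 0.04],
              [0.05, 0.70, 0.15],
              [0.00, 0.05, 0.70]])
R = np.array([[0.02, 0.04],
              [0.05, 0.05],
              [0.15, 0.10]])

N = np.linalg.inv(np.eye(3) - Q)     # expected visits to transient states
B = N @ R                            # absorption probabilities

print("expected visits starting intact:", np.round(N[0], 2))
print("P(dementia before death | intact):", round(B[0, 0], 3))
print("relative risk, dementia vs death:", round(B[0, 0] / B[0, 1], 2))
```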

  1. Traces of business cycles in credit-rating migrations

    PubMed Central

    Boreiko, Dmitri; Kaniovski, Serguei; Kaniovski, Yuri; Pflug, Georg

    2017-01-01

    Using migration data of a rating agency, this paper attempts to quantify the impact of macroeconomic conditions on credit-rating migrations. The migrations are modeled as a coupled Markov chain, where the macroeconomic factors are represented by unobserved tendency variables. In the simplest case, these binary random variables are static and credit-class-specific. A generalization treats tendency variables evolving as a time-homogeneous Markov chain. A more detailed analysis assumes a tendency variable for every combination of a credit class and an industry. The models are tested on a Standard and Poor’s (S&P’s) dataset. Parameters are estimated by the maximum likelihood method. According to the estimates, the investment-grade financial institutions evolve independently of the rest of the economy represented by the data. This might be an evidence of implicit too-big-to-fail bail-out guarantee policies of the regulatory authorities. PMID:28426758

  2. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674

  4. A reward semi-Markov process with memory for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

    The increasing interest in renewable energy leads scientific research to find better ways to recover as much of the available energy as possible. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (the Betz law) at a specific pitch angle and when the ratio between the output and input wind speeds is equal to 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Old turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they operate at an efficiency lower than 59.3%. New-generation wind turbines, instead, have a system that varies the pitch angle by rotating the blades. This system enables the turbines to recover the maximum energy at different wind speeds, working at the Betz limit over a range of speed ratios. An effective pitch-angle control system allows the turbine to recover energy more efficiently in transient regimes. A good stochastic model for wind speed is therefore needed, both to help optimize turbine design and to assist the control system in predicting the wind speed so that the blades can be positioned quickly and correctly. The availability of synthetic wind speed data is a powerful instrument for designers verifying the structures of wind turbines or estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. Similar work, but only for the first-order Markov chain, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model wind speed and direction is presented in [3], using two models: a first-order Markov chain with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models that are generalizations of Markov models; more precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. To address this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic wind speed time series by means of Monte Carlo simulations, and a backtesting procedure is used to compare the first and second order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
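
    As a point of reference for the semi-Markov models discussed above, the sketch below implements the simpler first-order Markov-chain generator used in references [1,2]: wind speeds are binned into classes, a transition matrix is estimated from the observed state sequence, and a synthetic series is simulated. The wind record, bin width and seed are illustrative assumptions; a semi-Markov generator would additionally draw a random sojourn time in each state.

      import numpy as np

      # First-order Markov-chain generator (the baseline of refs. [1,2]); the
      # wind record, 2 m/s binning and seed are illustrative.
      rng = np.random.default_rng(0)
      speeds = rng.weibull(2.0, 5000) * 8.0            # stand-in wind-speed record (m/s)
      edges = np.arange(0, 25, 2.0)                    # 2 m/s speed classes
      states = np.digitize(speeds, edges) - 1
      n = len(edges)

      # estimate the transition matrix from the observed state sequence
      counts = np.zeros((n, n))
      for a, b in zip(states[:-1], states[1:]):
          counts[a, b] += 1
      row = counts.sum(axis=1, keepdims=True)
      P = np.divide(counts, row, out=np.full_like(counts, 1.0 / n), where=row > 0)

      # simulate a synthetic state sequence and map states back to class mid-points
      sim = [int(states[0])]
      for _ in range(5000):
          sim.append(int(rng.choice(n, p=P[sim[-1]])))
      synthetic = edges[np.array(sim)] + 1.0
      print(synthetic[:10])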

  5. Application of Markov chain theory to ASTP natural environment launch criteria at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    To aid the planning of the Apollo Soyuz Test Program (ASTP), certain natural environment statistical relationships are presented, based on Markov theory and empirical counts. The practical results are in terms of conditional probability of favorable and unfavorable launch conditions at Kennedy Space Center (KSC). They are based upon 15 years of recorded weather data which are analyzed under a set of natural environmental launch constraints. Three specific forecasting problems were treated: (1) the length of record of past weather which is useful to a prediction; (2) the effect of persistence in runs of favorable and unfavorable conditions; and (3) the forecasting of future weather in probabilistic terms.
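
    The kind of conditional-probability and persistence statistics described above can be illustrated with a two-state (favorable/unfavorable) Markov chain; the transition probabilities in the sketch below are made up for illustration and are not the KSC estimates.

      import numpy as np

      # Two-state (favorable / unfavorable) launch-condition chain; the
      # transition probabilities are illustrative, not the KSC estimates.
      P = np.array([[0.80, 0.20],     # favorable   -> favorable, unfavorable
                    [0.35, 0.65]])    # unfavorable -> favorable, unfavorable

      # conditional probability of favorable conditions k days ahead,
      # given favorable conditions today (powers of the transition matrix)
      for k in (1, 2, 5, 10):
          print(k, np.linalg.matrix_power(P, k)[0, 0])

      # persistence: a run of favorable days is geometric, mean 1 / (1 - p_ff)
      print("mean favorable run length:", 1.0 / (1.0 - P[0, 0]))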

  6. A semi-Markov model for mitosis segmentation in time-lapse phase contrast microscopy image sequences of stem cell populations.

    PubMed

    Liu, An-An; Li, Kang; Kanade, Takeo

    2012-02-01

    We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained optimization based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences in the input image sequence that contains mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier learned from human-annotated mitotic and nonmitotic sequences to classify each candidate subsequence as a mitosis or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually-segmented mitotic sequences is utilized to reinforce the mitosis classification results, and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh as well as several other competing methods in very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation for the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal location error of 0.73 ± 1.29 frames was achieved for locating daughter cell birth events.

  7. The Salford Lung Study protocol: a pragmatic, randomised phase III real-world effectiveness trial in chronic obstructive pulmonary disease.

    PubMed

    Bakerly, Nawar Diar; Woodcock, Ashley; New, John P; Gibson, J Martin; Wu, Wei; Leather, David; Vestbo, Jørgen

    2015-09-04

    New treatments need to be evaluated in real-world clinical practice to account for co-morbidities, adherence and polypharmacy. Patients with chronic obstructive pulmonary disease (COPD), ≥ 40 years old, with exacerbation in the previous 3 years are randomised 1:1 to once-daily fluticasone furoate 100 μg/vilanterol 25 μg in a novel dry-powder inhaler versus continuing their existing therapy. The primary endpoint is the mean annual rate of COPD exacerbations; an electronic medical record allows real-time collection and monitoring of endpoint and safety data. The Salford Lung Study is the world's first pragmatic randomised controlled trial of a pre-licensed medication in COPD. Clinicaltrials.gov identifier NCT01551758.

  8. Markov modeling for the neurosurgeon: a review of the literature and an introduction to cost-effectiveness research.

    PubMed

    Wali, Arvin R; Brandel, Michael G; Santiago-Dieppa, David R; Rennert, Robert C; Steinberg, Jeffrey A; Hirshman, Brian R; Murphy, James D; Khalessi, Alexander A

    2018-05-01

    OBJECTIVE Markov modeling is a clinical research technique that allows competing medical strategies to be mathematically assessed in order to identify the optimal allocation of health care resources. The authors present a review of the recently published neurosurgical literature that employs Markov modeling and provide a conceptual framework with which to evaluate, critique, and apply the findings generated from health economics research. METHODS The PubMed online database was searched to identify neurosurgical literature published from January 2010 to December 2017 that had utilized Markov modeling for neurosurgical cost-effectiveness studies. Included articles were then assessed with regard to year of publication, subspecialty of neurosurgery, decision analytical techniques utilized, and source information for model inputs. RESULTS A total of 55 articles utilizing Markov models were identified across a broad range of neurosurgical subspecialties. Sixty-five percent of the papers were published within the past 3 years alone. The majority of models derived health transition probabilities, health utilities, and cost information from previously published studies or publicly available information. Only 62% of the studies incorporated indirect costs. Ninety-three percent of the studies performed a 1-way or 2-way sensitivity analysis, and 67% performed a probabilistic sensitivity analysis. A review of the conceptual framework of Markov modeling and an explanation of the different terminology and methodology are provided. CONCLUSIONS As neurosurgeons continue to innovate and identify novel treatment strategies for patients, Markov modeling will allow for better characterization of the impact of these interventions on a patient and societal level. The aim of this work is to equip the neurosurgical readership with the tools to better understand, critique, and apply findings produced from cost-effectiveness research.
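
    To make the terminology concrete, the sketch below is a minimal three-state Markov cohort model of the sort reviewed in the article, with illustrative (not literature-derived) transition probabilities, costs and utilities; it accumulates discounted costs and QALYs over a 30-year horizon, from which an incremental cost-effectiveness ratio between two strategies would be computed.

      import numpy as np

      # Minimal three-state Markov cohort model (Well / Disabled / Dead) with
      # illustrative yearly transition probabilities, costs and utilities.
      P = np.array([[0.85, 0.10, 0.05],
                    [0.00, 0.80, 0.20],
                    [0.00, 0.00, 1.00]])       # Dead is absorbing
      cost = np.array([1000.0, 8000.0, 0.0])   # annual cost per state
      qaly = np.array([0.90, 0.55, 0.00])      # annual utility per state
      disc = 0.03                              # annual discount rate

      state = np.array([1.0, 0.0, 0.0])        # cohort starts in Well
      total_cost = total_qaly = 0.0
      for year in range(30):
          df = 1.0 / (1.0 + disc) ** year
          total_cost += df * state @ cost
          total_qaly += df * state @ qaly
          state = state @ P                    # advance the cohort one cycle

      print(round(total_cost, 1), round(total_qaly, 2))
      # running this for two strategies gives ICER = (cost_A - cost_B) / (QALY_A - QALY_B)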

  9. Dynamic functional connectivity using state-based dynamic community structure: method and application to opioid analgesia.

    PubMed

    Robinson, Lucy F; Atlas, Lauren Y; Wager, Tor D

    2015-03-01

    We present a new method, State-based Dynamic Community Structure, that detects time-dependent community structure in networks of brain regions. Most analyses of functional connectivity assume that network behavior is static in time, or differs between task conditions with known timing. Our goal is to determine whether brain network topology remains stationary over time, or if changes in network organization occur at unknown time points. Changes in network organization may be related to shifts in neurological state, such as those associated with learning, drug uptake or experimental conditions. Using a hidden Markov stochastic blockmodel, we define a time-dependent community structure. We apply this approach to data from a functional magnetic resonance imaging experiment examining how contextual factors influence drug-induced analgesia. Results reveal that networks involved in pain, working memory, and emotion show distinct profiles of time-varying connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Observation uncertainty in reversible Markov chains.

    PubMed

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
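
    The sketch below conveys the flavor of transition-matrix uncertainty quantification under a simplifying assumption the paper does not make: with independent Dirichlet priors on the rows and no reversibility or sparsity constraint, the posterior factorizes into Dirichlet rows and can be sampled directly, and the uncertainty of an observable (here the second-largest eigenvalue modulus) follows by propagation. The paper's Gibbs sampler is the harder, constrained version of this idea.

      import numpy as np

      # Unconstrained sketch: with independent Dirichlet priors on the rows, the
      # posterior of each row given transition counts is Dirichlet(counts + 1)
      # and can be sampled directly.  The paper's Gibbs sampler additionally
      # restricts the draws to reversible and/or sparse transition matrices.
      rng = np.random.default_rng(1)
      seq = rng.integers(0, 3, 2000)              # stand-in for an observed state sequence
      K = 3
      counts = np.zeros((K, K))
      for a, b in zip(seq[:-1], seq[1:]):
          counts[a, b] += 1

      draws = np.array([[rng.dirichlet(counts[i] + 1.0) for i in range(K)]
                        for _ in range(1000)])    # (1000, K, K) posterior samples

      # propagate the uncertainty to an observable, e.g. the slowest relaxation
      # (second-largest eigenvalue modulus) of the transition matrix
      slem = np.array([np.sort(np.abs(np.linalg.eigvals(T)))[-2] for T in draws])
      print(slem.mean(), slem.std())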

  11. Offline decoding of end-point forces using neural ensembles: application to a brain-machine interface.

    PubMed

    Gupta, Rahul; Ashe, James

    2009-06-01

    Brain-machine interfaces (BMIs) hold a lot of promise for restoring some level of motor function to patients with neuronal disease or injury. Current BMI approaches fall into two broad categories--those that decode discrete properties of limb movement (such as movement direction and movement intent) and those that decode continuous variables (such as position and velocity). However, to enable the prosthetic devices to be useful for common everyday tasks, precise control of the forces applied by the end-point of the prosthesis (e.g., the hand) is also essential. Here, we used linear regression and Kalman filter methods to show that neural activity recorded from the motor cortex of the monkey during movements in a force field can be used to decode the end-point forces applied by the subject successfully and with high fidelity. Furthermore, the models exhibit some generalization to novel task conditions. We also demonstrate how the simultaneous prediction of kinematics and kinetics can be easily achieved using the same framework, without any degradation in decoding quality. Our results represent a useful extension of the current BMI technology, making dynamic control of a prosthetic device a distinct possibility in the near future.
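
    A minimal Kalman-filter force-decoding sketch in the spirit of the approach described above is given below; the dynamics, neural tuning and noise matrices are all illustrative stand-ins (in practice they would be fit to training data by linear regression), and the "neural activity" is simulated rather than recorded.

      import numpy as np

      # Minimal Kalman-filter force-decoding sketch with illustrative matrices.
      rng = np.random.default_rng(2)
      A = 0.95 * np.eye(2)                       # end-point force dynamics
      W = 0.02 * np.eye(2)                       # process noise covariance
      H = rng.normal(size=(20, 2))               # linear tuning of 20 neurons to force
      R = 0.5 * np.eye(20)                       # observation (firing-rate) noise

      # simulate one trial of true forces and noisy neural observations
      T = 200
      f = np.zeros((T, 2))
      y = np.zeros((T, 20))
      for t in range(1, T):
          f[t] = A @ f[t - 1] + rng.multivariate_normal(np.zeros(2), W)
          y[t] = H @ f[t] + rng.multivariate_normal(np.zeros(20), R)

      # standard predict/update recursion to decode the force from firing rates
      x, P = np.zeros(2), np.eye(2)
      decoded = np.zeros((T, 2))
      for t in range(T):
          x, P = A @ x, A @ P @ A.T + W                    # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
          x = x + K @ (y[t] - H @ x)                       # update with y[t]
          P = (np.eye(2) - K @ H) @ P
          decoded[t] = x

      print(np.corrcoef(decoded[:, 0], f[:, 0])[0, 1])     # decoding fidelity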

  12. Statistical evaluation of surrogate endpoints with examples from cancer clinical trials.

    PubMed

    Buyse, Marc; Molenberghs, Geert; Paoletti, Xavier; Oba, Koji; Alonso, Ariel; Van der Elst, Wim; Burzykowski, Tomasz

    2016-01-01

    A surrogate endpoint is intended to replace a clinical endpoint for the evaluation of new treatments when it can be measured more cheaply, more conveniently, more frequently, or earlier than that clinical endpoint. A surrogate endpoint is expected to predict clinical benefit, harm, or lack of these. Besides the biological plausibility of a surrogate, a quantitative assessment of the strength of evidence for surrogacy requires the demonstration of the prognostic value of the surrogate for the clinical outcome, and evidence that treatment effects on the surrogate reliably predict treatment effects on the clinical outcome. We focus on these two conditions, and outline the statistical approaches that have been proposed to assess the extent to which these conditions are fulfilled. When data are available from a single trial, one can assess the "individual level association" between the surrogate and the true endpoint. When data are available from several trials, one can additionally assess the "trial level association" between the treatment effect on the surrogate and the treatment effect on the true endpoint. In the latter case, the "surrogate threshold effect" can be estimated as the minimum effect on the surrogate endpoint that predicts a statistically significant effect on the clinical endpoint. All these concepts are discussed in the context of randomized clinical trials in oncology, and illustrated with two meta-analyses in gastric cancer. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Mechanistic modeling of insecticide risks to breeding birds in North American agroecosystems

    PubMed Central

    Etterson, Matthew; Garber, Kristina; Odenkirchen, Edward

    2017-01-01

    Insecticide usage in the United States is ubiquitous in urban, suburban, and rural environments. There is accumulating evidence that insecticides adversely affect non-target wildlife species, including birds, causing mortality, reproductive impairment, and indirect effects through loss of prey base, and the type and magnitude of such effects differs by chemical class, or mode of action. In evaluating data for an insecticide registration application and for registration review, scientists at the United States Environmental Protection Agency (USEPA) assess the fate of the insecticide and the risk the insecticide poses to the environment and non-target wildlife. Current USEPA risk assessments for pesticides generally rely on endpoints from laboratory based toxicity studies focused on groups of individuals and do not directly assess population-level endpoints. In this paper, we present a mechanistic model, which allows risk assessors to estimate the effects of insecticide exposure on the survival and seasonal productivity of birds known to forage in agricultural fields during their breeding season. This model relies on individual-based toxicity data and translates effects into endpoints meaningful at the population level (i.e., magnitude of mortality and reproductive impairment). The model was created from two existing USEPA avian risk assessment models, the Terrestrial Investigation Model (TIM v.3.0) and the Markov Chain Nest Productivity model (MCnest). The integrated TIM/MCnest model was used to assess the relative risk of 12 insecticides applied via aerial spray to control corn pests on a suite of 31 avian species known to forage in cornfields in agroecosystems of the Midwest, USA. We found extensive differences in risk to birds among insecticides, with chlorpyrifos and malathion (organophosphates) generally posing the greatest risk, and bifenthrin and λ-cyhalothrin (pyrethroids) posing the least risk. Comparative sensitivity analysis across the 31 species showed that ecological trait parameters related to the timing of breeding and reproductive output per nest attempt offered the greatest explanatory power for predicting the magnitude of risk. An important advantage of TIM/MCnest is that it allows risk assessors to rationally combine both acute (lethal) and chronic (reproductive) effects into a single unified measure of risk. PMID:28467479

  14. Mechanistic modeling of insecticide risks to breeding birds in North American agroecosystems.

    PubMed

    Etterson, Matthew; Garber, Kristina; Odenkirchen, Edward

    2017-01-01

    Insecticide usage in the United States is ubiquitous in urban, suburban, and rural environments. There is accumulating evidence that insecticides adversely affect non-target wildlife species, including birds, causing mortality, reproductive impairment, and indirect effects through loss of prey base, and the type and magnitude of such effects differs by chemical class, or mode of action. In evaluating data for an insecticide registration application and for registration review, scientists at the United States Environmental Protection Agency (USEPA) assess the fate of the insecticide and the risk the insecticide poses to the environment and non-target wildlife. Current USEPA risk assessments for pesticides generally rely on endpoints from laboratory based toxicity studies focused on groups of individuals and do not directly assess population-level endpoints. In this paper, we present a mechanistic model, which allows risk assessors to estimate the effects of insecticide exposure on the survival and seasonal productivity of birds known to forage in agricultural fields during their breeding season. This model relies on individual-based toxicity data and translates effects into endpoints meaningful at the population level (i.e., magnitude of mortality and reproductive impairment). The model was created from two existing USEPA avian risk assessment models, the Terrestrial Investigation Model (TIM v.3.0) and the Markov Chain Nest Productivity model (MCnest). The integrated TIM/MCnest model was used to assess the relative risk of 12 insecticides applied via aerial spray to control corn pests on a suite of 31 avian species known to forage in cornfields in agroecosystems of the Midwest, USA. We found extensive differences in risk to birds among insecticides, with chlorpyrifos and malathion (organophosphates) generally posing the greatest risk, and bifenthrin and λ-cyhalothrin (pyrethroids) posing the least risk. Comparative sensitivity analysis across the 31 species showed that ecological trait parameters related to the timing of breeding and reproductive output per nest attempt offered the greatest explanatory power for predicting the magnitude of risk. An important advantage of TIM/MCnest is that it allows risk assessors to rationally combine both acute (lethal) and chronic (reproductive) effects into a single unified measure of risk.

  15. 40 CFR 68.22 - Offsite consequence analysis parameters.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), Chemical Accident Prevention Provisions, Hazard Assessment, § 68.22 Offsite consequence analysis parameters: "(a) Endpoints. For analyses of offsite consequences, the following endpoints shall be..."

  16. A hidden Markov model for decoding and the analysis of replay in spike trains.

    PubMed

    Box, Marc; Jones, Matt W; Whiteley, Nick

    2016-12-01

    We present a hidden Markov model that describes variation in an animal's position associated with varying levels of activity in action potential spike trains of individual place cell neurons. The model incorporates a coarse-graining of position, which we find to be a more parsimonious description of the system than other models. We use a sequential Monte Carlo algorithm for Bayesian inference of model parameters, including the state space dimension, and we explain how to estimate position from spike train observations (decoding). We obtain greater accuracy over other methods in the conditions of high temporal resolution and small neuronal sample size. We also present a novel, model-based approach to the study of replay: the expression of spike train activity related to behaviour during times of motionlessness or sleep, thought to be integral to the consolidation of long-term memories. We demonstrate how we can detect the time, information content and compression rate of replay events in simulated and real hippocampal data recorded from rats in two different environments, and verify the correlation between the times of detected replay events and of sharp wave/ripples in the local field potential.

  17. Trends in Utilization of Surrogate Endpoints in Contemporary Cardiovascular Clinical Trials.

    PubMed

    Patel, Ravi B; Vaduganathan, Muthiah; Samman-Tahhan, Ayman; Kalogeropoulos, Andreas P; Georgiopoulou, Vasiliki V; Fonarow, Gregg C; Gheorghiade, Mihai; Butler, Javed

    2016-06-01

    Surrogate endpoints facilitate trial efficiency but are variably linked to clinical outcomes, and limited data are available exploring their utilization in cardiovascular clinical trials over time. We abstracted data regarding primary clinical, intermediate, and surrogate endpoints from all phase II to IV cardiovascular clinical trials from 2001 to 2012 published in the 8 highest Web of Science impact factor journals. Two investigators independently classified the type of primary endpoint. Of the 1,224 trials evaluated, 677 (55.3%) primary endpoints were clinical, 165 (13.5%) intermediate, and 382 (31.2%) surrogate. The relative proportions of these endpoints remained constant over time (p = 0.98). Trials using surrogate endpoints were smaller (187 vs 1,028 patients) and enrolled patients more expeditiously (1.4 vs 0.9 patients per site per month) compared with trials using clinical endpoints (p <0.001 for both comparisons). Surrogate endpoint trials were independently more likely to meet their primary endpoint compared to trials with clinical endpoints (adjusted odds ratio 1.56, 95% CI 1.05 to 2.34; p = 0.03). Rates of positive results in clinical endpoint trials have decreased over time from 66.1% in 2001-2003 to 47.2% in 2010-2012 (p = 0.001), whereas these rates have remained stable over the same period for surrogate (72.0% to 69.3%, p = 0.27) and intermediate endpoints (74.4% to 71.4%, p = 0.98). In conclusion, approximately a third of contemporary cardiovascular trials use surrogate endpoints. These trials are completed more expeditiously and are more likely to meet their primary outcomes. The overall scientific contribution of these surrogate endpoint trials requires further attention given their variable association with definitive outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    PubMed

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can inform health policy for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. A Weighted Markov Chain (WMC) method based on Markov chain theory and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly predicted percentages for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the seasonal trend and non-stationarity of the data, HES gave the most accurate predictions of the incidence rates.
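
    A rough sketch of the weighted Markov chain (WMC) idea, assuming a synthetic monthly series, a two-cluster (low/high) state definition and weights proportional to the absolute autocorrelation at each lag, is shown below; the published method's clustering and weighting details may differ.

      import numpy as np

      # Weighted-Markov-chain sketch: monthly rates are clustered into two states
      # (0 = low, 1 = high), one transition matrix is estimated per lag, and the
      # lag-specific one-step-ahead predictions are combined with weights
      # proportional to the absolute autocorrelation at each lag.  The series,
      # clustering rule and smoothing below are illustrative assumptions.
      rng = np.random.default_rng(3)
      rates = np.abs(rng.normal(2.0, 0.8, 108))            # synthetic 9-year monthly series
      states = (rates > np.median(rates)).astype(int)
      K, max_lag = 2, 3

      def trans_matrix(s, lag):
          C = np.full((K, K), 0.5)                         # light smoothing avoids empty rows
          for a, b in zip(s[:-lag], s[lag:]):
              C[a, b] += 1
          return C / C.sum(axis=1, keepdims=True)

      acf = [abs(np.corrcoef(states[:-l], states[l:])[0, 1]) for l in range(1, max_lag + 1)]
      w = np.array(acf) / sum(acf)

      # weighted probability vector for next month's state (low, high)
      pred = sum(w[l - 1] * trans_matrix(states, l)[states[-l]]
                 for l in range(1, max_lag + 1))
      print(pred)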

  19. Semi-Markov Approach to the Shipping Safety Modelling

    NASA Astrophysics Data System (ADS)

    Guze, Sambor; Smolarek, Leszek

    2012-02-01

    In this paper a navigational safety model for a ship in open waters is studied under conditions of incomplete information. The structure of semi-Markov processes is used to analyse stochastic ship safety according to the navigator’s subjective acceptance of risk. In addition, the navigator’s behaviour can be analysed using numerical simulation to estimate the probability of collision in the safety model.

  20. The detection of financial crisis using combination of volatility and markov switching models based on real output, domestic credit per GDP, and ICI indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, Etik; Setianingrum, Meganisa

    2018-05-01

    An open economic system not only makes it easier for countries to interact with one another, but also makes it easier for crises to be transmitted. The financial crises that hit Indonesia in 1997-1998 and 2008 severely impacted the economy, so a method to detect crises is required. According to Kaminsky et al. [6], crises can be detected from several financial indicators such as real output, domestic credit per Gross Domestic Product (GDP), and the Indonesia Composite Index (ICI). This research aims to determine the appropriate combination of volatility and Markov switching models to detect financial crises in Indonesia based on these indicators. A volatility model is used to capture the non-constant variance of the ARMA errors. Markov switching is a model for time series whose generating conditions change over time between regimes, called states. In this research we assume three states, namely a low-volatility state, a medium-volatility state and a high-volatility state. The data for each indicator were taken from 1990 to 2016. The results show that an MS-ARCH(3,1) model can be used to detect the financial crises that hit Indonesia in 1997-1998 and 2008 based on the real output, domestic credit per GDP, and ICI indicators.

  1. [Application of Markov model in post-marketing pharmacoeconomic evaluation of traditional Chinese medicine].

    PubMed

    Wang, Xin; Su, Xia; Sun, Wentao; Xie, Yanming; Wang, Yongyan

    2011-10-01

    In post-marketing studies of traditional Chinese medicine (TCM), pharmacoeconomic evaluation has important applied significance. However, the economic literature on TCM has been unable to fully and accurately reflect the unique overall outcomes of treatment with TCM. Given the special nature of TCM itself, we recommend that Markov models be introduced into the post-marketing pharmacoeconomic evaluation of TCM, and we explore the feasibility of applying such models. Markov models can extrapolate beyond the study time horizon, fit the effectiveness indicators of TCM, and provide a measurable comprehensive outcome. In addition, Markov modeling can promote the development of TCM quality-of-life scales and of the methodology of post-marketing pharmacoeconomic evaluation.

  2. Developing a Markov Model for Forecasting End Strength of Selected Marine Corps Reserve (SMCR) Officers

    DTIC Science & Technology

    2013-03-01

    Master's thesis by Anthony D. Licari, March 2013. Record excerpt: "...moving average (ARIMA) model because the data is not a time series. The best a manpower planner can do at this point is to make an educated assumption..."

  3. Response inhibition predicts poor antidepressant treatment response in very old depressed patients.

    PubMed

    Sneed, Joel R; Roose, Steven P; Keilp, John G; Krishnan, K Ranga Rama; Alexopoulos, George S; Sackeim, Harold A

    2007-07-01

    There have been mixed findings regarding the prognostic significance of age of onset, executive dysfunction, and hyperintensity burden on treatment outcome in late-life depression. Growth curve models were fit to data from the only 8-week, double-blind, placebo controlled trial of citalopram (20-40 mg/day) in patients aged 75 years and older with unipolar depression. Baseline assessment included Structured Clinical Interview for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) (to determine age at onset), Stroop Color-Word Test (to assess the response inhibition component of executive dysfunction), and structural magnetic resonance imaging (to determine hyperintensity burden). In the citalopram condition, patients with response inhibition (most impaired quartile) scored higher at endpoint than those without response inhibition. There were no effects for age of onset or hyperintensity load on response in the citalopram condition. In the placebo condition, patients with early-onset depression had higher depression scores at endpoint than patients with late-onset depression. Only response inhibition, a fundamental executive function, predicted poor treatment response to antidepressant medication. Although patients with response inhibition also showed deficits in reaction time, adjusting for reaction time in our final response inhibition model did not substantively change the findings.

  4. Controlled pattern imputation for sensitivity analysis of longitudinal binary and ordinal outcomes with nonignorable dropout.

    PubMed

    Tang, Yongqiang

    2018-04-30

    The controlled imputation method refers to a class of pattern mixture models that have been commonly used as sensitivity analyses of longitudinal clinical trials with nonignorable dropout in recent years. These pattern mixture models assume that participants in the experimental arm after dropout have similar response profiles to the control participants or have worse outcomes than otherwise similar participants who remain on the experimental treatment. In spite of its popularity, the controlled imputation has not been formally developed for longitudinal binary and ordinal outcomes partially due to the lack of a natural multivariate distribution for such endpoints. In this paper, we propose 2 approaches for implementing the controlled imputation for binary and ordinal data based respectively on the sequential logistic regression and the multivariate probit model. Efficient Markov chain Monte Carlo algorithms are developed for missing data imputation by using the monotone data augmentation technique for the sequential logistic regression and a parameter-expanded monotone data augmentation scheme for the multivariate probit model. We assess the performance of the proposed procedures by simulation and the analysis of a schizophrenia clinical trial and compare them with the fully conditional specification, last observation carried forward, and baseline observation carried forward imputation methods. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Classification of customer lifetime value models using Markov chain

    NASA Astrophysics Data System (ADS)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential future reward from a customer can be quantified by the customer lifetime value (CLV). Several mathematical methods exist to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to move through a set of states, with transitions between states following the Markov property. Given the states of a customer and the transition relationships between them, Markov models can be built to describe the customer's behaviour. In these models, CLV is defined as a vector whose entries give the CLV of a customer starting in each state. In this paper we classify Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states, each extension motivated by the weaknesses of the previous model. The later models are expected to better describe the real behaviour of customers in a firm.
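
    A minimal two-state version of the Markov-chain CLV calculation described above is sketched below; the retention probabilities, per-period margins and discount rate are illustrative. The CLV vector is the discounted sum of expected future margins, obtained in closed form from the transition matrix.

      import numpy as np

      # Two-state Markov-chain CLV sketch with illustrative retention
      # probabilities, per-period margins and a 10% discount rate.
      P = np.array([[0.75, 0.25],      # active   -> active, inactive
                    [0.15, 0.85]])     # inactive -> active, inactive
      R = np.array([120.0, 0.0])       # expected margin per period in each state
      d = 1.0 / 1.10                   # per-period discount factor

      # CLV vector: discounted expected margins, CLV = (I - dP)^(-1) R
      CLV = np.linalg.inv(np.eye(2) - d * P) @ R
      print(CLV)                       # entry 0 is the CLV of a customer starting "active"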

  6. Eating Quality Traits of Hanwoo longissimus dorsi Muscle as a Function of End-Point Cooking Temperature

    PubMed Central

    2016-01-01

    The interaction between carcass quality grade (QG) and end-point cooking temperature on the eating quality of Hanwoo m. longissimus was investigated. Ten (10) steers were sampled from a commercial population; carcasses with QG 1++ (n=5) and QG 1 (n=5) were chosen. Samples were cooked in an electric oven at 60 or 82℃ and compared with uncooked control samples. The pH was not affected by cooking temperature, but cooking decreased redness, and steaks cooked at 60℃ were more reddish than steaks cooked at 82℃ in both QG groups. The higher cooking temperature greatly (p<0.05) increased the cooking loss, but there was no significant interaction between cooking temperature and QG on cooking loss. Moisture was negatively correlated with temperature in both QGs, while a proportional relationship between crude fat and end-point temperature was found in QG 1++. WBSF values were significantly (p<0.05) higher for QG 1 and increased significantly (p<0.05) as the temperature increased. Increasing quality grade resulted in significantly higher (p<0.01) levels of TBARS, and cooking temperature increased TBARS content. Fatty acid composition was not altered by cooking at either temperature, and the amount of fat was not changed. The current study indicates that the eating quality of beef m. longissimus was greatly influenced by end-point temperature interacting with QG. However, the amount and composition of fat were stable regardless of end-point temperature. These results provide consumers with a reference for determining cooking conditions and intramuscular fat content. PMID:27433099

  7. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is connected with the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this choice is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added; it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break-detection method for the homogenization of climate time series, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  8. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    ERIC Educational Resources Information Center

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  9. Real-time network security situation visualization and threat assessment based on semi-Markov process

    NASA Astrophysics Data System (ADS)

    Chen, Junhua

    2013-03-01

    To cope with the large amounts of data in current sensed environments, decision aid tools should provide their understanding of situations in a time-efficient manner, so there is an increasing need for real-time network security situation awareness and threat assessment. In this study, a state transition model of vulnerabilities in the network, based on a semi-Markov process, is first proposed. Once events are triggered by an attacker's action or a system response, the current states of the vulnerabilities are known. We then calculate the transition probabilities of each vulnerability from its current state to the security-failure state. Furthermore, in order to improve the accuracy of our algorithms, we adjust the probability that a vulnerability is exploited according to the attacker's skill level. In the light of the preconditions and post-conditions of the vulnerabilities in the network, an attack graph is built to visualize the security situation in real time. Subsequently, we predict attack paths, recognize attack intentions and estimate the impact through analysis of the attack graph. These help administrators gain insight into intrusion steps, determine the security state and assess the threat. Finally, testing on a network shows that this method is reasonable and feasible, and can carry out a substantial analysis task to facilitate administrators' work.

  10. Comparison of treatment effect sizes from pivotal and postapproval trials of novel therapeutics approved by the FDA based on surrogate markers of disease: a meta-epidemiological study.

    PubMed

    Wallach, Joshua D; Ciani, Oriana; Pease, Alison M; Gonsalves, Gregg S; Krumholz, Harlan M; Taylor, Rod S; Ross, Joseph S

    2018-03-21

    The U.S. Food and Drug Administration (FDA) often approves new drugs based on trials that use surrogate markers for endpoints, which involve certain trade-offs and may risk making erroneous inferences about the medical product's actual clinical effect. This study aims to compare the treatment effects among pivotal trials supporting FDA approval of novel therapeutics based on surrogate markers of disease with those observed among postapproval trials for the same indication. We searched Drugs@FDA and PubMed to identify published randomized superiority design pivotal trials for all novel drugs initially approved by the FDA between 2005 and 2012 based on surrogate markers as primary endpoints and published postapproval trials using the same surrogate markers or patient-relevant outcomes as endpoints. Summary ratio of odds ratios (RORs) and difference between standardized mean differences (dSMDs) were used to quantify the average difference in treatment effects between pivotal and matched postapproval trials. Between 2005 and 2012, the FDA approved 88 novel drugs for 90 indications based on one or multiple pivotal trials using surrogate markers of disease. Of these, 27 novel drugs for 27 indications were approved based on pivotal trials using surrogate markers as primary endpoints that could be matched to at least one postapproval trial, for a total of 43 matches. For nine (75.0%) of the 12 matches using the same non-continuous surrogate markers as trial endpoints, pivotal trials had larger treatment effects than postapproval trials. On average, treatment effects were 50% higher (more beneficial) in the pivotal than the postapproval trials (ROR 1.5; 95% confidence interval CI 1.01-2.23). For 17 (54.8%) of the 31 matches using the same continuous surrogate markers as trial endpoints, pivotal trials had larger treatment effects than the postapproval trials. On average, there was no difference in treatment effects between pivotal and postapproval trials (dSMDs 0.01; 95% CI -0.15-0.16). Many postapproval drug trials are not directly comparable to previously published pivotal trials, particularly with respect to endpoint selection. Although treatment effects from pivotal trials supporting FDA approval of novel therapeutics based on non-continuous surrogate markers of disease are often larger than those observed among postapproval trials using surrogate markers as trial endpoints, there is no evidence of difference between pivotal and postapproval trials using continuous surrogate markers.

  11. Effect of radiofrequency radiation in cultured mammalian cells: A review.

    PubMed

    Manna, Debashri; Ghosh, Rita

    2016-01-01

    The use of mobile phone related technologies will continue to increase in the foreseeable future worldwide. This has drawn attention to the probable interaction of radiofrequency electromagnetic radiation with different biological targets. Studies have been conducted on various organisms to evaluate the alleged ill effects on health. We have therefore attempted to review work limited to in vitro cultured cells, where irradiation conditions were well controlled. Different investigators have studied varied endpoints such as DNA damage, cell cycle arrest, reactive oxygen species (ROS) formation, cellular morphology and viability to weigh the genotoxic effect of such radiation, utilizing different frequencies and dose rates under various irradiation conditions that include continuous or pulsed exposures and amplitude- or frequency-modulated waves. Cells adapt to changes in their intra- and extracellular environment from different chemical and physical stimuli through organized alterations in gene or protein expression that result in the induction of stress responses. Many studies have focused on such effects for risk estimation. Because the effects of microwave radiation on cells are often not pronounced, some investigators have combined radiofrequency radiation with other physical or chemical agents to observe whether the effects of such agents were augmented or not. Such reports on cultured cellular systems have also been included in this review. The findings from different workers reveal that effects depended on the cell type and the endpoint selected. However, contradictory findings were also observed in the same cell types with the same assay; in such cases the specific absorption rate (SAR) values were significant.

  12. Time-patterns of antibiotic exposure in poultry production--a Markov chains exploratory study of nature and consequences.

    PubMed

    Chauvin, C; Clement, C; Bruneau, M; Pommeret, D

    2007-07-16

    This article describes the use of Markov chains to explore the time-patterns of antimicrobial exposure in broiler poultry. The transition in antimicrobial exposure status (exposed/not exposed to an antimicrobial, with a distinction between exposures to the different antimicrobial classes) in extensive data collected in broiler chicken flocks from November 2003 onwards, was investigated. All Markov chains were first-order chains. Mortality rate, geographical location and slaughter semester were sources of heterogeneity between transition matrices. Transitions towards a 'no antimicrobial' exposure state were highly predominant, whatever the initial state. From a 'no antimicrobial' exposure state, the transition to beta-lactams was predominant among transitions to an antimicrobial exposure state. Transitions between antimicrobial classes were rare and variable. Switches between antimicrobial classes and repeats of a particular class were both observed. Application of Markov chains analysis to the database of the nation-wide antimicrobial resistance monitoring programme pointed out that transition probabilities between antimicrobial exposure states increased with the number of resistances in Escherichia coli strains.

  13. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    NASA Astrophysics Data System (ADS)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist decision-making for the optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for the steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with the evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
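
    The two-stage steady-state availability computation can be sketched as below, under assumed (illustrative) embedded-chain transition probabilities and mean sojourn times for a three-state operating/degraded/failed element; searching this quantity over candidate condition-monitoring intervals, for example with a genetic algorithm as in the paper, would give the optimal CBM interval.

      import numpy as np

      # Two-stage steady-state availability of a semi-Markov element with
      # illustrative states 0 = operating, 1 = degraded (condition monitored),
      # 2 = failed/under repair.  Probabilities and sojourn times are assumed.
      P = np.array([[0.0, 0.9, 0.1],        # embedded-chain transition matrix
                    [0.6, 0.0, 0.4],
                    [1.0, 0.0, 0.0]])
      mean_sojourn = np.array([400.0, 80.0, 24.0])   # hours per visit to each state
      up_states = [0, 1]

      # stage 1: stationary distribution of the embedded chain (pi = pi P)
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
      pi = pi / pi.sum()

      # stage 2: weight by mean sojourn times to get limiting availability
      A = (pi[up_states] * mean_sojourn[up_states]).sum() / (pi * mean_sojourn).sum()
      print("steady-state availability:", round(A, 4))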

  14. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays.

    PubMed

    Huang, Haiying; Du, Qiaosheng; Kang, Xibing

    2013-11-01

    In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. At first, the existence of equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee the neural networks to be globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.

  15. Global dynamics of a stochastic neuronal oscillator

    NASA Astrophysics Data System (ADS)

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  16. Global dynamics of a stochastic neuronal oscillator.

    PubMed

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  17. An individual-based approach to SIR epidemics in contact networks.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2011-08-21

    Many approaches have recently been proposed to model the spread of epidemics on networks. For instance, the Susceptible/Infected/Recovered (SIR) compartmental model has successfully been applied to different types of diseases that spread out among humans and animals. When this model is applied on a contact network, the centrality characteristics of the network play an important role in the spreading process. However, current approaches only consider an aggregate representation of the network structure, which can result in inaccurate analysis. In this paper, we propose a new individual-based SIR approach, which considers the whole description of the network structure. The individual-based approach is built on a continuous time Markov chain, and it is capable of evaluating the state probability for every individual in the network. Through mathematical analysis, we rigorously confirm the existence of an epidemic threshold below which an epidemic does not propagate in the network. We also show that the epidemic threshold is inversely proportional to the maximum eigenvalue of the network. Additionally, we study the role of the whole spectrum of the network, and determine the relationship between the maximum number of infected individuals and the set of eigenvalues and eigenvectors. To validate our approach, we analytically study the deviation with respect to the continuous time Markov chain model, and we show that the new approach is accurate for a large range of infection strength. Furthermore, we compare the new approach with the well-known heterogeneous mean field approach in the literature. Ultimately, we support our theoretical results through extensive numerical evaluations and Monte Carlo simulations. Published by Elsevier Ltd.
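
    The threshold statement above translates directly into a short computation, taking the "maximum eigenvalue of the network" to be the largest adjacency eigenvalue: spreading is expected when the effective infection strength exceeds 1/λ_max(A). The sketch below uses a random illustrative network and made-up infection and recovery rates.

      import numpy as np

      # Epidemic threshold check: spreading is expected when beta/delta exceeds
      # 1/lambda_max(A).  The contact network and rates below are illustrative.
      rng = np.random.default_rng(4)
      N, p = 200, 0.05
      A = (rng.random((N, N)) < p).astype(float)
      A = np.triu(A, 1)
      A = A + A.T                                   # undirected graph, no self-loops

      lam_max = np.linalg.eigvalsh(A)[-1]           # largest adjacency eigenvalue
      beta, delta = 0.03, 0.20                      # infection and recovery rates
      print("threshold 1/lambda_max =", 1.0 / lam_max)
      print("epidemic expected to spread:", beta / delta > 1.0 / lam_max)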

  18. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely the conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results give the state probabilities of the element and of the system without repair, with perfect and imperfect repair, and under CBM; the absorbing-set results obtained from differential equations are plotted and verified. Through forward inference, the reliability of the control unit is determined under the different maintenance modes. Finally, the weak nodes of the control unit are identified.

  19. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.

  20. Surrogate endpoints in randomized cardiovascular clinical trials.

    PubMed

    Domanski, Michael; Pocock, Stuart; Bernaud, Corine; Borer, Jeffrey; Geller, Nancy; Revkin, James; Zannad, Faiez

    2011-08-01

    Surrogate endpoints predict the occurrence and timing of a clinical endpoint of interest (CEI). Substitution of a surrogate endpoint for a CEI can dramatically reduce the time and cost necessary to complete a Phase III clinical trial. However, assurance that use of a surrogate endpoint will result in a correct conclusion regarding treatment effect on a CEI requires prior rigorous validation of the surrogate. Surrogate endpoints can also be of substantial use in Phase I and II studies to assess whether the intended therapeutic pathway is operative, thus providing assurance regarding the reasonableness of proceeding to a Phase III trial. This paper discusses the uses and validation of surrogate endpoints. © 2010 The Authors Fundamental and Clinical Pharmacology © 2010 Société Française de Pharmacologie et de Thérapeutique.

  1. Transition path theory analysis of c-Src kinase activation

    PubMed Central

    Meng, Yilin; Shukla, Diwakar; Pande, Vijay S.; Roux, Benoît

    2016-01-01

    Nonreceptor tyrosine kinases of the Src family are large multidomain allosteric proteins that are crucial to cellular signaling pathways. In a previous study, we generated a Markov state model (MSM) to simulate the activation of c-Src catalytic domain, used as a prototypical tyrosine kinase. The long-time kinetics of transition predicted by the MSM was in agreement with experimental observations. In the present study, we apply the framework of transition path theory (TPT) to the previously constructed MSM to characterize the main features of the activation pathway. The analysis indicates that the activating transition, in which the activation loop first opens up followed by an inward rotation of the αC-helix, takes place via a dense set of intermediate microstates distributed within a fairly broad “transition tube” in a multidimensional conformational subspace connecting the two end-point conformations. Multiple microstates with negligible equilibrium probabilities carry a large transition flux associated with the activating transition, which explains why extensive conformational sampling is necessary to accurately determine the kinetics of activation. Our results suggest that the combination of MSM with TPT provides an effective framework to represent conformational transitions in complex biomolecular systems. PMID:27482115
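
    For readers unfamiliar with transition path theory, the sketch below computes the basic discrete-state TPT quantities, the forward committor and the reactive flux, on a toy four-state transition matrix. The matrix, the choice of source and sink states, and the flux expression pi_i (1 - q_i) P_ij q_j are illustrative assumptions and bear no relation to the actual c-Src Markov state model.

```python
# Sketch: forward committor and reactive flux for transition path theory (TPT)
# on a small discrete-time Markov state model.  The 4-state transition matrix is
# a toy example, not the c-Src MSM from the paper.
import numpy as np

P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.10, 0.75, 0.15, 0.00],
    [0.00, 0.15, 0.75, 0.10],
    [0.00, 0.00, 0.10, 0.90],
])
A, B = [0], [3]                          # source ("inactive") and sink ("active")
inter = [i for i in range(4) if i not in A + B]

# Forward committor q: q = 0 on A, q = 1 on B, and q_i = sum_j P_ij q_j in between.
q = np.zeros(4)
q[B] = 1.0
M = np.eye(len(inter)) - P[np.ix_(inter, inter)]
rhs = P[np.ix_(inter, B)].sum(axis=1)
q[inter] = np.linalg.solve(M, rhs)

# Stationary distribution pi (left eigenvector of P for eigenvalue 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

# Reactive flux f_ij = pi_i (1 - q_i) P_ij q_j (off-diagonal only).
F = pi[:, None] * (1.0 - q)[:, None] * P * q[None, :]
np.fill_diagonal(F, 0.0)

print("forward committor q:", np.round(q, 3))
print("total reactive flux leaving A:", round(F[A, :].sum(), 5))
```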

  2. Partial splenic embolization to permit continuation of systemic chemotherapy.

    PubMed

    Luz, Jose Hugo M; Luz, Paula M; Marchiori, Edson; Rodrigues, Leonardo A; Gouveia, Hugo R; Martin, Henrique S; Faria, Igor M; Souza, Roberto R; Gil, Roberto de Almeida; Palladino, Alexandre de M; Pimenta, Karina B; de Souza, Henrique S

    2016-10-01

    Systemic chemotherapy treatments, commonly those that comprise oxaliplatin, have been linked to the appearance of distinctive liver lesions that evolve to portal hypertension, spleen enlargement, platelet sequestration, and thrombocytopenia. This outcome can interrupt treatment or force dosage reduction, decreasing the efficiency of cancer therapy. We conducted a prospective phase II study for the evaluation of partial splenic embolization in patients with thrombocytopenia that impeded systemic chemotherapy continuation. From August 2014 through July 2015, 33 patients underwent partial splenic embolization to increase platelet count and allow their return to treatment. The primary endpoint was the achievement of a thrombocyte level above 130 × 10⁹/L, and the secondary endpoints were the return to chemotherapy and toxicity. Partial splenic embolization was done 36 times in 33 patients. All patients presented gastrointestinal cancer, and colorectal malignancy was the commonest primary site. An average of 6.4 cycles of chemotherapy had been administered before splenic embolization, and the most common regimen was Folfox. Mean platelet count prior to embolization was 69 × 10⁹/L. A total of 94% of patients achieved the primary endpoint. All patients in need reinitiated treatment, and median time to chemotherapy return was 14 days. No grade 3 or above adverse events were identified. Aiming for a 50% to 70% infarction area may be sufficient to achieve success without the complications associated with more extensive infarction. Combined with the better safety profile, partial splenic embolization is an excellent option in the management of thrombocytopenia, enabling the resumption of systemic chemotherapy with minimal procedure-related morbidity. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  3. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.

  4. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.

  5. Explicit-Duration Hidden Markov Model Inference of UP-DOWN States from Continuous Signals

    PubMed Central

    McFarland, James M.; Hahn, Thomas T. G.; Mehta, Mayank R.

    2011-01-01

    Neocortical neurons show UP-DOWN state (UDS) oscillations under a variety of conditions. These UDS have been extensively studied because of the insight they can yield into the functioning of cortical networks, and their proposed role in putative memory formation. A key element in these studies is determining the precise duration and timing of the UDS. These states are typically determined from the membrane potential of one or a small number of cells, which is often not sufficient to reliably estimate the state of an ensemble of neocortical neurons. The local field potential (LFP) provides an attractive method for determining the state of a patch of cortex with high spatio-temporal resolution; however current methods for inferring UDS from LFP signals lack the robustness and flexibility to be applicable when UDS properties may vary substantially within and across experiments. Here we present an explicit-duration hidden Markov model (EDHMM) framework that is sufficiently general to allow statistically principled inference of UDS from different types of signals (membrane potential, LFP, EEG), combinations of signals (e.g., multichannel LFP recordings) and signal features over long recordings where substantial non-stationarities are present. Using cortical LFPs recorded from urethane-anesthetized mice, we demonstrate that the proposed method allows robust inference of UDS. To illustrate the flexibility of the algorithm we show that it performs well on EEG recordings as well. We then validate these results using simultaneous recordings of the LFP and membrane potential (MP) of nearby cortical neurons, showing that our method offers significant improvements over standard methods. These results could be useful for determining functional connectivity of different brain regions, as well as understanding network dynamics. PMID:21738730

  6. Markov and non-Markov processes in complex systems by the dynamical information entropy

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

    We consider Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy, correlation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-time numeral and pattern human memory, and the effect of stress on a dynamical tapping test); the random dynamics of RR-intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and the chaotic dynamics of the parameters of financial markets and ecological systems.

  7. Quantum Markov Semigroups with Unbounded Generator and Time Evolution of the Support Projection of a State

    NASA Astrophysics Data System (ADS)

    Gliouez, Souhir; Hachicha, Skander; Nasroui, Ikbel

    We characterize the support projection of a state evolving under the action of a quantum Markov semigroup with unbounded generator represented in the generalized GKSL form, and we prove a quantum version of the classical Lévy-Austin-Ornstein theorem.

  8. Two Person Zero-Sum Semi-Markov Games with Unknown Holding Times Distribution on One Side: A Discounted Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando

    This paper deals with two person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown for one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount optimal pair of strategies.

  9. Assessing the Progress and the Underlying Nature of the Flows of Doctoral and Master Degree Candidates Using Absorbing Markov Chains

    ERIC Educational Resources Information Center

    Nicholls, Miles G.

    2007-01-01

    In this paper, absorbing Markov chains are used to analyse the flows of higher degree by research candidates (doctoral and master) within an Australian faculty of business. The candidates are analysed according to whether they are full time or part time. The need for such analysis stemmed from what appeared to be a rather poor completion rate (as…
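
    A minimal sketch of the absorbing-Markov-chain machinery involved: with transient-to-transient transitions Q and transient-to-absorbing transitions R, the fundamental matrix N = (I - Q)^-1 gives expected occupancies and B = NR gives the completion and withdrawal probabilities. The states and yearly transition probabilities below are invented for illustration, not the faculty data analysed in the paper.

```python
# Sketch: absorption probabilities for candidate flows in an absorbing Markov
# chain.  Transient states: 0 = enrolled full-time, 1 = enrolled part-time.
# Absorbing states: "completed" and "withdrawn".  All probabilities are invented.
import numpy as np

Q = np.array([[0.60, 0.10],      # yearly transitions among transient states
              [0.15, 0.65]])
R = np.array([[0.20, 0.10],      # transient -> (completed, withdrawn)
              [0.05, 0.15]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities
t = N.sum(axis=1)                  # expected years until completion/withdrawal

print("P(complete | start full-time):", round(B[0, 0], 3))
print("P(complete | start part-time):", round(B[1, 0], 3))
print("expected years to absorption :", np.round(t, 2))
```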

  10. Lack of benefit from percutaneous intervention of persistently occluded infarct arteries after the acute phase of myocardial infarction is time independent: insights from Occluded Artery Trial

    PubMed Central

    Menon, Venu; Pearte, Camille A.; Buller, Christopher E.; Steg, Ph.Gabriel; Forman, Sandra A.; White, Harvey D.; Marino, Paolo N.; Katritsis, Demosthenes G.; Caramori, Paulo; Lasevitch, Ricardo; Loboz-Grudzien, Krystyna; Zurakowski, Aleksander; Lamas, Gervasio A.; Hochman, Judith S.

    2009-01-01

    Aims: The Occluded Artery Trial (OAT) (n = 2201) showed no benefit for routine percutaneous intervention (PCI) (n = 1101) over medical therapy (MED) (n = 1100) on the combined endpoint of death, myocardial infarction (MI), and class IV heart failure (congestive heart failure) in stable post-MI patients with late occluded infarct-related arteries (IRAs). We evaluated the potential for selective benefit with PCI over MED for patients enrolled early in OAT. Methods and results: We explored outcomes with PCI over MED in patients randomized to the ≤3 calendar days and ≤7 calendar days post-MI time windows. Longer times to randomization in OAT were associated with higher rates of the combined endpoint (adjusted HR 1.04/day: 99% CI 1.01–1.06; P < 0.001). The 48-month event rates for ≤3 days, ≤7 days post-MI enrolled patients were similar for PCI vs. MED for the combined and individual endpoints. There was no interaction between time to randomization defined as a continuous (P = 0.55) or categorical variable with a cut-point of 3 days (P = 0.98) or 7 days (P = 0.64) post-MI and treatment effect. Conclusion: Consistent with overall OAT findings, patients enrolled in the ≤3 day and ≤7 day post-MI time windows derived no benefit with PCI over MED with no interaction between time to randomization and treatment effect. Our findings do not support routine PCI of the occluded IRA in trial-eligible patients even in the earliest 24–72 h time window. PMID:19028780

  11. Multiscale modelling and analysis of collective decision making in swarm robotics.

    PubMed

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.

  12. On-Board Real-Time State and Fault Identification for Rovers

    NASA Technical Reports Server (NTRS)

    Washington, Richard

    2000-01-01

    For extended autonomous operation, rovers must identify potential faults to determine whether their execution needs to be halted. At the same time, rovers present particular challenges for state estimation techniques: they are subject to environmental influences that affect sensor readings during normal and anomalous operation, and the sensors fluctuate rapidly both because of noise and because of the dynamics of the rover's interaction with its environment. This paper presents MAKSI, an on-board method for state estimation and fault diagnosis that is particularly appropriate for rovers. The method is based on a combination of continuous state estimation, using Kalman filters, and discrete state estimation, using a Markov-model representation.

  13. On the mixing time in the Wang-Landau algorithm

    NASA Astrophysics Data System (ADS)

    Fadeeva, Marina; Shchur, Lev

    2018-01-01

    We present preliminary results of an investigation of the properties of the Markov random walk in the energy space generated by the Wang-Landau probability. We build the transition matrix in the energy space (TMES) using the exact density of states for the one-dimensional and two-dimensional Ising models. The spectral gap of the TMES is inversely proportional to the mixing time of the Markov chain. We estimate numerically the dependence of the mixing time on the lattice size and extract the mixing exponent.
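
    A small sketch of the quantity being estimated: for a row-stochastic transition matrix, the mixing time can be approximated by the inverse spectral gap 1/(1 - |lambda_2|). The three-state matrix below is a toy stand-in, not a TMES built from the exact Ising density of states.

```python
# Sketch: mixing-time estimate of a random walk from the spectral gap of its
# transition matrix.  The 3-state row-stochastic matrix is a toy example.
import numpy as np

T = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])

mods = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]   # |lambda_1| = 1 >= |lambda_2| >= ...
gap = 1.0 - mods[1]                                  # spectral gap
print("spectral gap        :", round(gap, 4))
print("mixing time ~ 1/gap :", round(1.0 / gap, 2))
```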

  14. Surgical gesture segmentation and recognition.

    PubMed

    Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René

    2013-01-01

    Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.

  15. Optimum random and age replacement policies for customer-demand multi-state system reliability under imperfect maintenance

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang

    2016-04-01

    This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.

  16. MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes

    USGS Publications Warehouse

    Williams, B.K.

    1988-01-01

    Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
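
    As an illustration of a value-improvement technique of the kind described above, the sketch below runs value iteration on a tiny discounted MDP. The states, actions, transition probabilities, and rewards (loosely labelled "do nothing" and "harvest") are invented and are not the mallard-management model of the report.

```python
# Sketch: value iteration for a small finite-state, finite-action, discounted
# infinite-horizon Markov decision process.  All numbers are invented.
import numpy as np

gamma = 0.95                       # discount factor
# P[a, s, s'] = transition probability under action a; R[s, a] = immediate reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0: "do nothing"
    [[0.6, 0.3, 0.1], [0.3, 0.6, 0.1], [0.2, 0.3, 0.5]],   # action 1: "harvest"
])
R = np.array([[0.0, 2.0],
              [1.0, 3.0],
              [2.0, 4.0]])

V = np.zeros(P.shape[1])
while True:
    Q = R + gamma * np.tensordot(P, V, axes=([2], [0])).T   # Q[s, a]
    V_new = Q.max(axis=1)                                   # value-improvement step
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("optimal values:", np.round(V_new, 3))
print("optimal policy (0 = do nothing, 1 = harvest):", Q.argmax(axis=1))
```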

  17. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.

  18. Plant calendar pattern based on rainfall forecast and the probability of its success in Deli Serdang regency of Indonesia

    NASA Astrophysics Data System (ADS)

    Darnius, O.; Sitorus, S.

    2018-03-01

    The objective of this study was to determine the plant calendar pattern of three types of crops, namely palawija, rice, and banana, based on rainfall in Deli Serdang Regency. In the first stage, we forecasted rainfall by using time series analysis and obtained an appropriate seasonal ARIMA(1,0,0)(1,1,1)₁₂ model. Based on the forecast results, we designed a plant calendar pattern for the three types of crops. Furthermore, the probability of success for crops following the plant calendar pattern was calculated by using a Markov process, discretizing the continuous rainfall data into three categories, namely Below Normal (BN), Normal (N), and Above Normal (AN), to form the transition probability matrix. Finally, the combination of the rainfall forecasting model and the Markov process was used to determine the pattern of cropping calendars and the probability of success for the three crops. This research used rainfall data for Deli Serdang Regency taken from the office of BMKG (Meteorology, Climatology and Geophysics Agency), Sampali Medan, Indonesia.
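
    A minimal sketch of the discretisation-and-counting step described above: a monthly rainfall series is binned into BN/N/AN terciles and the one-step transitions are tallied into a row-normalised transition probability matrix. The rainfall values and tercile cut-offs are invented; the paper uses the observed Deli Serdang series.

```python
# Sketch: building a three-category (BN / N / AN) Markov transition matrix from
# a rainfall series.  The numbers are invented for illustration.
import numpy as np

rainfall = np.array([120, 180, 240, 90, 200, 310, 150, 170, 260,
                     80, 140, 220, 300, 110, 190, 270, 130, 160])

# Discretise into Below Normal (0), Normal (1), Above Normal (2) by terciles.
lo, hi = np.quantile(rainfall, [1 / 3, 2 / 3])
states = np.digitize(rainfall, [lo, hi])

# Count one-step transitions and normalise each row (every category is visited here).
counts = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)

print("state sequence:", states)
print("transition matrix (rows/cols: BN, N, AN):")
print(np.round(transition, 2))
```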

  19. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.

  20. Earlier Endpoints Are Required for Hemorrhagic Shock Trials among Severely Injured Patients

    PubMed Central

    Fox, Erin E.; Holcomb, John B.; Wade, Charles E.; Bulger, Eileen M.; Tilley, Barbara C.

    2016-01-01

    Background: Choosing the appropriate endpoint for a trauma hemorrhage control trial can determine the likelihood of its success. Recent Phase 3 trials and observational studies have used 24-hour and/or 30-day all-cause mortality as the primary endpoint and some have not used exception from informed consent (EFIC), resulting in multiple failed trials. Five recent high-quality prospective studies among 4,064 hemorrhaging trauma patients provide new evidence to support earlier primary endpoints. Methods: The goal of this project was to determine the optimal endpoint for hemorrhage control trials using existing literature and new analyses of previously published data. Results: Recent studies among bleeding trauma patients show that hemorrhagic deaths occur rapidly, at a high rate, and in a consistent pattern. Early preventable deaths among trauma patients are largely due to hemorrhage and the median time to hemorrhagic death from admission is 2.0-2.6 hours. Approximately 85% of hemorrhagic deaths occur within 6 hours. The hourly mortality rate due to traumatic injury decreases rapidly after enrollment from 4.6% per hour at 1 hour post-enrollment to 1% per hour at 6 hours to <0.1% per hour by 9 hours and thereafter. Early primary endpoints (within 6 hours) have critically important benefits for hemorrhage control trials, including being congruent with the median time to hemorrhagic death, biologic plausibility, and enabling the use of all-cause mortality, which is definitive and objective. Conclusions: Primary endpoints should be congruent with the timing of the disease process. Therefore, if a resuscitation/hemorrhage control intervention is under study, a primary endpoint of all-cause mortality evaluated within the first 6 hours is appropriate. Before choosing the timing of the primary endpoint for a large multicenter trial, we recommend performing a Phase 2 trial under EFIC to better understand the effects of the hemorrhage control intervention and distribution of time to death. When early primary endpoints are used, patients should be monitored for multiple subsequent secondary safety endpoints, including 24-hour and 30-day all-cause mortality as well as the customary safety endpoints. PMID:28207628

  1. Reliability Analysis of the Electrical Control System of Subsea Blowout Preventers Using Markov Models

    PubMed Central

    Liu, Zengkai; Liu, Yonghong; Cai, Baoping

    2014-01-01

    Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on the Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of the failure rates of the modules and of the repair time on system reliability indices are also investigated. PMID:25409010

  2. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as these enable water managers to decide short-term releases (15-30 days), while holding water for seasonal needs (e.g., irrigation and municipal supply) and to meet end-of-the-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters being estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. The skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors. From the posterior distribution of the inflow forecasts, we also highlight different system behavior under varied global- and local-scale climatic influences from the developed BHHMM.

  3. Final analysis of survival outcomes in the phase 3 FIRST trial of up-front treatment for multiple myeloma.

    PubMed

    Facon, Thierry; Dimopoulos, Meletios A; Dispenzieri, Angela; Catalano, John V; Belch, Andrew; Cavo, Michele; Pinto, Antonello; Weisel, Katja; Ludwig, Heinz; Bahlis, Nizar J; Banos, Anne; Tiab, Mourad; Delforge, Michel; Cavenagh, Jamie D; Geraldes, Catarina; Lee, Je-Jung; Chen, Christine; Oriol, Albert; De La Rubia, Javier; White, Darrell; Binder, Daniel; Lu, Jin; Anderson, Kenneth C; Moreau, Philippe; Attal, Michel; Perrot, Aurore; Arnulf, Bertrand; Qiu, Lugui; Roussel, Murielle; Boyle, Eileen; Manier, Salomon; Mohty, Mohamad; Avet-Loiseau, Herve; Leleu, Xavier; Ervin-Haynes, Annette; Chen, Guang; Houck, Vanessa; Benboubker, Lotfi; Hulin, Cyrille

    2018-01-18

    This FIRST trial final analysis examined survival outcomes in patients with transplant-ineligible newly diagnosed multiple myeloma (NDMM) treated with lenalidomide and low-dose dexamethasone until disease progression (Rd continuous), Rd for 72 weeks (18 cycles; Rd18), or melphalan, prednisone, and thalidomide (MPT; 72 weeks). The primary endpoint was progression-free survival (PFS; primary comparison: Rd continuous vs MPT). Overall survival (OS) was a key secondary endpoint (final analysis prespecified ≥60 months' follow-up). Patients were randomized to Rd continuous (n = 535), Rd18 (n = 541), or MPT (n = 547). At a median follow-up of 67 months, PFS was significantly longer with Rd continuous vs MPT (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.59-0.79; P < .00001) and was similarly extended vs Rd18. Median OS was 10 months longer with Rd continuous vs MPT (59.1 vs 49.1 months; HR, 0.78; 95% CI, 0.67-0.92; P = .0023), and similar with Rd18 (62.3 months). In patients achieving complete or very good partial responses, Rd continuous had an ≈30-month longer median time to next treatment vs Rd18 (69.5 vs 39.9 months). Over half of all patients who received second-line treatment were given a bortezomib-based therapy. Second-line outcomes were improved in patients receiving bortezomib after Rd continuous and Rd18 vs after MPT. No new safety concerns, including risk for secondary malignancies, were observed. Treatment with Rd continuous significantly improved survival outcomes vs MPT, supporting Rd continuous as a standard of care for patients with transplant-ineligible NDMM. This trial was registered at www.clinicaltrials.gov as #NCT00689936 and EudraCT as 2007-004823-39. © 2018 by The American Society of Hematology.

  4. Final analysis of survival outcomes in the phase 3 FIRST trial of up-front treatment for multiple myeloma

    PubMed Central

    Dimopoulos, Meletios A.; Dispenzieri, Angela; Catalano, John V.; Belch, Andrew; Cavo, Michele; Pinto, Antonello; Weisel, Katja; Ludwig, Heinz; Bahlis, Nizar J.; Banos, Anne; Tiab, Mourad; Delforge, Michel; Cavenagh, Jamie D.; Geraldes, Catarina; Lee, Je-Jung; Chen, Christine; Oriol, Albert; De La Rubia, Javier; White, Darrell; Binder, Daniel; Lu, Jin; Anderson, Kenneth C.; Moreau, Philippe; Attal, Michel; Perrot, Aurore; Arnulf, Bertrand; Qiu, Lugui; Roussel, Murielle; Boyle, Eileen; Manier, Salomon; Mohty, Mohamad; Avet-Loiseau, Herve; Leleu, Xavier; Ervin-Haynes, Annette; Chen, Guang; Houck, Vanessa; Benboubker, Lotfi; Hulin, Cyrille

    2018-01-01

    This FIRST trial final analysis examined survival outcomes in patients with transplant-ineligible newly diagnosed multiple myeloma (NDMM) treated with lenalidomide and low-dose dexamethasone until disease progression (Rd continuous), Rd for 72 weeks (18 cycles; Rd18), or melphalan, prednisone, and thalidomide (MPT; 72 weeks). The primary endpoint was progression-free survival (PFS; primary comparison: Rd continuous vs MPT). Overall survival (OS) was a key secondary endpoint (final analysis prespecified ≥60 months’ follow-up). Patients were randomized to Rd continuous (n = 535), Rd18 (n = 541), or MPT (n = 547). At a median follow-up of 67 months, PFS was significantly longer with Rd continuous vs MPT (hazard ratio [HR], 0.69; 95% confidence interval [CI], 0.59-0.79; P < .00001) and was similarly extended vs Rd18. Median OS was 10 months longer with Rd continuous vs MPT (59.1 vs 49.1 months; HR, 0.78; 95% CI, 0.67-0.92; P = .0023), and similar with Rd18 (62.3 months). In patients achieving complete or very good partial responses, Rd continuous had an ≈30-month longer median time to next treatment vs Rd18 (69.5 vs 39.9 months). Over half of all patients who received second-line treatment were given a bortezomib-based therapy. Second-line outcomes were improved in patients receiving bortezomib after Rd continuous and Rd18 vs after MPT. No new safety concerns, including risk for secondary malignancies, were observed. Treatment with Rd continuous significantly improved survival outcomes vs MPT, supporting Rd continuous as a standard of care for patients with transplant-ineligible NDMM. This trial was registered at www.clinicaltrials.gov as #NCT00689936 and EudraCT as 2007-004823-39. PMID:29150421

  5. Medical imaging feasibility in body fluids using Markov chains

    NASA Astrophysics Data System (ADS)

    Kavehrad, M.; Armstrong, A. D.

    2017-02-01

    A relatively wide field of view and high-resolution imaging are necessary for navigating the scope within the body, inspecting tissue, diagnosing disease, and guiding surgical interventions. As the large number of modes available in multimode fibers (MMF) provides higher resolution, MMFs could replace the millimeters-thick bundles of fibers and lenses currently used in endoscopes. However, attributes of body fluids and obscurants such as blood impose perennial limitations on the resolution and reliability of optical imaging inside the human body. To design and evaluate optimum imaging techniques that operate under realistic body-fluid conditions, a good understanding of the channel (medium) behavior is necessary. In most prior works, the Monte-Carlo Ray Tracing (MCRT) algorithm has been used to analyze the channel behavior. This task is quite numerically intensive. The focus of this paper is on investigating the possibility of simplifying this task by a direct extraction of state transition matrices associated with standard Markov modeling from the MCRT computer simulation programs. We show that by tracing a photon's trajectory in the body fluids via a Markov chain model, the angular distribution can be calculated by simple matrix multiplications. We also demonstrate that the new approach produces results that are close to those obtained by MCRT and other known methods. Furthermore, considering the fact that angular, spatial, and temporal distributions of energy are inter-related, the mixing time of the Monte-Carlo Markov Chain (MCMC) for different types of liquid concentrations is calculated based on eigen-analysis of the state transition matrix, and the possibility of imaging in scattering media is investigated. To this end, we have started to characterize the body fluids that reduce the resolution of imaging [1].
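
    A hedged sketch of the matrix-multiplication idea: given a (toy, shift-invariant) scattering transition matrix over discretised angles, the angular distribution after n scattering events is p0 P^n, and a mixing-time estimate follows from the second-largest eigenvalue modulus. The forward-peaked kernel below is an assumption, not a matrix extracted from MCRT runs.

```python
# Sketch: angular spreading of photons as a Markov chain over discretised angles,
# plus a mixing-time estimate from the spectral gap.  The kernel is invented.
import numpy as np

n_angles = 8
# Forward-peaked scattering kernel: most probability stays near the current angle.
kernel = np.array([0.6, 0.15, 0.05, 0.0, 0.0, 0.0, 0.05, 0.15])
P = np.array([np.roll(kernel, shift) for shift in range(n_angles)])  # circulant, row-stochastic

p0 = np.zeros(n_angles)
p0[0] = 1.0                                    # initially collimated beam

for n_scatters in (1, 5, 20):
    print(f"after {n_scatters:2d} scatterings:",
          np.round(p0 @ np.linalg.matrix_power(P, n_scatters), 3))

mods = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("mixing time ~ 1/(1 - |lambda_2|):", round(1.0 / (1.0 - mods[1]), 2))
```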

  6. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch to batch variation in process conditions and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes by utilizing neural networks. This strategy involves the training of two neural networks; one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin's approach and the nonlinear programming approach. The efficacy of this scheme is illustrated via simulation of a fed-batch fermentation.

  7. Worst error performance of continuous Kalman filters. [for deep space navigation and maneuvers

    NASA Technical Reports Server (NTRS)

    Nishimura, T.

    1975-01-01

    The worst error performance of estimation filters for continuous systems is investigated in this paper. This pathological performance study, which assumes no dynamical model (such as a Markov process) for the perturbations other than a bound on their amplitude, gives practical and dependable criteria for establishing the navigation and maneuver strategy in deep space missions.

  8. Injury Rates in Age-Only Versus Age-and-Weight Playing Standard Conditions in American Youth Football

    PubMed Central

    Kerr, Zachary Y.; Marshall, Stephen W.; Simon, Janet E.; Hayden, Ross; Snook, Erin M.; Dodge, Thomas; Gallo, Joseph A.; Valovich McLeod, Tamara C.; Mensch, James; Murphy, Joseph M.; Nittoli, Vincent C.; Dompier, Thomas P.; Ragan, Brian; Yeargin, Susan W.; Parsons, John T.

    2015-01-01

    Background: American youth football leagues are typically structured using either age-only (AO) or age-and-weight (AW) playing standard conditions. These playing standard conditions group players by age in the former condition and by a combination of age and weight in the latter condition. However, no study has systematically compared injury risk between these 2 playing standards. Purpose: To compare injury rates between youth tackle football players in the AO and AW playing standard conditions. Study Design: Cohort study; Level of evidence, 2. Methods: Athletic trainers evaluated and recorded injuries at each practice and game during the 2012 and 2013 football seasons. Players (age, 5-14 years) were drawn from 13 recreational leagues across 6 states. The sample included 4092 athlete-seasons (AW, 2065; AO, 2027) from 210 teams (AW, 106; AO, 104). Injury rate ratios (RRs) with 95% CIs were used to compare the playing standard conditions. Multivariate Poisson regression was used to estimate RRs adjusted for residual effects of age and clustering by team and league. There were 4 endpoints of interest: (1) any injury, (2) non–time loss (NTL) injuries only, (3) time loss (TL) injuries only, and (4) concussions only. Results: Over 2 seasons, the cohort accumulated 1475 injuries and 142,536 athlete-exposures (AEs). The most common injuries were contusions (34.4%), ligament sprains (16.3%), concussions (9.6%), and muscle strains (7.8%). The overall injury rate for both playing standard conditions combined was 10.3 per 1000 AEs (95% CI, 9.8-10.9). The TL injury, NTL injury, and concussion rates in both playing standard conditions combined were 3.1, 7.2, and 1.0 per 1000 AEs, respectively. In multivariate Poisson regression models controlling for age, team, and league, no differences were found between playing standard conditions in the overall injury rate (RR_overall, 1.1; 95% CI, 0.4-2.6). Rates for the other 3 endpoints were also similar (RR_NTL, 1.1 [95% CI, 0.4-3.0]; RR_TL, 0.9 [95% CI, 0.4-1.9]; RR_concussion, 0.6 [95% CI, 0.3-1.4]). Conclusion: For the injury endpoints examined in this study, the injury rates were similar in the AO and AW playing standards. Future research should examine other policies, rules, and behavioral factors that may affect injury risk within youth football. PMID:26672778

  9. Discrete-time Markovian stochastic Petri nets

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco

    1995-01-01

    We revisit and extend the original definition of discrete-time stochastic Petri nets, by allowing the firing times to have a 'defective discrete phase distribution'. We show that this formalism still corresponds to an underlying discrete-time Markov chain. The structure of the state for this process describes both the marking of the Petri net and the phase of the firing time for each transition, resulting in a large state space. We then modify the well-known power method to perform a transient analysis even when the state space is infinite, subject to the condition that only a finite number of states can be reached in a finite amount of time. Since the memory requirements might still be excessive, we suggest a bounding technique based on truncation.
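
    A small sketch of transient analysis by the power method, with a crude truncation of negligible-probability states standing in for the bounding technique mentioned above. The four-state chain is a toy random walk, not the marking process of a stochastic Petri net.

```python
# Sketch: transient distribution of a discrete-time Markov chain via the power
# method, optionally zeroing (and renormalising) negligible-probability states.
import numpy as np

def transient_distribution(P, p0, n_steps, tol=0.0):
    """Return p0 @ P^n_steps; entries below tol are truncated to zero."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_steps):
        p = p @ P
        if tol > 0.0:
            p[p < tol] = 0.0           # crude stand-in for state-space bounding
            p /= p.sum()
    return p

P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.5, 0.5],
])
p0 = np.array([1.0, 0.0, 0.0, 0.0])

print("p(10), exact power method:", np.round(transient_distribution(P, p0, 10), 4))
print("p(10), with truncation   :", np.round(transient_distribution(P, p0, 10, tol=1e-3), 4))
```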

  10. Application of process analytical technology for monitoring freeze-drying of an amorphous protein formulation: use of complementary tools for real-time product temperature measurements and endpoint detection.

    PubMed

    Schneid, Stefan C; Johnson, Robert E; Lewis, Lavinia M; Stärtzel, Peter; Gieseler, Henning

    2015-05-01

    Process analytical technology (PAT) and quality by design have gained importance in all areas of pharmaceutical development and manufacturing. One important method for monitoring of critical product attributes and process optimization in laboratory-scale freeze-drying is manometric temperature measurement (MTM). A drawback of this innovative technology is that problems are encountered when processing highly concentrated amorphous materials, particularly protein formulations. In this study, a model solution of bovine serum albumin and sucrose was lyophilized at both conservative and aggressive primary drying conditions. Different temperature sensors were employed to monitor product temperatures. The residual moisture content at the primary drying endpoints indicated by the temperature sensors and the batch PAT methods was quantified from extracted sample vials. The data from the temperature probes were then used to recalculate critical product parameters, and the results were compared with MTM data. The drying endpoints indicated by the temperature sensors were not suitable for endpoint determination, in contrast to the endpoints from the batch methods. The accuracy of the MTM P_ice data was found to be influenced by water reabsorption. Recalculation of R_p and P_ice values based on data from temperature sensors and weighed vials was possible. Overall, extensive information about critical product parameters could be obtained using data from complementary PAT tools. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  11. Deterministic and stochastic models for middle east respiratory syndrome (MERS)

    NASA Astrophysics Data System (ADS)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between an infected individual and a non-infected individual, or indirectly, through objects contaminated by the free virus. It is suspected that MERS can spread quickly because of the free virus in the environment. Mathematical modeling is used to illustrate the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and to analyze the steady-state condition. The stochastic model, formulated as a Continuous Time Markov Chain (CTMC), is used to predict future states by using random variables. For the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
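
    The sketch below illustrates the CTMC side of such a model with a bare-bones Gillespie-style simulation of an SIR-type outbreak and an empirical estimate of the extinction probability. The rates, population size, early-extinction threshold, and the omission of the free-virus compartment are illustrative simplifications, not the MERS model of the paper.

```python
# Sketch: continuous-time Markov chain (Gillespie-style) simulation of a simple
# SIR-type outbreak, estimating the probability that the outbreak dies out early.
import numpy as np

rng = np.random.default_rng(0)
beta, gamma_rate, N = 0.4, 0.2, 200       # infection rate, recovery rate, population

def outbreak_dies_out(i0=1, threshold=20):
    """Simulate one path of the embedded jump chain; True if I hits 0 before threshold."""
    S, I = N - i0, i0
    while 0 < I < threshold:
        rate_inf = beta * S * I / N
        rate_rec = gamma_rate * I
        # Waiting times are exponential, but only the event type matters here.
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1           # infection event
        else:
            I -= 1                        # recovery event
    return I == 0

n_runs = 2000
p_ext = np.mean([outbreak_dies_out() for _ in range(n_runs)])
print("estimated extinction probability  :", round(p_ext, 3))
print("branching-process guess gamma/beta:", round(gamma_rate / beta, 3))
```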

  12. A queueing theory based model for business continuity in hospitals.

    PubMed

    Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R

    2013-01-01

    Clinical activities can be seen as the result of a precisely defined succession of events, where every phase is characterized by a waiting time that includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load alone is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used for evaluating the possible interventions and to protect the whole system from technology failures. The following paper reports a case study on the application of the proposed integrated model, including a risk analysis approach and a queuing theory model, for defining the proper number of devices that are essential to guarantee medical activity and comply with the business continuity management requirements in hospitals.

  13. Spatial generalised linear mixed models based on distances.

    PubMed

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time. © The Author(s) 2013.

  14. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.

  15. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780
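
    For contrast with the simulation-free algorithm described above, the sketch below estimates the mean and variance of two mapping summaries on a single branch by the naive route: rejection-sampling CTMC paths conditioned on the branch endpoints. The two-state rate matrix, branch length, and endpoint states are illustrative assumptions.

```python
# Sketch: Monte Carlo estimates of the mean and variance of two stochastic
# mapping summaries (substitution count, dwelling time in state 0) on one branch,
# using rejection sampling of endpoint-conditioned CTMC paths.  Toy numbers only.
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])             # toy 2-state rate matrix
t_branch, start, end = 1.0, 0, 1         # branch length and observed endpoint states

def sample_conditioned_path():
    """Simulate forward from `start`; accept only paths ending in `end`."""
    while True:
        state, t, jumps, dwell0 = start, 0.0, 0, 0.0
        while True:
            wait = rng.exponential(1.0 / -Q[state, state])
            if t + wait >= t_branch:
                dwell0 += (t_branch - t) if state == 0 else 0.0
                break
            dwell0 += wait if state == 0 else 0.0
            t += wait
            state = 1 - state            # two states, so jump to the other one
            jumps += 1
        if state == end:
            return jumps, dwell0

samples = np.array([sample_conditioned_path() for _ in range(20000)])
print("substitution count: mean %.3f, var %.3f" % (samples[:, 0].mean(), samples[:, 0].var()))
print("dwelling time in 0: mean %.3f, var %.3f" % (samples[:, 1].mean(), samples[:, 1].var()))
```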

  16. Inhibition of the cluster of differentiation 14 innate immunity pathway with IAXO-101 improves chronic microelectrode performance

    NASA Astrophysics Data System (ADS)

    Hermann, John K.; Ravikumar, Madhumitha; Shoffstall, Andrew J.; Ereifej, Evon S.; Kovach, Kyle M.; Chang, Jeremy; Soffer, Arielle; Wong, Chun; Srivastava, Vishnupriya; Smith, Patrick; Protasiewicz, Grace; Jiang, Jingle; Selkirk, Stephen M.; Miller, Robert H.; Sidik, Steven; Ziats, Nicholas P.; Taylor, Dawn M.; Capadona, Jeffrey R.

    2018-04-01

    Objective. Neuroinflammatory mechanisms are hypothesized to contribute to intracortical microelectrode failures. The cluster of differentiation 14 (CD14) molecule is an innate immunity receptor involved in the recognition of pathogens and tissue damage to promote inflammation. The goal of the study was to investigate the effect of CD14 inhibition on intracortical microelectrode recording performance and tissue integration. Approach. Mice implanted with intracortical microelectrodes in the motor cortex underwent electrophysiological characterization for 16 weeks, followed by endpoint histology. Three conditions were examined: (1) wildtype control mice, (2) knockout mice lacking CD14, and (3) wildtype control mice administered a small molecule inhibitor to CD14 called IAXO-101. Main results. The CD14 knockout mice exhibited acute but not chronic improvements in intracortical microelectrode performance without significant differences in endpoint histology. Mice receiving IAXO-101 exhibited significant improvements in recording performance over the entire 16 week duration without significant differences in endpoint histology. Significance. Full removal of CD14 is beneficial at acute time ranges, but limited CD14 signaling is beneficial at chronic time ranges. Innate immunity receptor inhibition strategies have the potential to improve long-term intracortical microelectrode performance.

  17. Extent and Degree of Shoreline Oiling: Deepwater Horizon Oil Spill, Gulf of Mexico, USA

    PubMed Central

    Michel, Jacqueline; Owens, Edward H.; Zengel, Scott; Graham, Andrew; Nixon, Zachary; Allard, Teresa; Holton, William; Reimer, P. Doug; Lamarche, Alain; White, Mark; Rutherford, Nicolle; Childs, Carl; Mauseth, Gary; Challenger, Greg; Taylor, Elliott

    2013-01-01

    The oil from the 2010 Deepwater Horizon spill in the Gulf of Mexico was documented by shoreline assessment teams as stranding on 1,773 km of shoreline. Beaches comprised 50.8%, marshes 44.9%, and other shoreline types 4.3% of the oiled shoreline. Shoreline cleanup activities were authorized on 660 km, or 73.3% of oiled beaches and up to 71 km, or 8.9% of oiled marshes and associated habitats. One year after the spill began, oil remained on 847 km; two years later, oil remained on 687 km, though at much lesser degrees of oiling. For example, shorelines characterized as heavily oiled went from a maximum of 360 km, to 22.4 km one year later, and to 6.4 km two years later. Shoreline cleanup has been conducted to meet habitat-specific cleanup endpoints and will continue until all oiled shoreline segments meet endpoints. The entire shoreline cleanup program has been managed under the Shoreline Cleanup Assessment Technique (SCAT) Program, which is a systematic, objective, and inclusive process to collect data on shoreline oiling conditions and support decision making on appropriate cleanup methods and endpoints. It was a particularly valuable and effective process during such a complex spill. PMID:23776444

  18. An investigation into the two-stage meta-analytic copula modelling approach for evaluating time-to-event surrogate endpoints which comprise of one or more events of interest.

    PubMed

    Dimier, Natalie; Todd, Susan

    2017-09-01

    Clinical trials of experimental treatments must be designed with primary endpoints that directly measure clinical benefit for patients. In many disease areas, the recognised gold standard primary endpoint can take many years to mature, leading to challenges in the conduct and quality of clinical studies. There is increasing interest in using shorter-term surrogate endpoints as substitutes for costly long-term clinical trial endpoints; such surrogates need to be selected according to biological plausibility, as well as the ability to reliably predict the unobserved treatment effect on the long-term endpoint. A number of statistical methods to evaluate this prediction have been proposed; this paper uses a simulation study to explore one such method in the context of time-to-event surrogates for a time-to-event true endpoint. This two-stage meta-analytic copula method has been extensively studied for time-to-event surrogate endpoints with one event of interest, but thus far has not been explored for the assessment of surrogates which have multiple events of interest, such as those incorporating information directly from the true clinical endpoint. We assess the sensitivity of the method to various factors including strength of association between endpoints, the quantity of data available, and the effect of censoring. In particular, we consider scenarios where there exist very little data on which to assess surrogacy. Results show that the two-stage meta-analytic copula method performs well under certain circumstances and could be considered useful in practice, but demonstrates limitations that may prevent universal use. Copyright © 2017 John Wiley & Sons, Ltd.

  19. The intermediate endpoint effect in logistic and probit regression

    PubMed Central

    MacKinnon, DP; Lockwood, CM; Brown, CH; Wang, W; Hoffman, JM

    2010-01-01

    Background: An intermediate endpoint is hypothesized to be in the middle of the causal sequence relating an independent variable to a dependent variable. The intermediate variable is also called a surrogate or mediating variable and the corresponding effect is called the mediated, surrogate endpoint, or intermediate endpoint effect. Clinical studies are often designed to change an intermediate or surrogate endpoint and through this intermediate change influence the ultimate endpoint. In many intermediate endpoint clinical studies the dependent variable is binary, and logistic or probit regression is used. Purpose: The purpose of this study is to describe a limitation of a widely used approach to assessing intermediate endpoint effects and to propose an alternative method, based on products of coefficients, that yields more accurate results. Methods: The intermediate endpoint model for a binary outcome is described for a true binary outcome and for a dichotomization of a latent continuous outcome. Plots of true values and a simulation study are used to evaluate the different methods. Results: Distorted estimates of the intermediate endpoint effect and incorrect conclusions can result from the application of widely used methods to assess the intermediate endpoint effect. The same problem occurs for the proportion of an effect explained by an intermediate endpoint, which has been suggested as a useful measure for identifying intermediate endpoints. A solution to this problem is given based on the relationship between latent variable modeling and logistic or probit regression. Limitations: More complicated intermediate variable models are not addressed in the study, although the methods described in the article can be extended to these more complicated models. Conclusions: Researchers are encouraged to use an intermediate endpoint method based on the product of regression coefficients. A common method based on the difference in coefficients can lead to distorted conclusions regarding the intermediate effect. PMID:17942466
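
    A hedged sketch of the product-of-coefficients idea on simulated data: path a (treatment to intermediate endpoint) is estimated by linear regression, path b (intermediate endpoint to binary outcome, adjusting for treatment) by probit regression, and the mediated effect by their product a*b. The simulated effect sizes and the use of statsmodels are illustrative choices, not the paper's own analysis.

```python
# Sketch: product-of-coefficients estimate of an intermediate endpoint (mediated)
# effect with a binary outcome obtained by dichotomising a latent variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
treatment = rng.integers(0, 2, n)                        # randomised 0/1
mediator = 0.5 * treatment + rng.normal(size=n)          # true path a = 0.5
latent = 0.7 * mediator + 0.1 * treatment + rng.normal(size=n)
outcome = (latent > 0).astype(int)                       # dichotomised latent outcome

# Path a: treatment -> intermediate endpoint (linear regression).
a_hat = sm.OLS(mediator, sm.add_constant(treatment)).fit().params[1]

# Path b: intermediate endpoint -> outcome, adjusting for treatment (probit).
X = sm.add_constant(np.column_stack([mediator, treatment]))
b_hat = sm.Probit(outcome, X).fit(disp=0).params[1]

print("product-of-coefficients mediated effect a*b:", round(a_hat * b_hat, 3))
print("true latent-scale mediated effect 0.5 * 0.7:", 0.35)
```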

  20. Modeling the coupled return-spread high frequency dynamics of large tick assets

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and bid-ask spread is almost always equal to one tick, display a dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models based on Nasdaq stocks and we show that this model reproduces remarkably well the statistical properties of real data.

  1. Effectiveness of ice-vest cooling in prolonging work tolerance time during heavy exercise in the heat for personnel wearing Canadian forces chemical defense ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bain, B.

    Effectiveness of a portable, ice-pack cooling vest (Steelevest) in prolonging work tolerance time in chemical defense clothing in the heat (33 C dry bulb, 33% relative humidity or 25 C WBGT) was evaluated while subjects exercised at a metabolic rate of approximately 700 watts. Subjects were six male volunteers. The protocol consisted of a 20-minute treadmill walk at 1.33 m/s and 7.5% grade, followed by 15 minutes of a lifting task, 5 minutes of rest, then another 20 minutes of the lifting task, for a total of one hour. The lifting task consisted of lifting a 20 kg box, carrying it 3 meters and setting it down. This was followed by a 6 m walk (3 m back to the start point and 3 m back to the box), 15 seconds after which the lifting cycle began again. The work was classified as heavy as previously defined. This protocol was repeated until the subjects were unable to continue or they reached a physiological endpoint. Time to voluntary cessation or physiological endpoint was called the work tolerance time. Physiological endpoints were a rectal temperature of 39 C, a heart rate exceeding 95% of maximum for two consecutive minutes, or visible loss of motor control or nausea. The cooling vest had no effect on work tolerance time, rate of rise of rectal temperature, or sweat loss. It was concluded that the Steelevest ice-vest is ineffective in prolonging work tolerance time and preventing increases in rectal temperature while wearing chemical protective clothing.

  2. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
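
    For reference, the unoptimised forward recursion that zipHMM accelerates looks as follows. This is a minimal textbook sketch with per-step scaling, not the zipHMM implementation, and the example parameters are arbitrary.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Textbook forward algorithm with per-step scaling.

    pi  : (K,) initial state probabilities
    A   : (K, K) transition matrix, A[i, j] = P(state j | state i)
    B   : (K, M) emission matrix, B[i, o] = P(observation o | state i)
    obs : sequence of integer observations in range(M)
    Returns the log-likelihood of the observation sequence.
    """
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    alpha /= scale
    log_lik = np.log(scale)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate and weight by the emission
        scale = alpha.sum()
        alpha /= scale                  # rescale to avoid numerical underflow
        log_lik += np.log(scale)
    return log_lik

pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
print(forward_log_likelihood(pi, A, B, [0, 1, 1, 0, 1]))
```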

  3. Detecting critical state before phase transition of complex systems by hidden Markov model

    NASA Astrophysics Data System (ADS)

    Liu, Rui; Chen, Pei; Li, Yongjun; Chen, Luonan

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e., before-transition state, pre-transition state, and after-transition state, which can be considered as three different Markov processes. Thus, based on this dynamical feature, we present a novel computational method, i.e., hidden Markov model (HMM), to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e., the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma.

  4. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Although potentially more computationally efficient, such methods generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
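
    As a point of reference for the exact simulator mentioned above, a minimal Gillespie direct-method sketch for a simple birth-death process is shown below; it is not the adaptive multi-level scheme of the paper, and the rate constants are illustrative assumptions.

```python
import numpy as np

def gillespie_birth_death(x0=0, birth=10.0, death=0.5, t_end=10.0, seed=0):
    """Exact (direct-method) simulation of a birth-death process X -> X+1, X -> X-1."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([birth, death * x])     # propensities of the two reactions
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)        # exponential waiting time to next reaction
        if t > t_end:
            break
        if rng.uniform() * total < rates[0]:     # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

ts, xs = gillespie_birth_death()
print("number of reactions:", len(ts) - 1, " final state:", xs[-1])
```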

  5. Monitoring volcano activity through Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.

    2013-12-01

    During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval, Etna volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of RMS (Root Mean Square) of the seismic signal recorded by stations close to the summit area. Since RMS time series behavior is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using hidden Markov models (HMMs). HMMs are a powerful tool in modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to guess volcano states by means of HMMs and SAX.
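
    The SAX symbolisation step can be sketched in a few lines. The following is a minimal illustration (z-normalisation, piecewise aggregate approximation, Gaussian breakpoints); the segment count, alphabet size, and synthetic RMS-like series are assumptions, not the monitoring system's actual settings.

```python
import numpy as np
from scipy.stats import norm

def sax_symbols(series, n_segments=20, alphabet_size=4):
    """Convert a numeric series into SAX symbols (integers 0..alphabet_size-1)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                   # z-normalize the series
    segments = np.array_split(x, n_segments)       # piecewise aggregate approximation
    paa = np.array([seg.mean() for seg in segments])
    # Breakpoints are equiprobable quantiles of the standard normal distribution.
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    return np.digitize(paa, breakpoints)

# Synthetic stand-in for an RMS series (illustrative only).
rms = np.abs(np.sin(np.linspace(0, 20, 400))) + 0.1 * np.random.default_rng(0).normal(size=400)
print(sax_symbols(rms))
```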

  6. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series.

    PubMed

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
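
    A minimal sketch of the underlying two-state (low/high generation) Markov chain is shown below. It uses a plain first-order chain with illustrative transition probabilities; the additive long-memory extension driven by the empirical autocorrelation function is not reproduced here.

```python
import numpy as np

def simulate_two_state_chain(p_stay_low=0.95, p_stay_high=0.90, n_steps=1000, seed=0):
    """Simulate a first-order two-state chain: 0 = low wind generation, 1 = high."""
    rng = np.random.default_rng(seed)
    P = np.array([[p_stay_low, 1 - p_stay_low],
                  [1 - p_stay_high, p_stay_high]])
    states = np.empty(n_steps, dtype=int)
    states[0] = 0
    for t in range(1, n_steps):
        states[t] = rng.choice(2, p=P[states[t - 1]])   # sample the next state
    return states

s = simulate_two_state_chain()
print("fraction of high-wind periods:", s.mean())
```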

  7. Modeling long correlation times using additive binary Markov chains: Applications to wind generation time series

    NASA Astrophysics Data System (ADS)

    Weber, Juliane; Zachow, Christopher; Witthaut, Dirk

    2018-03-01

    Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.

  8. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    PubMed

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.
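
    The core operation of the method, propagating a particle distribution with a row-stochastic transition matrix, can be sketched as follows; the three-cell matrix stands in for one that would be built from the flow map and is purely illustrative.

```python
import numpy as np

# Illustrative row-stochastic transition matrix over three grid cells.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

dist = np.array([1.0, 0.0, 0.0])   # all probability mass starts in cell 0
for step in range(50):             # push the distribution forward in time
    dist = dist @ P

# After many steps the distribution approaches the chain's stationary distribution.
print("long-run cell occupation:", np.round(dist, 3))
```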

  9. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer these approaches toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Many roads to synchrony: natural time scales and their algorithms.

    PubMed

    James, Ryan G; Mahoney, John R; Ellison, Christopher J; Crutchfield, James P

    2014-04-01

    We consider two important time scales-the Markov and cryptic orders-that monitor how an observer synchronizes to a finitary stochastic process. We show how to compute these orders exactly and that they are most efficiently calculated from the ε-machine, a process's minimal unifilar model. Surprisingly, though the Markov order is a basic concept from stochastic process theory, it is not a probabilistic property of a process. Rather, it is a topological property and, moreover, it is not computable from any finite-state model other than the ε-machine. Via an exhaustive survey, we close by demonstrating that infinite Markov and infinite cryptic orders are a dominant feature in the space of finite-memory processes. We draw out the roles played in statistical mechanical spin systems by these two complementary length scales.

  11. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

    In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and based on the operating condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models are statistically combined to generate a discrepancy signal which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and to perform fault trending over time. The proposed methodology is validated on experimental data and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.

  12. An informational transition in conditioned Markov chains: Applied to genetics and evolution.

    PubMed

    Zhao, Lei; Lascoux, Martin; Waxman, David

    2016-08-07

    In this work we assume that we have some knowledge about the state of a population at two known times, when the dynamics is governed by a Markov chain such as a Wright-Fisher model. Such knowledge could be obtained, for example, from observations made on ancient and contemporary DNA, or during laboratory experiments involving long term evolution. A natural assumption is that the behaviour of the population, between observations, is related to (or constrained by) what was actually observed. The present work shows that this assumption has limited validity. When the time interval between observations is larger than a characteristic value, which is a property of the population under consideration, there is a range of intermediate times where the behaviour of the population has reduced or no dependence on what was observed and an equilibrium-like distribution applies. Thus, for example, if the frequency of an allele is observed at two different times, then for a large enough time interval between observations, the population has reduced or no dependence on the two observed frequencies for a range of intermediate times. Given observations of a population at two times, we provide a general theoretical analysis of the behaviour of the population at all intermediate times, and determine an expression for the characteristic time interval, beyond which the observations do not constrain the population's behaviour over a range of intermediate times. The findings of this work relate to what can be meaningfully inferred about a population at intermediate times, given knowledge of terminal states. Copyright © 2016 Elsevier Ltd. All rights reserved.
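
    For a finite discrete-time chain, the distribution at an intermediate time conditioned on both endpoints follows directly from the Markov property, with P(X_t = j | X_0 = a, X_T = b) proportional to (P^t)[a, j] (P^(T-t))[j, b]. The sketch below illustrates this with a small illustrative chain rather than a Wright-Fisher model.

```python
import numpy as np

def intermediate_distribution(P, a, b, t, T):
    """P(X_t = j | X_0 = a, X_T = b) for a discrete-time chain with transition matrix P."""
    Pt = np.linalg.matrix_power(P, t)        # forward propagation over t steps
    Prem = np.linalg.matrix_power(P, T - t)  # propagation over the remaining steps
    weights = Pt[a, :] * Prem[:, b]          # forward term times backward term
    return weights / weights.sum()

# Illustrative three-state chain and endpoints.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
print(intermediate_distribution(P, a=0, b=2, t=25, T=50))
```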

  13. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428

  14. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.

  15. A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gossman, W. E.

    1986-01-01

    A methodology for performing fault tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from the subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.

  16. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
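
    Mean first-passage times into a target set can also be obtained by a direct linear solve, which provides a useful cross-check on results such as these. The sketch below uses this standard fundamental-matrix approach on an illustrative three-state chain; it is not the authors' graph-transformation algorithm.

```python
import numpy as np

def mean_first_passage_times(P, targets):
    """Expected number of steps to first hit any state in `targets`, from each other state."""
    n = P.shape[0]
    others = [i for i in range(n) if i not in targets]
    Q = P[np.ix_(others, others)]        # transitions among the non-target states only
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return dict(zip(others, m))

# Illustrative three-state chain with state 2 as the target.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.2, 0.6]])
print(mean_first_passage_times(P, targets={2}))
```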

  17. Modelling breast cancer tumour growth for a stable disease population.

    PubMed

    Isheden, Gabriel; Humphreys, Keith

    2017-01-01

    Statistical models of breast cancer tumour progression have been used to further our knowledge of the natural history of breast cancer, to evaluate mammography screening in terms of mortality, to estimate overdiagnosis, and to estimate the impact of lead-time bias when comparing survival times between screen detected cancers and cancers found outside of screening programs. Multi-state Markov models have been widely used, but several research groups have proposed other modelling frameworks based on specifying an underlying biological continuous tumour growth process. These continuous models offer some advantages over multi-state models and have been used, for example, to quantify screening sensitivity in terms of mammographic density, and to quantify the effect of body size covariates on tumour growth and time to symptomatic detection. As of yet, however, the continuous tumour growth models are not sufficiently developed and require extensive computing to obtain parameter estimates. In this article, we provide a detailed description of the underlying assumptions of the continuous tumour growth model, derive new theoretical results for the model, and show how these results may help the development of this modelling framework. In illustrating the approach, we develop a model for mammography screening sensitivity, using a sample of 1901 post-menopausal women diagnosed with invasive breast cancer.

  18. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and the system without repair, with perfect and imperfect repair, and under CBM, with an absorbing set plotted by differential equations and verified. Through referring forward, the reliability value of the control unit is determined in different kinds of modes. Finally, weak nodes are noted in the control unit. PMID:29765629

  19. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
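
    The MCMC machinery underlying such tools can be reduced to a few lines in its simplest form. The following is a minimal random-walk Metropolis sketch for a one-dimensional target, not MC3's implementation or API; the standard-normal target is an illustrative assumption.

```python
import numpy as np

def metropolis(log_target, x0=0.0, n_samples=5000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target density."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

draws = metropolis(lambda x: -0.5 * x ** 2)   # standard normal target, up to a constant
print("sample mean (should be near 0):", draws.mean())
```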

  20. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  1. Birds and flame retardants: A review of the toxic effects on birds of historical and novel flame retardants.

    PubMed

    Guigueno, Mélanie F; Fernie, Kim J

    2017-04-01

    Flame retardants (FRs) are a diverse group of chemicals, many of which persist in the environment and bioaccumulate in biota. Although some FRs have been withdrawn from manufacturing and commerce (e.g., legacy FRs), many continue to be detected in the environment; moreover, their replacements and/or other novel FRs are also detected in biota. Here, we review and summarize the literature on the toxic effects of various FRs on birds. Birds integrate chemical information (exposure, effects) across space and time, making them ideal sentinels of environmental contamination. Following an adverse outcome pathway (AOP) approach, we synthesized information on 8 of the most commonly reported endpoints in avian FR toxicity research: molecular measures, thyroid-related measures, steroids, retinol, brain anatomy, behaviour, growth and development, and reproduction. We then identified which of these endpoints appear more/most sensitive to FR exposure, as determined by the frequency of significant effects across avian studies. The avian thyroid system, largely characterized by inconsistent changes in circulating thyroid hormones that were the only measure in many such studies, appears to be moderately sensitive to FR exposure relative to the other endpoints; circulating thyroid hormones, after reproductive measures, being the most frequently examined endpoint. A more comprehensive examination with concurrent measurements of multiple thyroid endpoints (e.g., thyroid gland, deiodinase enzymes) is recommended for future studies to more fully understand potential avian thyroid toxicity of FRs. More research is required to determine the effects of various FRs on avian retinol concentrations, inconsistently sensitive across species, and to concurrently assess multiple steroid hormones. Behaviour related to courtship and reproduction was the most sensitive of all selected endpoints, with significant effects recorded in every study. Among domesticated species (Galliformes), raptors (Accipitriformes and Falconiformes), songbirds (Passeriformes), and other species of birds (e.g. gulls), raptors seem to be the most sensitive to FR exposure across these measurements. We recommend that future avian research connect biochemical disruptions and changes in the brain to ecologically relevant endpoints, such as behaviour and reproduction. Moreover, connecting in vivo endpoints with molecular endpoints for non-domesticated avian species is also highly important, and essential to linking FR exposure with reduced fitness and population-level effects. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  2. Risk assessment by dynamic representation of vulnerability, exploitation, and impact

    NASA Astrophysics Data System (ADS)

    Cam, Hasan

    2015-05-01

    Assessing and quantifying cyber risk accurately in real-time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing cyber risk of a network in real-time by representing dynamically its vulnerabilities, exploitations, and impact using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real-time the exploit likelihood and impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed dynamically using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used in determining exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.

  3. Improved design of prodromal Alzheimer's disease trials through cohort enrichment and surrogate endpoints.

    PubMed

    Macklin, Eric A; Blacker, Deborah; Hyman, Bradley T; Betensky, Rebecca A

    2013-01-01

    Alzheimer's disease (AD) trials initiated during or before the prodrome are costly and lengthy because patients are enrolled long before clinical symptoms are apparent, when disease progression is slow. We hypothesized that design of such trials could be improved by: 1) selecting individuals at moderate near-term risk of progression to AD dementia (the current clinical standard) and 2) by using short-term surrogate endpoints that predict progression to AD dementia. We used a longitudinal cohort of older, initially non-demented, community-dwelling participants (n = 358) to derive selection criteria and surrogate endpoints and tested them in an independent national data set (n = 6,243). To identify a "mid-risk" subgroup, we applied conditional tree-based survival models to Clinical Dementia Rating (CDR) scale scores and common neuropsychological tests. In the validation cohort, a time-to-AD dementia trial applying these mid-risk selection criteria to a pool of all non-demented individuals could achieve equivalent power with 47% fewer participants than enrolling at random from that pool. We evaluated surrogate endpoints measurable over two years of follow-up based on cross-validated concordance between predictions from Cox models and observed time to AD dementia. The best performing surrogate, rate of change in CDR sum-of-boxes, did not reduce the trial duration required for equivalent power using estimates from the validation cohort, but alternative surrogates with better ability to predict time to AD dementia should be able to do so. The approach tested here might improve efficiency of prodromal AD trials using other potential measures and could be generalized to other diseases with long prodromal phases.

  4. Improved design of prodromal Alzheimer’s disease trials through cohort enrichment and surrogate endpoints

    PubMed Central

    Macklin, Eric A.; Blacker, Deborah; Hyman, Bradley T.; Betensky, Rebecca A.

    2013-01-01

    Summary Alzheimer’s disease (AD) trials initiated during or before the prodrome are costly and lengthy because patients are enrolled long before clinical symptoms are apparent, when disease progression is slow. We hypothesized that design of such trials could be improved by: (1) selecting individuals at moderate near-term risk of progression to AD dementia (the current clinical standard) and (2) by using short-term surrogate endpoints that predict progression to AD dementia. We used a longitudinal cohort of older, initially non-demented, community-dwelling participants (n=358) to derive selection criteria and surrogate endpoints and tested them in an independent national data set (n=6,243). To identify a “mid-risk” subgroup, we applied conditional tree-based survival models to Clinical Dementia Rating (CDR) scale scores and common neuropsychological tests. In the validation cohort, a time-to-AD dementia trial applying these mid-risk selection criteria to a pool of all non-demented individuals could achieve equivalent power with 47% fewer participants than enrolling at random from that pool. We evaluated surrogate endpoints measurable over two years of follow-up based on cross-validated concordance between predictions from Cox models and observed time to AD dementia. The best performing surrogate, rate of change in CDR sum-of-boxes, did not reduce the trial duration required for equivalent power using estimates from the validation cohort, but alternative surrogates with better ability to predict time to AD dementia should be able to do so. The approach tested here might improve efficiency of prodromal AD trials using other potential measures and could be generalized to other diseases with long prodromal phases. PMID:23629586

  5. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855

  6. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
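
    The basic identity being extended here is ordinary self-normalised importance sampling. The sketch below estimates an expectation under π from draws of π1, using iid draws and illustrative normal densities rather than Markov chain output, and without the regenerative standard errors developed in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Target pi: N(2, 1); proposal pi_1: N(0, 2). Estimate E_pi[X] by importance sampling.
x = rng.normal(loc=0.0, scale=2.0, size=20000)
weights = norm.pdf(x, loc=2.0, scale=1.0) / norm.pdf(x, loc=0.0, scale=2.0)
estimate = np.sum(weights * x) / np.sum(weights)   # self-normalised estimator
print("estimate of E_pi[X] (true value 2):", estimate)
```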

  7. Integrated microfluidic technology for sub-lethal and behavioral marine ecotoxicity biotests

    NASA Astrophysics Data System (ADS)

    Huang, Yushi; Reyes Aldasoro, Constantino Carlos; Persoone, Guido; Wlodkowic, Donald

    2015-06-01

    Changes in behavioral traits exhibited by small aquatic invertebrates are increasingly postulated as ethically acceptable and more sensitive endpoints for detection of water-born ecotoxicity than conventional mortality assays. Despite importance of such behavioral biotests, their implementation is profoundly limited by the lack of appropriate biocompatible automation, integrated optoelectronic sensors, and the associated electronics and analysis algorithms. This work outlines development of a proof-of-concept miniaturized Lab-on-a-Chip (LOC) platform for rapid water toxicity tests based on changes in swimming patterns exhibited by Artemia franciscana (Artoxkit M™) nauplii. In contrast to conventionally performed end-point analysis based on counting numbers of dead/immobile specimens we performed a time-resolved video data analysis to dynamically assess impact of a reference toxicant on swimming pattern of A. franciscana. Our system design combined: (i) innovative microfluidic device keeping free swimming Artemia sp. nauplii under continuous microperfusion as a mean of toxin delivery; (ii) mechatronic interface for user-friendly fluidic actuation of the chip; and (iii) miniaturized video acquisition for movement analysis of test specimens. The system was capable of performing fully programmable time-lapse and video-microscopy of multiple samples for rapid ecotoxicity analysis. It enabled development of a user-friendly and inexpensive test protocol to dynamically detect sub-lethal behavioral end-points such as changes in speed of movement or distance traveled by each animal.

  8. A Markovian model of evolving world input-output network

    PubMed Central

    Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of the economies, comparable to GDP shares as the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of the number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
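
    Two of the quantities used above, the steady-state probabilities and the Kemeny constant, can be computed directly from a transition matrix. The sketch below uses a small illustrative chain; note that definitions of the Kemeny constant in the literature differ by an additive constant, and the eigenvalue form used here is one common convention.

```python
import numpy as np

# Illustrative row-stochastic transition matrix over three "economies".
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()

# Kemeny constant via the non-unit eigenvalues of P (one common convention).
other = vals[np.argsort(np.abs(vals - 1.0))][1:]
kemeny = float(np.real(np.sum(1.0 / (1.0 - other))))

print("steady-state probabilities:", np.round(pi, 3))
print("Kemeny constant:", round(kemeny, 3))
```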

  9. Endovascular Management of Patients with Head and Neck Cancers Presenting with Acute Hemorrhage: A Single-Center Retrospective Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilas Boas, P. P.; Castro-Afonso, L. H. de; Monsignore, L. M.

    Purpose Acute hemorrhage associated with cancers of the head and neck is a life-threatening condition that requires immediate action. The aim of this study was to assess the safety and efficacy of endovascular embolization for acute hemorrhage in patients with head and neck cancers. Materials and Methods Data were retrospectively collected from patients with head and neck cancers who underwent endovascular embolization to treat acute hemorrhage. The primary endpoint was the rate of immediate control of hemorrhage during the first 24 h after embolization. The secondary endpoints were technical or clinical complications, rate of re-hemorrhage 24 h after the procedure, time from embolization to re-hemorrhage, hospitalization time, mortality rate, and time from embolization to death. Results Fifty-one patients underwent endovascular embolization. The primary endpoint was achieved in 94% of patients. The rate of technical complications was 5.8%, and no clinical complication was observed. Twelve patients (23.5%) had hemorrhage recurrence after an average time of 127.5 days. The average hospitalization time was 7.4 days, the mortality rate during the follow-up period was 66.6%, and the average time from embolization to death was 132.5 days. Conclusion Endovascular embolization to treat acute hemorrhage in patients with head and neck cancers is a safe and effective method for the immediate control of hemorrhage and results in a high rate of hemorrhage control. Larger studies are necessary to determine which treatment strategy is best for improving patient outcomes.

  10. Parametric State Space Structuring

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Tilgner, Marco

    1997-01-01

    Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of a continuous-time Markov chain are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
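
    The kind of Kronecker structure these methods exploit can be illustrated for two independent submodels, whose joint generator is the Kronecker sum Q = Q1 ⊗ I + I ⊗ Q2. The sketch below uses illustrative two-state generators and is not the partitioned actual-state-space scheme of the paper.

```python
import numpy as np

# Illustrative generators of two independent 2-state submodels (rows sum to zero).
Q1 = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
Q2 = np.array([[-0.5,  0.5],
               [ 3.0, -3.0]])

# The Kronecker sum of the generators describes the combined 4-state model
# without explicitly enumerating the joint state space by hand.
I1 = np.eye(Q1.shape[0])
I2 = np.eye(Q2.shape[0])
Q = np.kron(Q1, I2) + np.kron(I1, Q2)

print(Q)                    # 4 x 4 generator of the joint chain
print(Q.sum(axis=1))        # rows still sum to zero
```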

  11. Multiscale Modelling and Analysis of Collective Decision Making in Swarm Robotics

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey

    2014-01-01

    We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable. PMID:25369026

  12. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance-measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership, we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule by using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented, which leads to an interesting segmentation of the Austrian labour market.

  13. Theoretical restrictions on longest implicit time scales in Markov state models of biomolecular dynamics

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Pande, Vijay S.

    2018-01-01

    Markov state models (MSMs) have been widely used to analyze computer simulations of various biomolecular systems. They can capture conformational transitions much slower than an average or maximal length of a single molecular dynamics (MD) trajectory from the set of trajectories used to build the MSM. A rule of thumb claiming that the slowest implicit time scale captured by an MSM should be comparable by the order of magnitude to the aggregate duration of all MD trajectories used to build this MSM has been known in the field. However, this rule has never been formally proved. In this work, we present analytical results for the slowest time scale in several types of MSMs, supporting the above rule. We conclude that the slowest implicit time scale equals the product of the aggregate sampling and four factors that quantify: (1) how much statistics on the conformational transitions corresponding to the longest implicit time scale is available, (2) how good the sampling of the destination Markov state is, (3) the gain in statistics from using a sliding window for counting transitions between Markov states, and (4) a bias in the estimate of the implicit time scale arising from finite sampling of the conformational transitions. We demonstrate that in many practically important cases all these four factors are on the order of unity, and we analyze possible scenarios that could lead to their significant deviation from unity. Overall, we provide for the first time analytical results on the slowest time scales captured by MSMs. These results can guide further practical applications of MSMs to biomolecular dynamics and allow for higher computational efficiency of simulations.
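
    The implicit (implied) time scales discussed above are obtained from the eigenvalues of the MSM transition matrix as t_i = -τ / ln λ_i. Below is a minimal sketch with an illustrative three-state transition matrix, not one estimated from MD data.

```python
import numpy as np

def implied_timescales(T, lag_time):
    """Implied time scales t_i = -lag / ln(lambda_i) from an MSM transition matrix."""
    eigvals = np.sort(np.real(np.linalg.eigvals(T)))[::-1]
    nontrivial = eigvals[1:]                 # drop the stationary eigenvalue 1
    return -lag_time / np.log(nontrivial)

# Illustrative three-state MSM estimated at some lag time (values are assumptions).
T = np.array([[0.97, 0.02, 0.01],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
print(implied_timescales(T, lag_time=1.0))
```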

  14. Phosgene- and chlorine-induced acute lung injury in rats: comparison of cardiopulmonary function and biomarkers in exhaled breath.

    PubMed

    Luo, Sa; Trübel, Hubert; Wang, Chen; Pauluhn, Jürgen

    2014-12-04

    This study compares changes in cardiopulmonary function, selected endpoints in exhaled breath, blood, and bronchoalveolar lavage fluid (BAL) following a single, high-level 30-min nose-only exposure of rats to chlorine and phosgene gas. The time-course of lung injury was systematically examined up to 1-day post-exposure with the objective to identify early diagnostic biomarkers suitable to guide countermeasures to accidental exposures. Chlorine, due to its water solubility, penetrates the lung concentration-dependently whereas the poorly water-soluble phosgene reaches the alveolar region without any appreciable extent of airway injury. Cardiopulmonary endpoints were continually recorded by telemetry and barometric plethysmography for 20h. At several time points blood was collected to evaluate evidence of hemoconcentration, changes in hemostasis, and osteopontin. One day post-exposure, protein, osteopontin, and cytodifferentials were determined in BAL. Nitric oxide (eNO) and eCO2 were non-invasively examined in exhaled breath 5 and 24h post-exposure. Chlorine-exposed rats elaborated a reflexively-induced decreased respiratory rate and bradycardia whereas phosgene-exposed rats developed minimal changes in lung function but a similar magnitude of bradycardia. Despite similar initial changes in cardiac function, the phosgene-exposed rats showed different time-course changes of hemoconcentration and lung weights as compared to chlorine-exposed rats. eNO/eCO2 ratios were most affected in chlorine-exposed rats in the absence of any marked time-related changes. This outcome appears to demonstrate that nociceptive reflexes with changes in cardiopulmonary function resemble typical patterns of mixed airway-alveolar irritation in chlorine-exposed rats and alveolar irritation in phosgene-exposed rats. The degree and time-course of pulmonary injury was reflected best by eNO/eCO2 ratios, hemoconcentration, and protein in BAL. Increased fibrin in blood occurred only in chlorine-exposed rats 1-day post-exposure. Hence, the analysis of NO and CO2 in exhaled breath, including endpoints in blood mirroring changes in the peripheral to pulmonary fluid distribution, seem to be sensitive diagnostic endpoints readily available for early prognostic assessment of severity of injury and efficacy of any chosen countermeasure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    PubMed

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.

  16. Markov Model of Accident Progression at Fukushima Daiichi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuadra A.; Bari R.; Cheng, L-Y

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of Units 1 and 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation, was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: it is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states. It also readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.
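
    To make the general modeling approach concrete, the sketch below sets up a small continuous-time Markov chain of accident-progression states with competing failure (progression) and repair (cooling recovery) rates and integrates the forward Kolmogorov equation to obtain state probabilities over time. It is only an illustration of the technique; the four states and all rate values are hypothetical and are not taken from the Fukushima analysis.

        import numpy as np

        # Hypothetical 4-state accident-progression chain; states and rates are invented.
        # States: 0 = core cooled, 1 = fuel uncovered, 2 = fuel damaged, 3 = vessel breached.
        lam = [0.2, 0.5, 0.1]   # progression rates out of states 0, 1, 2 (per hour)
        mu = 0.05               # repair (cooling recovery) rate from states 1 and 2 back to 0

        Q = np.zeros((4, 4))
        for i, rate in enumerate(lam):
            Q[i, i + 1] = rate
        Q[1, 0] = Q[2, 0] = mu
        np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of a generator sum to zero

        # Forward (Kolmogorov) equation dp/dt = p Q, integrated with a simple Euler scheme.
        p = np.array([1.0, 0.0, 0.0, 0.0])    # start in the cooled state
        dt = 0.01
        for hour in range(1, 73):
            for _ in range(int(1 / dt)):
                p = p + dt * (p @ Q)
            if hour in (1, 10, 24, 72):
                print(f"t = {hour:2d} h  state probabilities:", np.round(p, 3))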

  17. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    PubMed

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.

  18. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers that are generated by some counting process, for example, a Poisson or negative binomial. Although zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state), whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
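
    The generative side of such a model is easy to sketch: a hidden two-state Markov chain switches a roadway segment between a zero-accident state and a normal-count state, and counts are drawn from a negative binomial only in the latter. The Python snippet below simulates that data-generating process with made-up transition probabilities and negative binomial parameters; it is not the Bayesian estimation procedure used in the paper.

        import numpy as np

        # Illustrative simulation of the assumed data-generating process: a hidden two-state
        # Markov chain (state 0 = zero-accident state, state 1 = normal-count state) with
        # negative binomial counts in the normal state. All parameter values are invented.
        rng = np.random.default_rng(0)
        P = np.array([[0.95, 0.05],   # transition probabilities out of the zero state
                      [0.10, 0.90]])  # transition probabilities out of the normal state
        n_periods, state, counts = 20, 0, []
        for _ in range(n_periods):
            state = rng.choice(2, p=P[state])
            counts.append(0 if state == 0 else int(rng.negative_binomial(n=2, p=0.4)))
        print(counts)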

  19. Integrated stationary Ornstein-Uhlenbeck process, and double integral processes

    NASA Astrophysics Data System (ADS)

    Abundo, Mario; Pirozzi, Enrica

    2018-03-01

    We find a representation of the integral of the stationary Ornstein-Uhlenbeck (ISOU) process in terms of Brownian motion Bt; moreover, we show that, under certain conditions on the functions f and g, the double integral process (DIP) D(t) = ∫_β^t g(s) (∫_α^s f(u) dB_u) ds can be thought of as the integral of a suitable Gauss-Markov process. Some theoretical and application details are given; in particular, we provide a simulation formula based on that representation by which sample paths, probability densities and first-passage times of the ISOU process are obtained; the first-passage times of the DIP are also studied.
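
    As a quick illustration of how such sample paths can be generated numerically (independently of the representation derived in the paper), the sketch below runs a plain Euler-Maruyama scheme for a stationary OU process X and accumulates its time integral Y(t) = ∫_0^t X(s) ds. The parameters theta, sigma, and the step size are arbitrary choices for the example.

        import numpy as np

        # Euler-Maruyama sketch for Y(t) = integral_0^t X(s) ds, where X is a stationary
        # OU process dX = -theta*X dt + sigma dB; theta, sigma and the step size are
        # arbitrary illustrative choices (this is not the paper's representation formula).
        rng = np.random.default_rng(1)
        theta, sigma = 1.0, 0.5
        dt, n_steps = 0.01, 1000

        x = rng.normal(0.0, sigma / np.sqrt(2.0 * theta))  # draw X(0) from the stationary law
        y = 0.0
        for _ in range(n_steps):
            y += x * dt                                    # accumulate the integral
            x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal()
        print("X(T) =", round(x, 4), " Y(T) =", round(y, 4))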

  20. Daily remote monitoring of implantable cardioverter-defibrillators: insights from the pooled patient-level data from three randomized controlled trials (IN-TIME, ECOST, TRUST).

    PubMed

    Hindricks, Gerhard; Varma, Niraj; Kacet, Salem; Lewalter, Thorsten; Søgaard, Peter; Guédon-Moreau, Laurence; Proff, Jochen; Gerds, Thomas A; Anker, Stefan D; Torp-Pedersen, Christian

    2017-06-07

    Remote monitoring of implantable cardioverter-defibrillators may improve clinical outcome. A recent meta-analysis of three randomized controlled trials (TRUST, ECOST, IN-TIME) using a specific remote monitoring system with daily transmissions [Biotronik Home Monitoring (HM)] demonstrated improved survival. We performed a patient-level analysis to verify this result with appropriate time-to-event statistics and to investigate further clinical endpoints. Individual data of the TRUST, ECOST, and IN-TIME patients were pooled to calculate absolute risks of endpoints at 1-year follow-up for HM vs. conventional follow-up. All-cause mortality analysis involved all three trials (2405 patients). Other endpoints involved two trials, ECOST and IN-TIME (1078 patients), in which an independent blinded endpoint committee adjudicated the underlying causes of hospitalizations and deaths. The absolute risk of death at 1 year was reduced by 1.9% in the HM group (95% CI: 0.1-3.8%; P = 0.037), equivalent to a risk ratio of 0.62. Also the combined endpoint of all-cause mortality or hospitalization for worsening heart failure (WHF) was significantly reduced (by 5.6%; P = 0.007; risk ratio 0.64). The composite endpoint of all-cause mortality or cardiovascular (CV) hospitalization tended to be reduced by a similar degree (4.1%; P = 0.13; risk ratio 0.85) but without statistical significance. In a pooled analysis of the three trials, HM reduced all-cause mortality and the composite endpoint of all-cause mortality or WHF hospitalization. The similar magnitudes of absolute risk reductions for WHF and CV endpoints suggest that the benefit of HM is driven by the prevention of heart failure exacerbation.

  1. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. For some problems a Lagrangian framework offers benefits, while for others an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  2. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution under a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called the linear Gaussian importance sampling algorithm for general hybrid models (LGIS). LGIS is designed for hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function with additive Gaussian noise to approximate the true conditional probability distribution of a continuous variable given both its parents and the evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS-GHM is very promising.
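
    The importance sampling idea underlying such methods can be shown on a deliberately tiny example: a two-node linear Gaussian model X ~ N(0, 1), Y | X ~ N(2X, 1), where samples from a wider Gaussian proposal are reweighted by the ratio of target to proposal densities to estimate a posterior mean. This is only a generic importance sampling sketch with invented numbers, not the LGIS algorithm itself (which learns its importance function adaptively).

        import numpy as np

        # Generic importance-sampling sketch on a toy two-node linear Gaussian model:
        # X ~ N(0, 1), Y | X ~ N(2X, 1); estimate E[X | Y = y_obs] by reweighting samples
        # from a wider Gaussian proposal. Numbers are invented.
        rng = np.random.default_rng(2)
        y_obs, n = 3.0, 100_000

        x = rng.normal(0.0, 2.0, size=n)                       # proposal q(x) = N(0, 2^2)
        log_target = -0.5 * x**2 - 0.5 * (y_obs - 2.0 * x)**2  # log p(x) p(y_obs | x), up to a constant
        log_proposal = -0.5 * (x / 2.0)**2 - np.log(2.0)       # log q(x), up to the same constant
        w = np.exp(log_target - log_proposal)
        w /= w.sum()                                           # self-normalized weights
        print("estimated E[X | Y=3] =", round(float(np.sum(w * x)), 3), "(exact value: 1.2)")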

  3. The effect of sensory uncertainty due to amblyopia (lazy eye) on the planning and execution of visually-guided 3D reaching movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2012-01-01

    Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50-100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R(2)) which correlates the spatial position of the limb during the movement to endpoint position. Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R(2) values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis.

  4. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2) which correlates the spatial position of the limb during the movement to endpoint position. Results Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis. PMID:22363549

  5. Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)

    NASA Astrophysics Data System (ADS)

    Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan

    2010-05-01

    The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.

  6. Lotka-Volterra competition models for sessile organisms.

    PubMed

    Spencer, Matthew; Tanner, Jason E

    2008-04-01

    Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
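
    The key modelling step, transition rates that depend on the abundance of the destination state, can be illustrated with a toy continuous-time mean-field system. In the sketch below each point switches from state j to state i at a per-capita rate proportional to the abundance x_i of the destination state, which yields Lotka-Volterra-like quadratic dynamics; the three states and the rate matrix are invented for the example and are unrelated to the coral reef data.

        import numpy as np

        # Toy mean-field sketch of abundance-dependent transitions: a point in state j
        # switches to state i at a per-capita rate c[i, j] * x[i], i.e. proportional to the
        # abundance of the destination state, giving Lotka-Volterra-like quadratic dynamics.
        # The three states (two species plus empty space) and the rates are invented.
        c = np.array([[0.0, 0.3, 0.1],
                      [0.2, 0.0, 0.4],
                      [0.1, 0.2, 0.0]])
        x = np.array([0.3, 0.3, 0.4])        # initial state frequencies (sum to 1)
        dt = 0.01
        for _ in range(5000):
            gain = x * (c @ x)               # flux into each state from all other states
            loss = x * (c.T @ x)             # flux out of each state to all other states
            x = x + dt * (gain - loss)       # total frequency is conserved
        print("final frequencies:", np.round(x, 3), " sum =", round(float(x.sum()), 3))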

  7. Reliability of a Manual Procedure for Marking the EZ Endpoint Location in Patients with Retinitis Pigmentosa

    PubMed Central

    Ramachandran, Rithambara; Cai, Cindy X.; Lee, Dongwon; Epstein, Benjamin C.; Locke, Kirsten G.; Birch, David G.; Hood, Donald C.

    2016-01-01

    Purpose: We developed and evaluated a training procedure for marking the endpoints of the ellipsoid zone (EZ), also known as the inner segment/outer segment (IS/OS) border, on frequency domain optical coherence tomography (fdOCT) scans from patients with retinitis pigmentosa (RP). Methods: A manual for marking EZ endpoints was developed and used to train 2 inexperienced graders. After training, an experienced grader and the 2 trained graders marked the endpoints on fdOCT horizontal line scans through the macula from 45 patients with RP. They marked the endpoints on these same scans again 1 month later. Results: Intragrader agreement was excellent. The intraclass correlation coefficient (ICC) was 0.99, the average difference of endpoint locations (19.6 μm) was close to 0 μm, and the 95% limits were between −284 and 323 μm, approximately ±1.1°. Intergrader agreement was also excellent. The ICC values were 0.98 (time 1) and 0.97 (time 2), the average difference among graders was close to zero, and the 95% limits of these differences were less than 350 μm, approximately 1.2°, for both test times. Conclusions: While automated algorithms are becoming increasingly accurate, EZ endpoints still have to be verified manually and corrected when necessary. With training, the inter- and intragrader agreement of manually marked endpoints is excellent. Translational Relevance: For clinical studies, the EZ endpoints can be marked by hand if a training procedure, including a manual, is used. The endpoint confidence intervals, well under ±2.0°, are considerably smaller than the 6° spacing for the typically used static visual field. PMID:27226930

  8. Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction

    NASA Astrophysics Data System (ADS)

    Bui, Lam Thu; Barlow, Michael

    We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at a population or an individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information on when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.

  9. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    NASA Astrophysics Data System (ADS)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, in which each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways, namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals, we simulate the behavior of the population and compare the results. We find that the second method is more accurate in describing the behavior of the population when compared to the observed values in a direct simulation of the Markov chain.

  10. A unified framework for the evaluation of surrogate endpoints in mental-health clinical trials.

    PubMed

    Molenberghs, Geert; Burzykowski, Tomasz; Alonso, Ariel; Assam, Pryseley; Tilahun, Abel; Buyse, Marc

    2010-06-01

    For a number of reasons, surrogate endpoints are considered instead of the so-called true endpoint in clinical studies, especially when such endpoints can be measured earlier, and/or with less burden for patient and experimenter. Surrogate endpoints may occur more frequently than their standard counterparts. For these reasons, it is not surprising that the use of surrogate endpoints in clinical practice is increasing. Building on the seminal work of Prentice(1) and Freedman et al.,(2) Buyse et al. (3) framed the evaluation exercise within a meta-analytic setting, in an effort to overcome difficulties that necessarily surround evaluation efforts based on a single trial. In this article, we review the meta-analytic approach for continuous outcomes, discuss extensions to non-normal and longitudinal settings, as well as proposals to unify the somewhat disparate collection of validation measures currently on the market. Implications for design and for predicting the effect of treatment in a new trial, based on the surrogate, are discussed. A case study in schizophrenia is analysed.

  11. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.

  12. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  13. Efficient estimation of the distribution of time to composite endpoint when some endpoints are only partially observed

    PubMed Central

    Daniel, Rhian M.; Tsiatis, Anastasios A.

    2014-01-01

    Two common features of clinical trials, and other longitudinal studies, are (1) a primary interest in composite endpoints, and (2) the problem of subjects withdrawing prematurely from the study. In some settings, withdrawal may only affect observation of some components of the composite endpoint, for example when another component is death, information on which may be available from a national registry. In this paper, we use the theory of augmented inverse probability weighted estimating equations to show how such partial information on the composite endpoint for subjects who withdraw from the study can be incorporated in a principled way into the estimation of the distribution of time to composite endpoint, typically leading to increased efficiency without relying on additional assumptions above those that would be made by standard approaches. We describe our proposed approach theoretically, and demonstrate its properties in a simulation study. PMID:23722304

  14. Stochastic-shielding approximation of Markov chains and its application to efficiently simulate random ion-channel gating.

    PubMed

    Schmandt, Nicolaus T; Galán, Roberto F

    2012-09-14

    Markov chains provide realistic models of numerous stochastic processes in nature. We demonstrate that in any Markov chain, the change in occupation number in state A is correlated to the change in occupation number in state B if and only if A and B are directly connected. This implies that if we are only interested in state A, fluctuations in B may be replaced with their mean if state B is not directly connected to A, which shortens computing time considerably. We show the accuracy and efficacy of our approximation theoretically and in simulations of stochastic ion-channel gating in neurons.
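
    The approximation can be demonstrated on a small population-of-channels example. In the sketch below, a linear three-state scheme C1 <-> C2 <-> O is simulated with per-edge transition counts; the edge not directly connected to the observed open state O is advanced with its deterministic mean flux (the "shielded" edge), while the edge adjacent to O keeps its Poisson fluctuations. States, rates, and step size are illustrative, and clamping the Poisson means at zero is a convenience of the sketch rather than part of the published method.

        import numpy as np

        # Stochastic-shielding-flavoured sketch for a linear three-state channel scheme
        # C1 <-> C2 <-> O with 100 channels (rates and step size are invented). We observe
        # the open state O, so transitions on the C1<->C2 edge are replaced by their mean
        # fluxes, while the C2<->O edge keeps its Poisson fluctuations.
        rng = np.random.default_rng(3)
        a12, a21, a23, a32 = 2.0, 1.0, 3.0, 1.5   # transition rates (per ms)
        n = np.array([100.0, 0.0, 0.0])           # channel counts in C1, C2, O
        dt, steps = 0.001, 5000                   # 5 ms of simulated time
        for _ in range(steps):
            f12 = a12 * n[0] * dt                 # shielded edge: mean fluxes only
            f21 = a21 * n[1] * dt
            f23 = rng.poisson(max(a23 * n[1], 0.0) * dt)   # edge adjacent to O: Poisson counts
            f32 = rng.poisson(max(a32 * n[2], 0.0) * dt)
            n += np.array([f21 - f12, f12 - f21 + f32 - f23, f23 - f32])
        print("open fraction ~", round(float(n[2] / n.sum()), 3), "(detailed balance gives 4/7)")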

  15. Study design of J-ELD AF: A multicenter prospective cohort study to investigate the efficacy and safety of apixaban in Japanese elderly patients.

    PubMed

    Akao, Masaharu; Yamashita, Takeshi; Okumura, Ken

    2016-12-01

    Apixaban, one of the non-vitamin K antagonist oral anticoagulants, was reported to be effective and safe for stroke prevention in patients with atrial fibrillation (AF) based on a global randomized clinical trial, but data are limited on the efficacy and safety of apixaban in Japanese elderly patients. The J-ELD AF Registry is a large-scale, contemporary observational study, continuously and prospectively registering elderly Japanese patients with AF aged 75 years or older who are currently taking apixaban or who are to receive apixaban in daily clinical practice, and accumulating outcomes during a one-year follow-up period. In addition to standard baseline characteristics, prothrombin time and anti-Xa activity will be measured to investigate the biomarker characteristics. The primary efficacy endpoints will be stroke and systemic embolism, and the primary safety endpoint will be major bleeding requiring hospitalization. The secondary endpoints in this study will be all-cause death, cardiovascular death, acute myocardial infarction, and the composite of stroke/systemic embolism, cardiovascular death, and acute myocardial infarction. As a primary analysis, the primary/secondary endpoints in the enrolled patients will be summarized for the entire group, and the incidence of events will be described by age, CHADS2 score, HAS-BLED score, and apixaban dose (5 or 2.5 mg bid). Factors that independently predict the incidence of the primary/secondary endpoints will be identified by Cox regression. The relationship between the biomarkers and the primary/secondary endpoints will also be examined in an explorative manner. This study will provide important information on the efficacy and safety of apixaban in elderly Japanese patients aged 75 years or older, as well as of low-dose administration of apixaban (2.5 mg bid), for which many elderly Japanese patients are indicated. Copyright © 2016 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  16. Useful pharmacodynamic endpoints in children: selection, measurement, and next steps

    PubMed Central

    Kelly, Lauren E; Sinha, Yashwant; Barker, Charlotte I S; Standing, Joseph F; Offringa, Martin

    2018-01-01

    Pharmacodynamic (PD) endpoints are essential for establishing the benefit-to-risk ratio for therapeutic interventions in children and neonates. This article discusses the selection of an appropriate measure of response, the PD endpoint, which is a critical methodological step in designing pediatric efficacy and safety studies. We provide an overview of existing guidance on the choice of PD endpoints in pediatric clinical research. We identified several considerations relevant to the selection and measurement of PD endpoints in pediatric clinical trials, including the use of biomarkers, modeling, compliance, scoring systems, and validated measurement tools. To be useful, PD endpoints in children need to be clinically relevant, responsive to both treatment and/or disease progression, reproducible, and reliable. In most pediatric disease areas, this requires significant validation efforts. We propose a minimal set of criteria for useful PD endpoint selection and measurement. We conclude that, given the current heterogeneity of pediatric PD endpoint definitions and measurements, both across and within defined disease areas, there is an acute need for internationally agreed, validated, and condition-specific pediatric PD endpoints that consider the needs of all stakeholders, including healthcare providers, policy makers, patients, and families. PMID:29667952

  17. Elevated serum creatinine at baseline predicts poor outcome in patients receiving cardiac resynchronization therapy.

    PubMed

    Shalaby, Alaa; El-Saed, Aiman; Voigt, Andrew; Albany, Constantine; Saba, Samir

    2008-05-01

    Renal insufficiency is recognized as a predictor of mortality and poor outcome in heart failure patients. We sought to study the impact of baseline serum creatinine on subsequent outcome in cardiac resynchronization therapy (CRT) recipients. We retrospectively reviewed hospital records of all CRT recipients at the Pittsburgh Veterans Affairs (VA) Healthcare System (2003-2005) and the University of Pittsburgh Medical Center (2004). We recorded clinical characteristics at the time of implantation, including demographics, New York Heart Association (NYHA) functional class, ejection fraction, QRS duration, cardiomyopathy etiology, medical history, medication use, and serum creatinine. Mortality alone and mortality combined with heart failure hospitalization were the study endpoints. Out of the 330 patients studied, a total of 66 (20.0%) patients died over a mean follow-up duration of 19.7 +/- 9.0 months (range 1-44). The cohort was divided into three creatinine tertiles (0.6-1.0, 1.1-1.3, 1.4-3.0 mg/dL). Both study endpoints were observed more frequently in patients in the highest creatinine tertile compared to the others (28.7% vs 14.0%, P = 0.008 for death and 41.6% vs 21.5%, P = 0.001 for the combined endpoint). High creatinine remained an independent predictor of mortality (hazard ratio [HR] 1.89, 95% confidence interval [CI] 1.06-3.39, P = 0.032) and the combined endpoint (HR 1.94, 95% CI 1.20-3.13, P = 0.007) in multivariate-adjusted models. When creatinine was analyzed as a continuous variable, an increase of 0.1 mg/dL was associated with an 11% increase in mortality risk and a 7% increase in the combined endpoint. In an unselected cohort of CRT recipients, the baseline creatinine was found to predict worse survival and poor outcome over a modest follow-up duration.

  18. Mode identification using stochastic hybrid models with applications to conflict detection and resolution

    NASA Astrophysics Data System (ADS)

    Naseri Kouzehgarani, Asal

    2009-12-01

    Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor; our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms, that is, situations that would not develop into an actual conflict or would resolve naturally within the appropriate time horizon, thereby introducing a measure of probabilistic uncertainty into any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generate the sequence of observations. The problem of self separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, as well as to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics and partially observable, uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probability distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft are moving along streams and can perform cruise, acceleration, climb, and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.

  19. Considerations of multiple imputation approaches for handling missing data in clinical trials.

    PubMed

    Quan, Hui; Qi, Li; Luo, Xiaodong; Darchy, Loic

    2018-07-01

    Missing data exist in all clinical trials, and the missing data issue is a very serious one for the interpretability of trial results. There is no universally applicable solution for all missing data problems. Methods used for handling missing data depend on the circumstances, particularly the assumptions about the missing data mechanisms. In recent years, when the missing at random mechanism cannot be assumed, conservative approaches such as the control-based and return-to-baseline multiple imputation approaches are applied for dealing with missing data. In this paper, we focus on the variability in the data analysis of these approaches. As demonstrated by examples, the choice of the variability can impact the conclusion of the analysis. Besides methods for continuous endpoints, we also discuss methods for binary and time-to-event endpoints as well as considerations for non-inferiority assessment. Copyright © 2018. Published by Elsevier Inc.

  20. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
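
    The redistribution idea at the heart of such a sensitivity analysis can be written down in a few lines: missing outcomes are reallocated to the favorable and unfavorable categories according to an assumed probability, which is then varied as a tipping-point sweep. The sketch below is a bare-bones illustration with invented counts; it omits the visit structure, stratification, and covariance estimation handled by the paper's closed-form method.

        def adjusted_proportion(n_fav, n_unfav, n_missing, pi):
            """Favorable-outcome proportion after reallocating missing outcomes,
            assuming each missing outcome would have been favorable with probability pi."""
            return (n_fav + pi * n_missing) / (n_fav + n_unfav + n_missing)

        # Tipping-point style sweep over pi with invented counts (40 favorable,
        # 50 unfavorable, 10 missing in one treatment arm).
        for pi in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f"pi = {pi:4.2f}  adjusted proportion = {adjusted_proportion(40, 50, 10, pi):.3f}")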

  1. Sur la vitesse d'extinction d'une population dans un environnement aléatoire [On the speed of extinction of a population in a random environment].

    PubMed

    Bacaër, Nicolas

    2017-05-01

    This study focuses on the speed of extinction of a population living in a random environment that follows a continuous-time Markov chain. Each individual dies or reproduces at a rate that depends on the environment. The number of offspring during reproduction follows a given probability law that also depends on the environment. In the so-called subcritical case where the population goes for sure to extinction, there is an explicit formula for the speed of extinction. In some sense, environmental stochasticity slows down population extinction. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.

  2. ADM-CLE Approach for Detecting Slow Variables in Continuous Time Markov Chains and Dynamic Data

    DTIC Science & Technology

    2015-04-01

    [Only fragments of this report's text were extracted.] The ADM-CLE approach is described as more efficient than the ADM method for a large class of chemical reaction systems, because it replaces the computationally most expensive step of ADM. An example system with m = 4 reactions, referred to as CS-I (equation (2.2) in the report), is ∅ →(k1) X1 ⇄(k2, k3) X2 →(k4) ∅.

  3. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects

    PubMed Central

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity. PMID:27010993

  4. Structured Modeling and Analysis of Stochastic Epidemics with Immigration and Demographic Effects.

    PubMed

    Baumann, Hendrik; Sandmann, Werner

    2016-01-01

    Stochastic epidemics with open populations of variable population sizes are considered where due to immigration and demographic effects the epidemic does not eventually die out forever. The underlying stochastic processes are ergodic multi-dimensional continuous-time Markov chains that possess unique equilibrium probability distributions. Modeling these epidemics as level-dependent quasi-birth-and-death processes enables efficient computations of the equilibrium distributions by matrix-analytic methods. Numerical examples for specific parameter sets are provided, which demonstrates that this approach is particularly well-suited for studying the impact of varying rates for immigration, births, deaths, infection, recovery from infection, and loss of immunity.

  5. Failure monitoring in dynamic systems: Model construction without fault training data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Mellstrom, J.

    1993-01-01

    Advances in the use of autoregressive models, pattern recognition methods, and hidden Markov models for on-line health monitoring of dynamic systems (such as DSN antennas) have recently been reported. However, the algorithms described in previous work have the significant drawback that data acquired under fault conditions are assumed to be available in order to train the model used for monitoring the system under observation. This article reports that this assumption can be relaxed and that hidden Markov monitoring models can be constructed using only data acquired under normal conditions and prior knowledge of the system characteristics being measured. The method is described and evaluated on data from the DSS 13 34-m beam wave guide antenna. The primary conclusion from the experimental results is that the method is indeed practical and holds considerable promise for application at the 70-m antenna sites where acquisition of fault data under controlled conditions is not realistic.

  6. Multidimensional Latent Markov Models in a Developmental Study of Inhibitory Control and Attentional Flexibility in Early Childhood

    ERIC Educational Resources Information Center

    Bartolucci, Francesco; Solis-Trapala, Ivonne L.

    2010-01-01

    We demonstrate the use of a multidimensional extension of the latent Markov model to analyse data from studies with repeated binary responses in developmental psychology. In particular, we consider an experiment based on a battery of tests which was administered to pre-school children, at three time periods, in order to measure their inhibitory…

  7. Hidden Markov models for character recognition.

    PubMed

    Vlontzos, J A; Kung, S Y

    1992-01-01

    A hierarchical system for character recognition with hidden Markov model knowledge sources which solve both the context sensitivity problem and the character instantiation problem is presented. The system achieves 97-99% accuracy using a two-level architecture and has been implemented using a systolic array, thus permitting real-time (1 ms per character) multifont and multisize printed character recognition as well as handwriting recognition.

  8. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    PubMed

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
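
    A minimal way to see the two scan orders side by side is a bivariate Gaussian Gibbs sampler, where each full conditional is itself Gaussian. The sketch below implements systematic scan (alternate coordinates) and random scan (pick a coordinate uniformly at each update) with an arbitrary correlation parameter; it only illustrates the mechanics of the two schedules, not the mixing-time bounds proved in the paper.

        import numpy as np

        # Bivariate Gaussian Gibbs sampler illustrating the two scan orders; the full
        # conditionals are X1 | X2 = x ~ N(rho*x, 1 - rho^2) and symmetrically for X2.
        # rho and the number of updates are arbitrary illustrative choices.
        rng = np.random.default_rng(4)
        rho, n_updates = 0.9, 40_000
        sd = np.sqrt(1.0 - rho**2)

        def gibbs(systematic):
            x, draws = np.zeros(2), []
            for t in range(n_updates):
                i = t % 2 if systematic else int(rng.integers(2))  # which coordinate to update
                x[i] = rng.normal(rho * x[1 - i], sd)              # draw from the full conditional
                draws.append(x.copy())
            return np.array(draws)

        for label, flag in (("systematic scan", True), ("random scan", False)):
            s = gibbs(flag)
            print(label, "sample correlation:", round(float(np.corrcoef(s[:, 0], s[:, 1])[0, 1]), 3))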

  9. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much

    PubMed Central

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429

  10. Persistence and ergodicity of plant disease model with markov conversion and impulsive toxicant input

    NASA Astrophysics Data System (ADS)

    Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds for extinction and for persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.

  11. 50 CFR 226.212 - Critical habitat for 13 Evolutionarily Significant Units (ESUs) of salmon and steelhead...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... floodplain connectivity to form and maintain physical habitat conditions and support juvenile growth and... endpoint(s) in: Crooked Creek (46.3033, -123.6222); East Fork Grays River (46.4425, -123.4081); Fossil...

  12. 50 CFR 226.212 - Critical habitat for 13 Evolutionarily Significant Units (ESUs) of salmon and steelhead...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... floodplain connectivity to form and maintain physical habitat conditions and support juvenile growth and... endpoint(s) in: Crooked Creek (46.3033, -123.6222); East Fork Grays River (46.4425, -123.4081); Fossil...

  13. 50 CFR 226.212 - Critical habitat for 13 Evolutionarily Significant Units (ESUs) of salmon and steelhead...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... floodplain connectivity to form and maintain physical habitat conditions and support juvenile growth and... endpoint(s) in: Crooked Creek (46.3033, -123.6222); East Fork Grays River (46.4425, -123.4081); Fossil...

  14. 50 CFR 226.212 - Critical habitat for 13 Evolutionarily Significant Units (ESUs) of salmon and steelhead...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... floodplain connectivity to form and maintain physical habitat conditions and support juvenile growth and... endpoint(s) in: Crooked Creek (46.3033, -123.6222); East Fork Grays River (46.4425, -123.4081); Fossil...

  15. 50 CFR 226.212 - Critical habitat for 13 Evolutionarily Significant Units (ESUs) of salmon and steelhead...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... floodplain connectivity to form and maintain physical habitat conditions and support juvenile growth and... endpoint(s) in: Crooked Creek (46.3033, -123.6222); East Fork Grays River (46.4425, -123.4081); Fossil...

  16. A two-stage model in a Bayesian framework to estimate a survival endpoint in the presence of confounding by indication.

    PubMed

    Bellera, Carine; Proust-Lima, Cécile; Joseph, Lawrence; Richaud, Pierre; Taylor, Jeremy; Sandler, Howard; Hanley, James; Mathoulin-Pélissier, Simone

    2018-04-01

    Background: Biomarker series can indicate disease progression and predict clinical endpoints. When a treatment is prescribed depending on the biomarker, confounding by indication might be introduced if the treatment modifies the marker profile and risk of failure. Objective: Our aim was to highlight the flexibility of a two-stage model fitted within a Bayesian Markov Chain Monte Carlo framework. For this purpose, we monitored the prostate-specific antigens in prostate cancer patients treated with external beam radiation therapy. In the presence of rising prostate-specific antigens after external beam radiation therapy, salvage hormone therapy can be prescribed to reduce both the prostate-specific antigens concentration and the risk of clinical failure, an illustration of confounding by indication. We focused on the assessment of the prognostic value of hormone therapy and prostate-specific antigens trajectory on the risk of failure. Methods: We used a two-stage model within a Bayesian framework to assess the role of the prostate-specific antigens profile on clinical failure while accounting for a secondary treatment prescribed by indication. We modeled prostate-specific antigens using a hierarchical piecewise linear trajectory with a random changepoint. Residual prostate-specific antigens variability was expressed as a function of prostate-specific antigens concentration. Covariates in the survival model included hormone therapy, baseline characteristics, and individual predictions of the prostate-specific antigens nadir and timing and prostate-specific antigens slopes before and after the nadir as provided by the longitudinal process. Results: We showed positive associations between an increased prostate-specific antigens nadir, an earlier changepoint and a steeper post-nadir slope with an increased risk of failure. Importantly, we highlighted a significant benefit of hormone therapy, an effect that was not observed when the prostate-specific antigens trajectory was not accounted for in the survival model. Conclusion: Our modeling strategy was particularly flexible and accounted for multiple complex features of longitudinal and survival data, including the presence of a random changepoint and a time-dependent covariate.

  17. Coriolis-force-induced trajectory and endpoint deviations in the reaching movements of labyrinthine-defective subjects

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Lackner, J. R.

    2001-01-01

    When reaching movements are made during passive constant velocity body rotation, inertial Coriolis accelerations are generated that displace both movement paths and endpoints in their direction. These findings directly contradict equilibrium point theories of movement control. However, it has been argued that these movement errors relate to subjects sensing their body rotation through continuing vestibular activity and making corrective movements. In the present study, we evaluated the reaching movements of five labyrinthine-defective subjects (lacking both semicircular canal and otolith function) who cannot sense passive body rotation in the dark and five age-matched, normal control subjects. Each pointed 40 times in complete darkness to the location of a just extinguished visual target before, during, and after constant velocity rotation at 10 rpm in the center of a fully enclosed slow rotation room. All subjects, including the normal controls, always felt completely stationary when making their movements. During rotation, both groups initially showed large deviations of their movement paths and endpoints in the direction of the transient Coriolis forces generated by their movements. With additional per-rotation movements, both groups showed complete adaptation of movement curvature (restoration of straight-line reaches) during rotation. The labyrinthine-defective subjects, however, failed to regain fully accurate movement endpoints after 40 reaches, unlike the control subjects who did so within 11 reaches. Postrotation, both groups' movements initially had mirror image curvatures to their initial per-rotation reaches; the endpoint aftereffects were significantly different from prerotation baseline for the control subjects but not for the labyrinthine-defective subjects reflecting the smaller amount of endpoint adaptation they achieved during rotation. The labyrinthine-defective subjects' movements had significantly lower peak velocity, higher peak elevation, lower terminal velocity, and a more vertical touchdown than those of the control subjects. Thus the way their reaches terminated denied them the somatosensory contact cues necessary for full endpoint adaptation. These findings fully contradict equilibrium point theories of movement control. They emphasize the importance of contact cues in adaptive movement control and indicate that movement errors generated by Coriolis perturbations of limb movements reveal characteristics of motor planning and adaptation in both healthy and clinical populations.

  18. On the distribution of interspecies correlation for Markov models of character evolution on Yule trees.

    PubMed

    Mulder, Willem H; Crawford, Forrest W

    2015-01-07

    Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.
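    The covariance structure underlying these derivations is easy to check numerically for a fixed tree: under Brownian motion with rate sigma^2, two tips covary by sigma^2 times the time shared from the root to their most recent common ancestor. The sketch below (Python; the tree depth, split time, and rate are arbitrary illustrations, not the Yule-averaged quantities derived in the paper) verifies this by simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma2 = 0.5          # Brownian-motion rate (illustrative)
    T = 10.0              # root-to-tip depth of the (fixed) tree
    t_split = 6.0         # time from the root to the two tips' most recent common ancestor
    n_rep = 200_000

    # Displacement shared up to the split, then independent evolution afterwards.
    shared = rng.normal(0.0, np.sqrt(sigma2 * t_split), n_rep)
    tip_a = shared + rng.normal(0.0, np.sqrt(sigma2 * (T - t_split)), n_rep)
    tip_b = shared + rng.normal(0.0, np.sqrt(sigma2 * (T - t_split)), n_rep)

    print("empirical covariance:", np.cov(tip_a, tip_b)[0, 1])
    print("theoretical sigma^2 * shared time:", sigma2 * t_split)
    ```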

  19. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random field (MRF)-based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general-purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; and (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
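    A minimal sketch of the general distributed-memory pattern (not the authors' MPI-PMRF code): the root rank splits a noisy binary image into strips, each rank runs a simple MRF-style smoother (iterated conditional modes with a 4-neighbour Potts prior) on its strip, and the labels are gathered back. Block-boundary exchange, which a production framework would need, is deliberately omitted.

    ```python
    # A sketch only: run with  mpiexec -n 4 python mpi_mrf_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        rng = np.random.default_rng(2)
        truth = np.zeros((size * 32, 32), dtype=int)
        truth[:, 16:] = 1                                   # simple two-region "image"
        noisy = np.where(rng.random(truth.shape) < 0.1, 1 - truth, truth)
        blocks = np.array_split(noisy, size, axis=0)        # one horizontal strip per rank
    else:
        blocks = None

    block = comm.scatter(blocks, root=0)                    # distribute the data

    def icm_sweep(obs, labels, beta=1.5):
        """One iterated-conditional-modes sweep with a 4-neighbour Potts prior."""
        h, w = labels.shape
        new = labels.copy()
        for i in range(h):
            for j in range(w):
                best, best_e = new[i, j], np.inf
                for cand in (0, 1):
                    e = float(cand != obs[i, j])            # data (observation) term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * float(cand != new[ni, nj])
                    if e < best_e:
                        best, best_e = cand, e
                new[i, j] = best
        return new

    labels = block.copy()
    for _ in range(3):                                      # a few local smoothing sweeps
        labels = icm_sweep(block, labels)

    gathered = comm.gather(labels, root=0)                  # collect per-rank labels
    if rank == 0:
        segmented = np.vstack(gathered)
        print("segmented image shape:", segmented.shape)
    ```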

  20. Bayesian analysis of biogeography when the number of areas is large.

    PubMed

    Landis, Michael J; Matzke, Nicholas J; Moore, Brian R; Huelsenbeck, John P

    2013-11-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a "data-augmentation" approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea.
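    The kind of history being augmented can be sketched as a continuous-time Markov chain over per-area presence/absence, simulated event by event along a branch and, in this toy version, accepted only when it ends in the observed range. The rates, areas, and crude rejection step below are illustrative; they are not the BayArea machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_areas = 5
    d_rate, e_rate = 0.3, 0.2      # per-area dispersal (gain) and extinction (loss) rates

    def simulate_branch(start, t_total):
        """Simulate per-area gain/loss events along a branch of length t_total."""
        state = start.copy()
        events, t = [], 0.0
        while True:
            rates = np.where(state == 1, e_rate, d_rate)   # each area flips at its own rate
            total = rates.sum()
            t += rng.exponential(1.0 / total)              # exponential waiting time
            if t >= t_total:
                return state, events
            area = rng.choice(n_areas, p=rates / total)    # which area changes
            state[area] = 1 - state[area]
            events.append((round(t, 3), area, int(state[area])))

    # Crude rejection-style augmentation: resimulate until the branch ends in the
    # range observed at the tip.
    start = np.array([1, 0, 0, 0, 0])
    observed_end = np.array([1, 1, 0, 0, 0])
    while True:
        end, events = simulate_branch(start, t_total=2.0)
        if np.array_equal(end, observed_end):
            break
    print("accepted history with", len(events), "events:", events)
    ```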

  1. Core body temperature as adjunct to endpoint determination in murine median lethal dose testing of rattlesnake venom.

    PubMed

    Cates, Charles C; McCabe, James G; Lawson, Gregory W; Couto, Marcelo A

    2014-12-01

    Median lethal dose (LD50) testing in mice is the 'gold standard' for evaluating the lethality of snake venoms and the effectiveness of interventions. As part of a study to determine the murine LD50 of the venom of 3 species of rattlesnake, temperature data were collected in an attempt to more precisely define humane endpoints. We used an 'up-and-down' methodology of estimating the LD50 that involved serial intraperitoneal injection of predetermined concentrations of venom. By using a rectal thermistor probe, body temperature was taken once before administration and at various times after venom exposure. All but one mouse showed a marked, immediate, dose-dependent drop in temperature of approximately 2 to 6°C at 15 to 45 min after administration. The lowest temperature sustained by any surviving mouse was 33.2°C. Surviving mice generally returned to near-baseline temperatures within 2 h after venom administration, whereas mice that did not survive continued to show a gradual decline in temperature until death or euthanasia. Logistic regression modeling controlling for the effects of baseline core body temperature and venom type showed that core body temperature was a significant predictor of survival. Linear regression of the interaction of time and survival was used to estimate temperatures predictive of death at the earliest time point and demonstrated that venom type had a significant influence on temperature values. Overall, our data suggest that core body temperature is a useful adjunct to monitoring for endpoints in LD50 studies and may be a valuable predictor of survival in venom studies.
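    The survival analysis described here can be mimicked on synthetic data: fit a logistic regression of survival on post-injection core temperature, controlling for baseline temperature and venom type. The sketch below (Python with scikit-learn) uses invented data and coefficients purely to illustrate the modeling step, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)

    # Synthetic mice: survival probability rises with post-injection core temperature.
    n = 120
    baseline_temp = rng.normal(37.5, 0.3, n)
    venom_type = rng.integers(0, 3, n)                      # three species, coded 0-2
    post_temp = baseline_temp - rng.uniform(2.0, 6.0, n) + 0.3 * venom_type
    logit = -57.0 + 1.7 * post_temp                         # invented true relationship
    survived = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    X = np.column_stack([post_temp, baseline_temp, venom_type])
    model = LogisticRegression().fit(X, survived)
    print("coefficient on post-injection temperature:", model.coef_[0][0])
    print("predicted survival probability at 33.2 C:",
          model.predict_proba([[33.2, baseline_temp.mean(), 1]])[0, 1])
    ```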

  2. Root length of aquatic plant, Lemna minor L., as an optimal toxicity endpoint for biomonitoring of mining effluents.

    PubMed

    Gopalapillai, Yamini; Vigneault, Bernard; Hale, Beverley A

    2014-10-01

    Lemna minor, a free-floating macrophyte, is used for biomonitoring of mine effluent quality under the Metal Mining Effluent Regulations (MMER) of the Environmental Effects Monitoring (EEM) program in Canada and is known to be sensitive to trace metals commonly discharged in mine effluents, such as Ni. Environment Canada's standard toxicity testing protocol recommends frond count (FC) and dry weight (DW) as the 2 required toxicity endpoints; this is similar to other major protocols, such as those of the US Environmental Protection Agency (USEPA) and the Organisation for Economic Co-operation and Development (OECD), which both require frond growth or biomass endpoints. However, we suggest that, as for terrestrial plants, the average root length (RL) of aquatic plants is an optimal and relevant endpoint. As expected, the results demonstrate that RL is the ideal endpoint based on 3 criteria: accuracy (i.e., toxicological sensitivity to the contaminant), precision (i.e., lowest variance), and ecological relevance (to metal mining effluents). Roots are known to play a major role in nutrient uptake under low-nutrient conditions and thus have ecological relevance to freshwater from mining regions. Root length was the most sensitive and precise endpoint in this study, in which water chemistry varied greatly (pH and varying concentrations of Ca, Mg, Na, K, dissolved organic carbon, and an anthropogenic organic contaminant, sodium isopropyl xanthate) to match mining effluent ranges. Although frond count was a close second, dry weight proved to be an unreliable endpoint. We conclude that toxicity testing for this floating macrophyte should require average RL measurement as a primary endpoint. © 2014 SETAC.

  3. Evaluating the Toxicity of Cigarette Whole Smoke Solutions in an Air-Liquid-Interface Human In Vitro Airway Tissue Model.

    PubMed

    Cao, Xuefei; Muskhelishvili, Levan; Latendresse, John; Richter, Patricia; Heflich, Robert H

    2017-03-01

    Exposure to cigarette smoke causes a multitude of pathological changes leading to tissue damage and disease. Quantifying such changes in highly differentiated in vitro human tissue models may assist in evaluating the toxicity of tobacco products. In this methods development study, well-differentiated human air-liquid-interface (ALI) in vitro airway tissue models were used to assess toxicological endpoints relevant to tobacco smoke exposure. Whole mainstream smoke solutions (WSSs) were prepared from 2 commercial cigarettes (R60 and S60) that differ in smoke constituents when machine-smoked under International Organization for Standardization conditions. The airway tissue models were exposed apically to WSSs 4-h per day for 1-5 days. Cytotoxicity, tissue barrier integrity, oxidative stress, mucin secretion, and matrix metalloproteinase (MMP) excretion were measured. The treatments were not cytotoxic and had marginal effects on tissue barrier properties; however, other endpoints responded in time- and dose-dependent manners, with the R60 resulting in higher levels of response than the S60 for many endpoints. Based on the lowest effect dose, differences in response to the WSSs were observed for mucin induction and MMP secretion. Mitigation of mucin induction by cotreatment of cultures with N-acetylcysteine suggests that oxidative stress contributes to mucus hypersecretion. Overall, these preliminary results suggest that quantifying disease-relevant endpoints using ALI airway models is a potential tool for tobacco product toxicity evaluation. Additional research using tobacco samples generated under smoking machine conditions that more closely approximate human smoking patterns will inform further methods development. Published by Oxford University Press on behalf of the Society of Toxicology 2017. This work is written by US Government employees and is in the public domain in the US.

  4. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
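    Stripped of the gravitational-wave specifics, the core ingredient is a Markov chain Monte Carlo walk over signal parameters guided by a detection statistic. The sketch below (Python) runs a random-walk Metropolis chain over a single frequency parameter of a toy, sharply peaked statistic; the statistic, bandwidth, and step size are invented for illustration and merely stand in for the F statistic and its parameter space.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def log_stat(f, f_true=100.0, width=1e-3):
        """Toy log detection statistic, sharply peaked at the true signal frequency."""
        return -0.5 * ((f - f_true) / width) ** 2

    f = 100.01                      # starting point inside the search band
    step = 5e-4                     # random-walk proposal scale
    chain = []
    for _ in range(20000):
        proposal = f + rng.normal(0.0, step)
        # Metropolis acceptance: always accept uphill moves, sometimes downhill ones.
        if np.log(rng.random()) < log_stat(proposal) - log_stat(f):
            f = proposal
        chain.append(f)

    burned = np.array(chain[5000:])  # discard burn-in
    print("frequency estimate:", burned.mean(), "+/-", burned.std())
    ```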

  5. Cost-effectiveness of continuation maintenance pemetrexed after cisplatin and pemetrexed chemotherapy for advanced nonsquamous non-small-cell lung cancer: estimates from the perspective of the Chinese health care system.

    PubMed

    Zeng, Xiaohui; Peng, Liubao; Li, Jianhe; Chen, Gannong; Tan, Chongqing; Wang, Siying; Wan, Xiaomin; Ouyang, Lihui; Zhao, Ziying

    2013-01-01

    Continuation maintenance treatment with pemetrexed is approved by current clinical guidelines as a category 2A recommendation after induction therapy with cisplatin and pemetrexed chemotherapy (CP strategy) for patients with advanced nonsquamous non-small-cell lung cancer (NSCLC). However, the cost-effectiveness of the treatment remains unclear. We completed a trial-based assessment, from the perspective of the Chinese health care system, of the cost-effectiveness of maintenance pemetrexed treatment after a CP strategy for patients with advanced nonsquamous NSCLC. A Markov model was developed to estimate costs and benefits. It was based on a clinical trial that compared continuation maintenance pemetrexed therapy plus best supportive care (BSC) versus placebo plus BSC after a CP strategy for advanced nonsquamous NSCLC. Sensitivity analyses were conducted to assess the stability of the model. The base case analysis suggested that continuation maintenance pemetrexed therapy after a CP strategy would increase benefits over a 1-, 2-, 5-, or 10-year time horizon, with incremental costs of $183,589.06, $126,353.16, $124,766.68, and $124,793.12 per quality-adjusted life-year gained, respectively. The variable to which the cost-effectiveness analysis was most sensitive was the utility of the progression-free survival (PFS) state, followed by the proportion of patients with postdiscontinuation therapy in both arms, the proportion of BSC costs in the PFS versus the progressed survival state, and the cost of pemetrexed. Probabilistic sensitivity analysis indicated that the probability of continuation maintenance pemetrexed therapy plus BSC being cost-effective was zero. One-way and probabilistic sensitivity analyses revealed that the Markov model was robust. Continuation maintenance with pemetrexed after a CP strategy for patients with advanced nonsquamous NSCLC is not cost-effective based on a recent clinical trial. Decreasing the price or adjusting the dosage of pemetrexed may be a better option for meeting the treatment demands of Chinese patients. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
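    The skeleton of such an analysis is a Markov cohort model: a transition matrix over health states, per-cycle costs and utilities, discounting, and an incremental cost-effectiveness ratio comparing two strategies. The sketch below (Python) uses three states and placeholder inputs, not the trial-derived transition probabilities, costs, or utilities of this study.

    ```python
    import numpy as np

    def run_cohort(P, costs, utilities, n_cycles=120, annual_disc=0.03):
        """Monthly-cycle Markov cohort model; returns discounted cost and QALYs."""
        disc = annual_disc / 12.0
        state = np.array([1.0, 0.0, 0.0])            # everyone starts progression-free
        total_cost = total_qaly = 0.0
        for k in range(n_cycles):
            d = 1.0 / (1.0 + disc) ** k
            total_cost += d * state @ costs
            total_qaly += d * state @ utilities / 12.0   # utilities are per year of state
            state = state @ P                            # advance the cohort one cycle
        return total_cost, total_qaly

    # States: [progression-free, progressed, dead]; each row sums to 1.
    P_maint = np.array([[0.95, 0.04, 0.01],
                        [0.00, 0.95, 0.05],
                        [0.00, 0.00, 1.00]])
    P_bsc = np.array([[0.92, 0.06, 0.02],
                      [0.00, 0.95, 0.05],
                      [0.00, 0.00, 1.00]])

    costs_maint = np.array([4000.0, 2500.0, 0.0])    # per-cycle costs (placeholder currency)
    costs_bsc = np.array([500.0, 2500.0, 0.0])
    utilities = np.array([0.75, 0.55, 0.0])          # per-year utility weights

    c1, q1 = run_cohort(P_maint, costs_maint, utilities)
    c0, q0 = run_cohort(P_bsc, costs_bsc, utilities)
    print("incremental cost per QALY gained:", round((c1 - c0) / (q1 - q0)))
    ```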

  6. Evaluating principal surrogate endpoints with time-to-event data accounting for time-varying treatment efficacy.

    PubMed

    Gabriel, Erin E; Gilbert, Peter B

    2014-04-01

    Principal surrogate (PS) endpoints are relatively inexpensive and easy-to-measure study outcomes that can be used to reliably predict treatment effects on clinical endpoints of interest. Few statistical methods for assessing the validity of potential PSs utilize time-to-event clinical endpoint information, and to our knowledge none allow for the characterization of time-varying treatment effects. We introduce the time-dependent and surrogate-dependent treatment efficacy curve, TE(t|s), and a new augmented trial design for assessing the quality of a biomarker as a PS. We propose a novel Weibull model and an estimated maximum likelihood method for estimation of the TE(t|s) curve. We describe the operating characteristics of our methods via simulations. We analyze data from the Diabetes Control and Complications Trial, in which we find evidence of a biomarker with value as a PS.
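    One common way to read a curve like TE(t|s) is as one minus the ratio of cumulative risks by time t under treatment versus placebo at surrogate level s. The sketch below (Python) computes such a curve under Weibull time-to-event distributions whose treatment-arm scale improves with s; the parameterization is illustrative and is not the estimated-maximum-likelihood approach of the paper.

    ```python
    import numpy as np

    def weibull_cum_risk(t, shape, scale):
        """Cumulative incidence of the clinical endpoint under a Weibull model."""
        return 1.0 - np.exp(-((t / scale) ** shape))

    def te_curve(t, s, shape=1.3):
        """Illustrative TE(t|s): one minus the treatment/placebo cumulative risk ratio,
        with the treatment-arm scale improving as the surrogate level s increases."""
        risk_placebo = weibull_cum_risk(t, shape, scale=5.0)
        risk_treated = weibull_cum_risk(t, shape, scale=5.0 * (1.0 + 0.8 * s))
        return 1.0 - risk_treated / risk_placebo

    times = np.linspace(0.5, 10.0, 5)
    for s in (0.0, 0.5, 1.0):
        print(f"TE(t|s={s}):", np.round(te_curve(times, s), 3))
    ```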

  7. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
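    The architecture-based idea can be illustrated with a small absorbing discrete-time Markov chain in the spirit of classical component-based reliability models: components are transient states, control-flow transfer probabilities are scaled by per-component reliabilities, and the tool's reliability is the probability of absorption in the success state. All numbers below are placeholders, not COSMIC-FFP-derived values.

    ```python
    import numpy as np

    # Transient states = tool components; absorbing states = {success, failure}.
    # From component i, with reliability R[i] control moves on according to W[i, :]
    # (or to the success state); with probability 1 - R[i] the component fails.
    R = np.array([0.99, 0.97, 0.995])        # per-component reliabilities (placeholders)
    W = np.array([[0.0, 0.7, 0.3],           # control-flow transfer probabilities
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
    to_success = np.array([0.0, 0.0, 1.0])   # only component 2 hands control back to the user

    Q = R[:, None] * W                       # transient-to-transient block of the DTMC
    b_success = R * to_success               # transient-to-success probabilities
    N = np.linalg.inv(np.eye(len(R)) - Q)    # fundamental matrix of the absorbing chain
    reliability = (N @ b_success)[0]         # execution starts in component 0
    print("predicted tool reliability:", round(float(reliability), 4))
    ```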

  8. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  9. Performance Analyses and Improvements for the IEEE 802.15.4 CSMA/CA Scheme with Heterogeneous Buffered Conditions

    PubMed Central

    Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng

    2012-01-01

    Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes—OSTS/BSTS (One Service a Time Scheme/Bulk Service a Time Scheme)—are proposed in this paper to improve the behaviors of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which contains two modified semi-Markov chains and a macro-Markov chain combined with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are bestowed with non-preemptive priority over each other, instead of prioritization. Then, throughput, packet delay and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted based on the overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also proposed. Analysis and simulation results show that delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior to others in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results. PMID:22666076

  10. Performance analyses and improvements for the IEEE 802.15.4 CSMA/CA scheme with heterogeneous buffered conditions.

    PubMed

    Zhu, Jianping; Tao, Zhengsu; Lv, Chunfeng

    2012-01-01

    Studies of the IEEE 802.15.4 Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme have received considerable attention recently, with most of these studies focusing on homogeneous or saturated traffic. Two novel transmission schemes-OSTS/BSTS (One Service a Time Scheme/Bulk Service a Time Scheme)-are proposed in this paper to improve the behaviors of time-critical buffered networks with heterogeneous unsaturated traffic. First, we propose a model which contains two modified semi-Markov chains and a macro-Markov chain combined with the theory of M/G/1/K queues to evaluate the characteristics of these two improved CSMA/CA schemes, in which traffic arrivals and accessing packets are bestowed with non-preemptive priority over each other, instead of prioritization. Then, throughput, packet delay and energy consumption of unsaturated, unacknowledged IEEE 802.15.4 beacon-enabled networks are predicted based on the overall point of view which takes the dependent interactions of different types of nodes into account. Moreover, performance comparisons of these two schemes with other non-priority schemes are also proposed. Analysis and simulation results show that delay and fairness of our schemes are superior to those of other schemes, while throughput and energy efficiency are superior to others in more heterogeneous situations. Comprehensive simulations demonstrate that the analysis results of these models match well with the simulation results.

  11. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, and therefore the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
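    The effect of time-interval omission is easy to reproduce: simulate alternating exponential open/closed dwell times, then merge any sojourn shorter than the dead time into the surrounding apparent interval. The sketch below (Python, with invented rate constants and dead time) shows how the apparent mean open dwell time is inflated relative to the true one.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    rate = {"open": 50.0, "closed": 200.0}    # 1 / mean dwell time per state, in s^-1
    dead_time = 0.2e-3                        # sojourns shorter than this go undetected

    def simulate_dwells(n=5000, start="closed"):
        """Alternating exponential open/closed dwell times of a two-state channel."""
        state, dwells = start, []
        for _ in range(n):
            dwells.append((state, rng.exponential(1.0 / rate[state])))
            state = "open" if state == "closed" else "closed"
        return dwells

    def apply_omission(dwells, tau):
        """Merge undetected brief sojourns into the surrounding apparent interval."""
        merged = [list(dwells[0])]
        for state, dur in dwells[1:]:
            if dur < tau or state == merged[-1][0]:
                merged[-1][1] += dur          # missed event: the apparent dwell is extended
            else:
                merged.append([state, dur])
        return merged

    true_dwells = simulate_dwells()
    apparent = apply_omission(true_dwells, dead_time)
    true_open = np.mean([d for s, d in true_dwells if s == "open"])
    app_open = np.mean([d for s, d in apparent if s == "open"])
    print(f"mean open dwell: true {true_open * 1e3:.2f} ms, apparent {app_open * 1e3:.2f} ms")
    ```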

  12. Effects of Chronic Social Defeat Stress on Sleep and Circadian Rhythms Are Mitigated by Kappa-Opioid Receptor Antagonism

    PubMed Central

    Wells, Audrey M.; Ridener, Elysia; Kim, Woori; Carroll, F. Ivy; Cohen, Bruce M.

    2017-01-01

    Stress plays a critical role in the neurobiology of mood and anxiety disorders. Sleep and circadian rhythms are affected in many of these conditions. Here we examined the effects of chronic social defeat stress (CSDS), an ethological form of stress, on sleep and circadian rhythms. We exposed male mice implanted with wireless telemetry transmitters to a 10 day CSDS regimen known to produce anhedonia (a depressive-like effect) and social avoidance (an anxiety-like effect). EEG, EMG, body temperature, and locomotor activity data were collected continuously during the CSDS regimen and a 5 day recovery period. CSDS affected numerous endpoints, including paradoxical sleep (PS) and slow-wave sleep (SWS), as well as the circadian rhythmicity of body temperature and locomotor activity. The magnitude of the effects increased with repeated stress, and some changes (PS bouts, SWS time, body temperature, locomotor activity) persisted after the CSDS regimen had ended. CSDS also altered mRNA levels of the circadian rhythm-related gene mPer2 within brain areas that regulate motivation and emotion. Administration of the κ-opioid receptor (KOR) antagonist JDTic (30 mg/kg, i.p.) before CSDS reduced stress effects on both sleep and circadian rhythms, or hastened their recovery, and attenuated changes in mPer2. Our findings show that CSDS produces persistent disruptions in sleep and circadian rhythmicity, mimicking attributes of stress-related conditions as they appear in humans. The ability of KOR antagonists to mitigate these disruptions is consistent with previously reported antistress effects. Studying homologous endpoints across species may facilitate the development of improved treatments for psychiatric illness. SIGNIFICANCE STATEMENT Stress plays a critical role in the neurobiology of mood and anxiety disorders. We show that chronic social defeat stress in mice produces progressive alterations in sleep and circadian rhythms that resemble features of depression as it appears in humans. Whereas some of these alterations recover quickly upon cessation of stress, others persist. Administration of a kappa-opioid receptor (KOR) antagonist reduced stress effects or hastened recovery, consistent with the previously reported antistress effects of this class of agents. Use of endpoints, such as sleep and circadian rhythm, that are homologous across species will facilitate the implementation of translational studies that better predict clinical outcomes in humans, improve the success of clinical trials, and facilitate the development of more effective therapeutics. PMID:28674176

  13. Randomized Controlled Trials to Define Viral Load Thresholds for Cytomegalovirus Pre-Emptive Therapy

    PubMed Central

    Griffiths, Paul D.; Rothwell, Emily; Raza, Mohammed; Wilmore, Stephanie; Doyle, Tomas; Harber, Mark; O’Beirne, James; Mackinnon, Stephen; Jones, Gareth; Thorburn, Douglas; Mattes, Frank; Nebbia, Gaia; Atabani, Sowsan; Smith, Colette; Stanton, Anna; Emery, Vincent C.

    2016-01-01

    Background To help decide when to start and when to stop pre-emptive therapy for cytomegalovirus infection, we conducted two open-label randomized controlled trials in renal, liver and bone marrow transplant recipients in a single centre where pre-emptive therapy is indicated if viraemia exceeds 3000 genomes/ml (2520 IU/ml) of whole blood. Methods Patients with two consecutive viraemia episodes each below 3000 genomes/ml were randomized to continue monitoring or to immediate treatment (Part A). A separate group of patients with viral load greater than 3000 genomes/ml was randomized to stop pre-emptive therapy when two consecutive levels less than 200 genomes/ml (168 IU/ml) or less than 3000 genomes/ml were obtained (Part B). For both parts, the primary endpoint was the occurrence of a separate episode of viraemia requiring treatment because it was greater than 3000 genomes/ml. Results In Part A, the primary endpoint was not significantly different between the two arms; 18/32 (56%) in the monitor arm had viraemia greater than 3000 genomes/ml compared to 10/27 (37%) in the immediate treatment arm (p = 0.193). However, the time to developing an episode of viraemia greater than 3000 genomes/ml was significantly delayed among those randomized to immediate treatment (p = 0.022). In Part B, the primary endpoint was not significantly different between the two arms; 19/55 (35%) in the less than 200 genomes/ml arm subsequently had viraemia greater than 3000 genomes/ml compared to 23/51 (45%) among those randomized to stop treatment in the less than 3000 genomes/ml arm (p = 0.322). However, the duration of antiviral treatment was significantly shorter (p = 0.0012) in those randomized to stop treatment when viraemia was less than 3000 genomes/ml. Discussion The results illustrate that patients have continuing risks for CMV infection with limited time available for intervention. We see no need to alter current rules for stopping or starting pre-emptive therapy. PMID:27684379

  14. Assessing significance in a Markov chain without mixing.

    PubMed

    Chikina, Maria; Frieze, Alan; Pegden, Wesley

    2017-03-14

    We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.

  15. Assessing significance in a Markov chain without mixing

    PubMed Central

    Chikina, Maria; Frieze, Alan; Pegden, Wesley

    2017-01-01

    We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting. PMID:28246331
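    A bare-bones version of the test can be written in a few lines: run a reversible (here, lazy cycle) random walk from the presented state, measure the fraction ε of visited states whose value is at least as extreme as the presented one, and report significance at √(2ε). The chain, value function, and walk length below are illustrative stand-ins, not the districting application.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # A small reversible chain: a lazy random walk on a cycle of n states,
    # with an arbitrary value attached to each state.
    n = 200
    values = rng.normal(size=n)
    presented = int(np.argmax(values))        # a suspiciously extreme presented state

    def epsilon_outlier(start, n_steps=100_000):
        """Walk from the presented state; return the fraction eps of visited states whose
        value is at least as extreme, and the significance bound sqrt(2 * eps)."""
        state, extreme = start, 0
        for _ in range(n_steps):
            r = rng.random()
            if r < 0.25:
                state = (state - 1) % n
            elif r < 0.5:
                state = (state + 1) % n
            # otherwise stay put: the lazy step keeps the walk aperiodic and reversible
            if values[state] >= values[start]:
                extreme += 1
        eps = extreme / n_steps
        return eps, np.sqrt(2.0 * eps)

    eps, p_bound = epsilon_outlier(presented)
    print(f"epsilon = {eps:.5f}; significant at p = sqrt(2*eps) = {p_bound:.4f}")
    ```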

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Intravaia, F.; Behunin, R. O.; Henkel, C.

    Here, we discuss the failure of the Markov approximation in the description of atom-surface fluctuation-induced interactions, both in equilibrium (Casimir-Polder forces) and out of equilibrium (quantum friction). Using general theoretical arguments, we show that the Markov approximation can lead to erroneous predictions of such phenomena with regard to both strength and functional dependencies on system parameters. Particularly, we show that the long-time power-law tails of two-time dipole correlations and their corresponding low-frequency behavior, neglected in the Markovian limit, affect the prediction of the force. These findings highlight the importance of non-Markovian effects in dispersion interactions.

  17. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
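    The simplest of these computations, the expected number of (possibly overlapping) occurrences of a pattern in a stationary first-order Markov text, takes only a few lines; the transition matrix and pattern below are illustrative, and the sketch is not the SPatt implementation.

    ```python
    import numpy as np

    # First-order Markov model over a 4-letter alphabet (illustrative transition matrix).
    alphabet = "ACGT"
    P = np.array([[0.30, 0.20, 0.30, 0.20],
                  [0.25, 0.25, 0.25, 0.25],
                  [0.20, 0.30, 0.20, 0.30],
                  [0.25, 0.25, 0.25, 0.25]])

    # Stationary distribution: left eigenvector of P associated with eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    pi /= pi.sum()

    def expected_count(pattern, n):
        """Expected number of (possibly overlapping) occurrences of `pattern`
        in a stationary Markov text of length n."""
        idx = [alphabet.index(c) for c in pattern]
        p = pi[idx[0]]
        for a, b in zip(idx, idx[1:]):
            p *= P[a, b]
        return (n - len(pattern) + 1) * p

    print("E[occurrences of 'ACGT' in 10,000 letters]:", round(expected_count("ACGT", 10_000), 2))
    ```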

  18. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  19. Ascertainment-adjusted parameter estimation approach to improve robustness against misspecification of health monitoring methods

    NASA Astrophysics Data System (ADS)

    Juesas, P.; Ramasso, E.

    2016-12-01

    Condition monitoring aims at ensuring system safety, a fundamental requirement for industrial applications that has become an inescapable societal demand. This objective is attained by instrumenting the system and developing data analytics methods, such as statistical models, able to turn data into relevant knowledge. One difficulty is correctly estimating the parameters of those methods from time-series data. This paper suggests the use of the Weighted Distribution Theory together with the Expectation-Maximization algorithm to improve parameter estimation in statistical models with latent variables, with an application to health monitoring under uncertainty. The improvement of the estimates is made possible by incorporating uncertain and possibly noisy prior knowledge on the latent variables in a sound manner. The latent variables are exploited to build a degradation model of a dynamical system represented as a sequence of discrete states. Examples on Gaussian mixture models and hidden Markov models (HMMs) with discrete and continuous outputs are presented on both simulated data and benchmarks using the turbofan engine datasets. A focus on the application of a discrete HMM to health monitoring under uncertainty emphasizes the interest of the proposed approach in the presence of different operating conditions and fault modes. It is shown that the proposed model exhibits high robustness in the presence of noisy and uncertain prior knowledge.

  20. Limiting Distributions of Functionals of Markov Chains.

    DTIC Science & Technology

    1984-08-01

    limiting distributions; periodic nonhomogeneous Poisson processes. ... nonhomogeneous Poisson processes is of interest in itself. The problem considered in this paper is of interest in the theory of partially observable ... where we obtain the limiting distribution of the interevent times. Key Words: Markov Chains, Limiting Distributions, Periodic Nonhomogeneous Poisson Processes
