Building Higher-Order Markov Chain Models with EXCEL
ERIC Educational Resources Information Center
Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.
2004-01-01
Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
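The paper's EXCEL implementation is not reproduced in the abstract, but the core step of any higher-order Markov chain model is estimating transition probabilities conditioned on the previous few symbols. A minimal maximum-likelihood sketch (frequency counts, not the authors' algorithm):

```python
from collections import Counter, defaultdict

def fit_markov(seq, order=2):
    """Estimate P(next symbol | previous `order` symbols) for a categorical
    sequence by maximum likelihood, i.e. relative frequencies of contexts."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        context = tuple(seq[i - order:i])
        counts[context][seq[i]] += 1
    probs = {}
    for context, c in counts.items():
        total = sum(c.values())
        probs[context] = {s: n / total for s, n in c.items()}
    return probs

seq = list("AABABBAABABBAABB")   # hypothetical categorical sequence
model = fit_markov(seq, order=2)
```

Each context maps to a conditional distribution over next symbols; the number of contexts grows exponentially in the order, which is why the paper's parsimonious parametrization (solvable in a worksheet) matters.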
Predictive Rate-Distortion for Infinite-Order Markov Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2016-06-01
Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.
State space orderings for Gauss-Seidel in Markov chains revisited
Dayar, T.
1996-12-31
Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). In doing all this, the aim is to gain an insight for faster converging orderings. Results from a variety of applications improve the confidence in the algorithm.
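The Gauss-Seidel iteration the paper analyzes solves for the stationary vector of the chain; the ordering of states changes which entries are used "fresh" within a sweep and hence the convergence rate. A minimal sketch of the iteration itself (assuming an irreducible chain with P[i][i] < 1, and ignoring the ordering question the paper studies):

```python
def gauss_seidel_stationary(P, iters=200):
    """Gauss-Seidel sweeps for the stationary vector pi of a Markov chain
    with row-stochastic transition matrix P, solving pi = pi P in place.
    Updated entries of pi are reused immediately within each sweep."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(pi[j] * P[j][i] for j in range(n) if j != i)
            pi[i] = s / (1.0 - P[i][i])   # assumes P[i][i] < 1
        total = sum(pi)                    # renormalize each sweep
        pi = [x / total for x in pi]
    return pi

P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = gauss_seidel_stationary(P)
```

Permuting the states (and P accordingly) before iterating is exactly the degree of freedom the ordering algorithm in the paper exploits.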
Kinetics and thermodynamics of first-order Markov chain copolymerization
NASA Astrophysics Data System (ADS)
Gaspard, P.; Andrieux, D.
2014-07-01
We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.
First and second order semi-Markov chains for wind speed modeling
NASA Astrophysics Data System (ADS)
Prattico, F.; Petroni, F.; D'Amico, G.
2012-04-01
Markov chain with different number of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMP) are a wide class of stochastic processes which generalize both Markov chains and renewal processes. Their main advantage is that any type of waiting-time distribution can be used to model the time to a transition from one state to another. This greater flexibility has a price to pay: the model has more parameters, which in turn require more data to estimate. Data availability is not an issue in wind speed studies; therefore, semi-Markov models can be used in a statistically efficient way. In this work we present three different semi-Markov chain models: the first is a first-order SMP where the transition probabilities between two speed states (at times Tn and Tn-1) depend on the initial state (the state at Tn-1), the final state (the state at Tn) and the waiting time (given by t=Tn-Tn-1); the second is a second-order SMP where the transition probabilities also depend on the state the wind speed was in before the initial state (the state at Tn-2); and the last is also a second-order SMP where the transition probabilities depend on the three states at Tn-2, Tn-1 and Tn and on the waiting times t_1=Tn-1-Tn-2 and t_2=Tn-Tn-1. The three models are used to generate synthetic wind speed time series by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare the statistical properties of the proposed models with those of real data and with a time series generated through a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling
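The defining move of the semi-Markov models above is to draw a waiting time from a transition-specific distribution before each state change. A minimal first-order sketch (hypothetical two-regime example, not the paper's fitted model):

```python
import random

def simulate_semi_markov(trans, waiting, start, steps, seed=0):
    """Generate a synthetic state sequence from a first-order semi-Markov
    chain: each transition draws the next state from `trans`, then holds it
    for a waiting time drawn from that transition's `waiting` sampler."""
    rng = random.Random(seed)
    path, state = [], start
    while len(path) < steps:
        nxt = rng.choices(list(trans[state]),
                          weights=list(trans[state].values()))[0]
        hold = waiting[(state, nxt)](rng)   # transition-specific waiting time
        path.extend([nxt] * hold)
        state = nxt
    return path[:steps]

# hypothetical 2-state example: "low"/"high" wind regimes, short uniform holds
trans = {"low": {"low": 0.7, "high": 0.3}, "high": {"low": 0.4, "high": 0.6}}
waiting = {pair: (lambda rng: 1 + rng.randrange(3)) for pair in
           [("low", "low"), ("low", "high"), ("high", "low"), ("high", "high")]}
series = simulate_semi_markov(trans, waiting, "low", 50)
```

Replacing the waiting-time samplers with empirical distributions estimated per transition is what gives the SMP its extra fidelity over a plain Markov chain.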
Multiple pattern matching: a Markov chain approach.
Lladser, Manuel E; Betterton, M D; Knight, Rob
2008-01-01
RNA motifs typically consist of short, modular patterns that include base pairs formed within and between modules. Estimating the abundance of these patterns is of fundamental importance for assessing the statistical significance of matches in genomewide searches, and for predicting whether a given function has evolved many times in different species or arose from a single common ancestor. In this manuscript, we review in an integrated and self-contained manner some basic concepts of automata theory, generating functions and transfer matrix methods that are relevant to pattern analysis in biological sequences. We formalize, in a general framework, the concept of Markov chain embedding to analyze patterns in random strings produced by a memoryless source. This conceptualization, together with the capability of automata to recognize complicated patterns, allows a systematic analysis of problems related to the occurrence and frequency of patterns in random strings. The applications we present focus on the concept of synchronization of automata, as well as automata used to search for a finite number of keywords (including sets of patterns generated according to base pairing rules) in a general text. PMID:17668213
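The Markov chain embedding the review formalizes can be sketched for a single keyword: build a KMP-style automaton whose states track the longest matched prefix, make the accepting state absorbing, and push the memoryless source's distribution through it. A minimal sketch (single pattern only; the paper's framework handles sets of patterns and base-pairing rules):

```python
def pattern_hit_probability(pattern, probs, n):
    """Markov chain embedding: probability that `pattern` occurs at least once
    in a random string of length n from a memoryless source `probs`.
    States 0..len(pattern) track the longest prefix of `pattern` matched;
    the final state is absorbing."""
    m = len(pattern)

    def border(s):
        # length of the longest proper prefix of s that is also a suffix
        for k in range(len(s) - 1, 0, -1):
            if s[:k] == s[-k:]:
                return k
        return 0

    def step(state, sym):
        # automaton transition: extend the match or fall back via borders
        while True:
            if state < m and pattern[state] == sym:
                return state + 1
            if state == 0:
                return 0
            state = border(pattern[:state])

    dist = [1.0] + [0.0] * m          # start with nothing matched
    for _ in range(n):
        nxt = [0.0] * (m + 1)
        nxt[m] = dist[m]              # accepting state is absorbing
        for state in range(m):
            for sym, p in probs.items():
                nxt[step(state, sym)] += dist[state] * p
        dist = nxt
    return dist[m]

p = pattern_hit_probability("HH", {"H": 0.5, "T": 0.5}, 3)
```

For "HH" in 3 fair coin flips the embedded chain reproduces the exact enumeration (HHH, HHT, THH out of 8 strings).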
Post processing with first- and second-order hidden Markov models
NASA Astrophysics Data System (ADS)
Taghva, Kazem; Poudel, Srijana; Malreddy, Spandana
2013-01-01
In this paper, we present the implementation and evaluation of first-order and second-order hidden Markov models to identify and correct OCR errors in the post-processing of books. Our experiments show that the first-order model corrects approximately 10% of the errors with 100% precision, while the second-order model corrects a higher percentage of errors with much lower precision.
Lee, Lee-Min; Jean, Fu-Rong
2016-08-01
The hidden Markov models have been widely applied to systems with sequential data. However, the conditional independence of the state outputs will limit the output of a hidden Markov model to be a piecewise constant random sequence, which is not a good approximation for many real processes. In this paper, a high-order hidden Markov model for piecewise linear processes is proposed to better approximate the behavior of a real process. A parameter estimation method based on the expectation-maximization algorithm was derived for the proposed model. Experiments on speech recognition of noisy Mandarin digits were conducted to examine the effectiveness of the proposed method. Experimental results show that the proposed method can reduce the recognition error rate compared to a baseline hidden Markov model. PMID:27586781
Fitting optimum order of Markov chain models for daily rainfall occurrences in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Deni, Sayang Mohd; Jemain, Abdul Aziz; Ibrahim, Kamarulzaman
2009-06-01
The analysis of the daily rainfall occurrence behavior is becoming more important, particularly in water-related sectors. Many studies have identified a more comprehensive pattern of the daily rainfall behavior based on the Markov chain models. One of the aims in fitting the Markov chain models of various orders to the daily rainfall occurrence is to determine the optimum order. In this study, the optimum order of the Markov chain models for a 5-day sequence will be examined in each of the 18 rainfall stations in Peninsular Malaysia, which have been selected based on the availability of the data, using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The identification of the most appropriate order in describing the distribution of the wet (dry) spells for each of the rainfall stations is obtained using the Kolmogorov-Smirnov goodness-of-fit test. It is found that the optimum order varies according to the levels of threshold used (e.g., either 0.1 or 10.0 mm), the locations of the region and the types of monsoon seasons. At most stations, the Markov chain models of a higher order are found to be optimum for rainfall occurrence during the northeast monsoon season for both levels of threshold. However, it is generally found that regardless of the monsoon seasons, the first-order model is optimum for the northwestern and eastern regions of the peninsula when the threshold level of 10.0 mm is considered. The analysis indicates that the first order of the Markov chain model is found to be most appropriate for describing the distribution of wet spells, whereas the higher-order models are found to be adequate for the dry spells in most of the rainfall stations for both threshold levels and monsoon seasons.
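The AIC/BIC order selection used here (and in the Nigerian study below) reduces to: fit each candidate order by maximum likelihood, then penalize by the number of free parameters. A minimal sketch for a binary wet/dry sequence (synthetic data, not the Malaysian station records):

```python
import math
from collections import Counter, defaultdict

def markov_order_aic_bic(seq, order, n_symbols=2):
    """Maximized log-likelihood, AIC and BIC for a Markov chain of the given
    order fitted to a categorical (here 0/1 wet-dry) sequence."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    ll = 0.0
    for ctx, c in counts.items():
        total = sum(c.values())
        for s, n in c.items():
            ll += n * math.log(n / total)       # MLE plug-in log-likelihood
    k = (n_symbols ** order) * (n_symbols - 1)  # free parameters
    n_obs = len(seq) - order
    return ll, -2 * ll + 2 * k, -2 * ll + k * math.log(n_obs)

wet_dry = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1] * 5
best = min(range(3), key=lambda r: markov_order_aic_bic(wet_dry, r)[2])
```

The order minimizing BIC (index 2 of the returned tuple) is the selected one; swapping in AIC (index 1) typically selects an equal or higher order, consistent with both papers' observation that AIC estimates are greater than or equal to BIC estimates.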
Heisenberg picture approach to the stability of quantum Markov systems
Pan, Yu; Miao, Zibo; Amini, Hadis; Gough, John; Ugrinovskii, Valery; James, Matthew R.
2014-06-15
Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
Using higher-order Markov models to reveal flow-based communities in networks
NASA Astrophysics Data System (ADS)
Salnikov, Vsevolod; Schaub, Michael T.; Lambiotte, Renaud
2016-03-01
Complex systems made of interacting elements are commonly abstracted as networks, in which nodes are associated with dynamic state variables, whose evolution is driven by interactions mediated by the edges. Markov processes have been the prevailing paradigm to model such a network-based dynamics, for instance in the form of random walks or other types of diffusions. Despite the success of this modelling perspective for numerous applications, it represents an over-simplification of several real-world systems. Importantly, simple Markov models lack memory in their dynamics, an assumption often not realistic in practice. Here, we explore possibilities to enrich the system description by means of second-order Markov models, exploiting empirical pathway information. We focus on the problem of community detection and show that standard network algorithms can be generalized in order to extract novel temporal information about the system under investigation. We also apply our methodology to temporal networks, where we can uncover communities shaped by the temporal correlations in the system. Finally, we discuss relations of the framework of second order Markov processes and the recently proposed formalism of using non-backtracking matrices for community detection.
The optimum order of a Markov chain model for daily rainfall in Nigeria
NASA Astrophysics Data System (ADS)
Jimoh, O. D.; Webster, P.
1996-11-01
Markov type models are often used to describe the occurrence of daily rainfall. Although models of Order 1 have been successfully employed, there remains uncertainty concerning the optimum order for such models. This paper is concerned with estimation of the optimum order of Markov chains and, in particular, the use of the objective criteria of the Akaike and Bayesian Information Criteria (AIC and BIC, respectively). Using daily rainfall series for five stations in Nigeria, it has been found that the AIC and BIC estimates vary with month as well as the value of the rainfall threshold used to define a wet day. There is no apparent system to this variation, although AIC estimates are consistently greater than or equal to BIC estimates, with values of the latter limited to zero or unity. The optimum order is also investigated through generation of synthetic sequences of wet and dry days using the transition matrices of zero-, first- and second-order Markov chains. It was found that the first-order model is superior to the zero-order model in representing the characteristics of the historical sequence as judged using frequency duration curves. There was no discernible difference between the model performance for first- and second-order models. There was no seasonal variation in the model performance, which contrasts with the optimum models identified using AIC and BIC estimates. It is concluded that caution is needed with the use of objective criteria for determining the optimum order of the Markov model and that the use of frequency duration curves can provide a robust alternative method of model identification. Comments are also made on the importance of record length and non-stationarity for model identification.
A new approach to simulating stream isotope dynamics using Markov switching autoregressive models
NASA Astrophysics Data System (ADS)
Birkel, Christian; Paroli, Roberta; Spezia, Luigi; Dunn, Sarah M.; Tetzlaff, Doerthe; Soulsby, Chris
2012-09-01
In this study we applied Markov switching autoregressive models (MSARMs) as a proof-of-concept to analyze the temporal dynamics and statistical characteristics of the time series of two conservative water isotopes, deuterium (δ2H) and oxygen-18 (δ18O), in daily stream water samples over two years in a small catchment in eastern Scotland. MSARMs enabled us to explicitly account for the identified non-linear, non-Normal and non-stationary isotope dynamics of both time series. The hidden states of the Markov chain could also be associated with meteorological and hydrological drivers identifying the short (event) and longer-term (inter-event) transport mechanisms for both isotopes. Inference was based on the Bayesian approach performed through Markov Chain Monte Carlo algorithms, which also allowed us to deal with a high rate of missing values (17%). Although it is usually assumed that both isotopes are conservative and exhibit similar dynamics, δ18O showed somewhat different time series characteristics. Both isotopes were best modelled with two hidden states, but δ18O demanded autoregressions of the first order, whereas δ2H of the second. Moreover, both the dynamics of observations and the hidden states of the two isotopes were explained by two different sets of covariates. Consequently use of the two tracers for transit time modelling and hydrograph separation may result in different interpretations on the functioning of a catchment system.
Meeting the NICE requirements: a Markov model approach.
Mauskopf, J
2000-01-01
The National Institute of Clinical Excellence (NICE) was established in the United Kingdom in April 1999 to issue guidance for the National Health Service (NHS) on the use of selective new health care interventions. This article describes the NICE requirements for both incidence-based cost-effectiveness analyses and prevalence-based estimates of the aggregate NHS impact of the new drug. The article demonstrates how both of these requirements can be met using Markov modeling techniques. A Markov model for a hypothetical new treatment for HIV infection is used as an illustration of how to generate the estimates that are required by NICE. The article concludes with a discussion of the difficulties of obtaining data of sufficient quality to include in the Markov model to ensure that the submission meets all the NICE requirements and is credible to the NICE advisory board. PMID:16464193
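The Markov modeling technique the article describes amounts to propagating a cohort through health states cycle by cycle, accumulating per-state costs and quality-adjusted life years (QALYs). A minimal sketch with a hypothetical 3-state structure and made-up values (not the article's HIV model):

```python
def markov_cohort(trans, costs, qalys, cycles, start):
    """Markov cohort model of the kind used in health-economic submissions:
    propagate state-occupancy fractions through a transition matrix,
    accumulating per-cycle costs and QALYs. Discounting is omitted."""
    occ = list(start)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += sum(o * c for o, c in zip(occ, costs))
        total_qaly += sum(o * q for o, q in zip(occ, qalys))
        occ = [sum(occ[j] * trans[j][i] for j in range(len(occ)))
               for i in range(len(occ))]
    return total_cost, total_qaly

# hypothetical states: asymptomatic, symptomatic, dead (absorbing)
trans = [[0.90, 0.08, 0.02],
         [0.00, 0.85, 0.15],
         [0.00, 0.00, 1.00]]
cost, qaly = markov_cohort(trans, costs=[2000, 8000, 0],
                           qalys=[0.90, 0.60, 0.0],
                           cycles=20, start=[1.0, 0.0, 0.0])
```

Running the same structure per incident cohort gives the incidence-based cost-effectiveness numbers, while scaling occupancy by prevalent patient counts gives the aggregate NHS budget-impact estimate NICE asks for.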
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.
2010-07-01
Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the time, volume and number of transaction size distributions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.
A graph theoretic approach to global earthquake sequencing: A Markov chain model
NASA Astrophysics Data System (ADS)
Vasudevan, K.; Cavers, M. S.
2012-12-01
We construct a directed graph to represent a Markov chain of global earthquake sequences and analyze the statistics of transition probabilities linked to earthquake zones. For earthquake zonation, we consider the simplified plate boundary template of Kagan, Bird, and Jackson (KBJ template, 2010). We demonstrate the applicability of the directed graph approach to hazard-related forecasting using some of the properties of graphs that represent the finite Markov chain. We extend the present study to consider Bird's 52-plate zonation (2003) describing the global earthquakes at and within plate boundaries to gain further insight into the usefulness of digraphs corresponding to a Markov chain model.
A clustering approach for estimating parameters of a profile hidden Markov model.
Aghdam, Rosa; Pezeshk, Hamid; Malekpour, Seyed Amir; Shemehsavar, Soudabeh; Eslahchi, Changiz
2013-01-01
A Profile Hidden Markov Model (PHMM) is a standard form of a Hidden Markov Models used for modeling protein and DNA sequence families based on multiple alignment. In this paper, we implement Baum-Welch algorithm and the Bayesian Monte Carlo Markov Chain (BMCMC) method for estimating parameters of small artificial PHMM. In order to improve the prediction accuracy of the estimation of the parameters of the PHMM, we classify the training data using the weighted values of sequences in the PHMM then apply an algorithm for estimating parameters of the PHMM. The results show that the BMCMC method performs better than the Maximum Likelihood estimation. PMID:23865165
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
NASA Technical Reports Server (NTRS)
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
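The stability property motivating the paper can be seen already in a discrete-time first-order Gauss-Markov (exponentially correlated) process, whose covariance stays bounded where a random walk's grows without limit. A minimal single-process sketch (the paper's model couples a first- and a second-order process; that coupling is not reproduced here):

```python
import math
import random

def gauss_markov_1st(tau, sigma, dt, steps, seed=0):
    """Simulate a discrete-time first-order Gauss-Markov process:
    x[k+1] = phi * x[k] + w[k], with phi = exp(-dt/tau) and the driving
    noise scaled so the steady-state standard deviation is exactly sigma."""
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - phi * phi)  # keeps steady-state var = sigma^2
    x, path = 0.0, []
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, q)
        path.append(x)
    return path

path = gauss_markov_1st(tau=100.0, sigma=1.0, dt=1.0, steps=5000)
```

Because the variance saturates at sigma**2 rather than growing linearly in time, long propagation gaps cannot overflow the filter covariance, which is the practical point of replacing the random-walk clock model.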
A Hidden Markov Approach to Modeling Interevent Earthquake Times
NASA Astrophysics Data System (ADS)
Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.
2003-12-01
A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
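Setting the transition structure aside, the state-specific part of the model above is a mixture of exponentials fitted by EM, which is the expectation step Baum-Welch also performs. A simplified stand-in that fits only the two exponential means and the mixing weight (it ignores the Markov transitions, and the data are synthetic, not the New England catalog):

```python
import math
import random

def fit_exp_mixture(data, iters=200):
    """EM for a two-component exponential mixture on waiting times:
    estimates the two means and the weight of the first component."""
    s = sorted(data)
    half = len(s) // 2
    m1 = sum(s[:half]) / half              # crude init: mean of lower half
    m2 = sum(s[half:]) / (len(s) - half)   # mean of upper half
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of the short-mean component for each point
        r = []
        for x in data:
            p1 = w * math.exp(-x / m1) / m1
            p2 = (1 - w) * math.exp(-x / m2) / m2
            r.append(p1 / (p1 + p2))
        # M-step: reweighted means and mixing proportion
        w = sum(r) / len(data)
        m1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        m2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return m1, m2, w

rng = random.Random(1)
data = [rng.expovariate(1 / 5.0) for _ in range(300)] + \
       [rng.expovariate(1 / 95.0) for _ in range(100)]
m1, m2, w = fit_exp_mixture(data)
```

The full hidden Markov treatment replaces the single weight w with a transition matrix between the hidden states, estimated by the Baum-Welch forward-backward recursions cited in the abstract.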
NASA Astrophysics Data System (ADS)
Schreiber, Tomasz
2010-08-01
We consider polygonal Markov fields originally introduced by Arak in 4th USSR-Japan Symposium on Probability Theory and Mathematical Statistics, Abstracts of Communications, 1982; Arak and Surgailis in Probab. Theory Relat. Fields 80:543-579, 1989. Our attention is focused on fields with nodes of order two, which can be regarded as continuum ensembles of non-intersecting contours in the plane, sharing a number of salient features with the two-dimensional Ising model. The purpose of this paper is to establish an explicit stochastic representation for the higher-order correlation functions of polygonal Markov fields in their consistency regime. The representation is given in terms of the so-called crop functionals (defined by a Möbius-type formula) of polygonal webs which arise in a graphical construction dual to that giving rise to polygonal fields. The proof of our representation formula goes by constructing a martingale interpolation between the correlation functions of polygonal fields and crop functionals of polygonal webs.
Bartolucci, Francesco; Farcomeni, Alessio
2015-03-01
Mixed latent Markov (MLM) models represent an important tool of analysis of longitudinal data when response variables are affected by time-fixed and time-varying unobserved heterogeneity, in which the latter is accounted for by a hidden Markov chain. In order to avoid bias when using a model of this type in the presence of informative drop-out, we propose an event-history (EH) extension of the latent Markov approach that may be used with multivariate longitudinal data, in which one or more outcomes of a different nature are observed at each time occasion. The EH component of the resulting model is referred to the interval-censored drop-out, and bias in MLM modeling is avoided by correlated random effects, included in the different model components, which follow common latent distributions. In order to perform maximum likelihood estimation of the proposed model by the expectation-maximization algorithm, we extend the usual forward-backward recursions of Baum and Welch. The algorithm has the same complexity as the one adopted in cases of non-informative drop-out. We illustrate the proposed approach through simulations and an application based on data coming from a medical study about primary biliary cirrhosis in which there are two outcomes of interest, one continuous and the other binary. PMID:25227970
Partially ordered mixed hidden Markov model for the disablement process of older adults
Ip, Edward H.; Zhang, Qiang; Rejeski, W. Jack; Harris, Tamara B.; Kritchevsky, Stephen
2013-01-01
At both the individual and societal levels, the health and economic burden of disability in older adults is enormous in developed countries, including the U.S. Recent studies have revealed that the disablement process in older adults often comprises episodic periods of impaired functioning and periods that are relatively free of disability, amid a secular and natural trend of decline in functioning. Rather than an irreversible, progressive event that is analogous to a chronic disease, disability is better conceptualized and mathematically modeled as states that do not necessarily follow a strict linear order of good-to-bad. Statistical tools, including Markov models, which allow bidirectional transition between states, and random effects models, which allow individual-specific rate of secular decline, are pertinent. In this paper, we propose a mixed effects, multivariate, hidden Markov model to handle partially ordered disability states. The model generalizes the continuation ratio model for ordinal data in the generalized linear model literature and provides a formal framework for testing the effects of risk factors and/or an intervention on the transitions between different disability states. Under a generalization of the proportional odds ratio assumption, the proposed model circumvents the problem of a potentially large number of parameters when the number of states and the number of covariates are substantial. We describe a maximum likelihood method for estimating the partially ordered, mixed effects model and show how the model can be applied to a longitudinal data set that consists of N = 2,903 older adults followed for 10 years in the Health Aging and Body Composition Study. We further statistically test the effects of various risk factors upon the probabilities of transition into various severe disability states. The result can be used to inform geriatric and public health science researchers who study the disablement process. PMID:24058222
A Markov random field approach for microstructure synthesis
NASA Astrophysics Data System (ADS)
Kumar, A.; Nguyen, L.; DeGraef, M.; Sundararaghavan, V.
2016-03-01
We test the notion that many microstructures have an underlying stationary probability distribution. The stationary probability distribution is ubiquitous: we know that different windows taken from a polycrystalline microstructure are generally ‘statistically similar’. To enable computation of such a probability distribution, microstructures are represented in the form of undirected probabilistic graphs called Markov Random Fields (MRFs). In the model, pixels take up integer or vector states and interact with multiple neighbors over a window. Using this lattice structure, algorithms are developed to sample the conditional probability density for the state of each pixel given the known states of its neighboring pixels. The sampling is performed using reference experimental images. 2D microstructures are artificially synthesized using the sampled probabilities. Statistical features such as grain size distribution and autocorrelation functions closely match with those of the experimental images. The mechanical properties of the synthesized microstructures were computed using the finite element method and were also found to match the experimental values.
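A minimal sketch of the sampling idea, assuming a 4-neighbor context and a binary two-phase reference image; the paper uses larger interaction windows and experimental micrographs, so everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_key(img, i, j):
    """4-neighbor states (with wraparound) used as the conditioning context."""
    n, m = img.shape
    return (img[(i - 1) % n, j], img[(i + 1) % n, j],
            img[i, (j - 1) % m], img[i, (j + 1) % m])

def learn_conditionals(ref, n_states):
    """Count-based estimate of P(center state | neighbor states) from a reference image."""
    counts = {}
    n, m = ref.shape
    for i in range(n):
        for j in range(m):
            key = neighbor_key(ref, i, j)
            counts.setdefault(key, np.zeros(n_states))[ref[i, j]] += 1
    return counts

def synthesize(ref, shape, n_states=2, sweeps=5):
    """Gibbs-style resampling of each pixel from the learned conditionals."""
    counts = learn_conditionals(ref, n_states)
    img = rng.integers(0, n_states, size=shape)
    for _ in range(sweeps):
        for i in range(shape[0]):
            for j in range(shape[1]):
                c = counts.get(neighbor_key(img, i, j))
                if c is None or c.sum() == 0:
                    continue  # unseen context: keep the current state
                img[i, j] = rng.choice(n_states, p=c / c.sum())
    return img

ref = np.zeros((16, 16), dtype=int)
ref[:, 8:] = 1                      # two-"grain" toy reference microstructure
out = synthesize(ref, (16, 16))
print(out.shape)
```

In practice the conditioning window is much larger than four neighbors, which is what makes the synthesized textures statistically match the reference.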
A Hypergraph-Based Reduction for Higher-Order Binary Markov Random Fields.
Fix, Alexander; Gruber, Aritanan; Boros, Endre; Zabih, Ramin
2015-07-01
Higher-order Markov Random Fields, which can capture important properties of natural images, have become increasingly important in computer vision. While graph cuts work well for first-order MRF's, until recently they have rarely been effective for higher-order MRF's. Ishikawa's graph cut technique [1], [2] shows great promise for many higher-order MRF's. His method transforms an arbitrary higher-order MRF with binary labels into a first-order one with the same minima. If all the terms are submodular the exact solution can be easily found; otherwise, pseudoboolean optimization techniques can produce an optimal labeling for a subset of the variables. We present a new transformation with better performance than [1], [2], both theoretically and experimentally. While [1], [2] transforms each higher-order term independently, we use the underlying hypergraph structure of the MRF to transform a group of terms at once. For n binary variables, each of which appears in terms with k other variables, at worst we produce n non-submodular terms, while [1], [2] produces O(nk). We identify a local completeness property under which our method performs even better, and show that under certain assumptions several important vision problems (including common variants of fusion moves) have this property. We show experimentally that our method produces a smaller total weight of non-submodular edges, and that this metric is directly related to the effectiveness of QPBO [3]. Running on the same field of experts dataset used in [1], [2] we optimally label significantly more variables (96 versus 80 percent) and converge more rapidly to a lower energy. Preliminary experiments suggest that some other higher-order MRF's used in stereo [4] and segmentation [5] are also locally complete and would thus benefit from our work. PMID:26352447
Markov-chain approach to the distribution of ancestors in species of biparental reproduction
NASA Astrophysics Data System (ADS)
Caruso, M.; Jarne, C.
2014-08-01
We studied how to obtain a distribution for the number of ancestors in species of sexual reproduction. Existing models concentrate on estimating the distribution of repetitions of ancestors in genealogical trees. It has been shown that it is not possible to reconstruct the genealogical history of each species across all its generations by means of a geometric progression. This analysis demonstrates that it is possible to rebuild the tree of progenitors by modeling the problem with a Markov chain. For each generation, the maximum number of possible ancestors is different, which poses serious difficulties for the solution. We found a solution through a dilation of the sample space, although the distribution defined there takes smaller values with respect to the initial problem. In order to correct the distribution for each generation, we introduced invariance under a gauge (local) group of dilations. These ideas can be used to study the interaction of several processes and provide a new approach to the problem of the common ancestor. In the same direction, this model also provides some elements that can be used to improve models of animal reproduction.
A Markov Chain Approach to Probabilistic Swarm Guidance
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Bayard, David S.
2012-01-01
This paper introduces a probabilistic guidance approach for the coordination of swarms of autonomous agents. The main idea is to drive the swarm to a prescribed density distribution in a prescribed region of the configuration space. In its simplest form, the probabilistic approach is completely decentralized and does not require communication or collaboration between agents. Agents make statistically independent probabilistic decisions based solely on their own state, which ultimately guide the swarm to the desired density distribution in the configuration space. In addition to being completely decentralized, the probabilistic guidance approach has a novel autonomous self-repair property: once the desired swarm density distribution is attained, the agents automatically repair any damage to the distribution without collaborating and without any knowledge about the damage.
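The decentralized idea can be sketched with a Metropolis-style transition matrix whose stationary distribution is the desired swarm density. The ring topology, bin counts, and agent numbers below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_matrix(v):
    """Column-stochastic transition matrix M with stationary distribution v,
    built by a Metropolis rule on a ring of bins; agents hop only to
    adjacent bins (a stand-in for physical motion constraints)."""
    n = len(v)
    M = np.zeros((n, n))
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):
            M[j, i] = 0.5 * min(1.0, v[j] / v[i])
        M[i, i] = 1.0 - M[:, i].sum()
    return M

v = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # desired swarm density over 5 bins
M = metropolis_matrix(v)
cdf = np.cumsum(M, axis=0)

# Each agent updates its own bin independently; no communication is needed.
bins = rng.integers(0, len(v), size=2000)
for _ in range(100):
    u = rng.random(len(bins))
    bins = np.array([int(np.searchsorted(cdf[:, s], x))
                     for s, x in zip(bins, u)]).clip(0, len(v) - 1)

empirical = np.bincount(bins, minlength=len(v)) / len(bins)
print(np.round(empirical, 2))
```

Because every column of M has stationary distribution v, the empirical bin occupancy of the independently moving agents converges toward v, and agents re-injected anywhere relax back to it, which mirrors the self-repair property.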
A Comparison of Proliferation Resistance Measures of Misuse Scenarios Using a Markov Approach
Yue,M.; Cheng, L.-Y.; Bari, R.
2008-05-11
Misuse of declared nuclear facilities is one of the important proliferation threats. The robustness of a facility against these threats is characterized by a number of proliferation resistance (PR) measures. This paper evaluates and compares PR measures for several misuse scenarios using a Markov model approach to implement the pathway analysis methodology being developed by the PR&PP (Proliferation Resistance and Physical Protection) Expert Group. Different misuse strategies can be adopted by a proliferator, and each strategy is expected to have a different impact on the proliferator's success. The probabilities of the proliferator's success in misusing a hypothetical ESFR (Example Sodium Fast Reactor) facility, selected as the probabilistic measure of proliferation resistance, are calculated using the Markov model based on the pathways constructed for individual misuse scenarios. Insights from a comparison of strategies that are likely to be adopted by the proliferator are discussed in this paper.
Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks
NASA Astrophysics Data System (ADS)
Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue
2014-11-01
Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011), 10.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011), 10.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
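For reference, the baseline MMCA iteration for the discrete-time SIS process discussed above can be sketched as follows; the toy network and parameter values are illustrative, not from either paper.

```python
import numpy as np

def mmca_sis(A, beta, mu, steps=500):
    """Discrete-time SIS via the microscopic Markov-chain approach (MMCA):
    p_i(t+1) = (1 - p_i) * (1 - q_i) + (1 - mu) * p_i,
    where q_i = prod_j (1 - beta * A_ij * p_j) is the probability that
    node i is not infected by any of its neighbors this step."""
    p = np.full(A.shape[0], 0.1)          # initial infection probabilities
    for _ in range(steps):
        q = np.prod(1.0 - beta * A * p, axis=1)
        p = (1.0 - p) * (1.0 - q) + (1.0 - mu) * p
    return p

A = np.ones((5, 5)) - np.eye(5)           # complete graph on 5 nodes as a toy network
p = mmca_sis(A, beta=0.3, mu=0.5)
print(np.round(float(p.mean()), 3))       # endemic prevalence at the fixed point
```

The effective-degree extension in the paper replaces these per-node probabilities with a state space indexed by a node's number of infected neighbors, which is what allows the irreversible SIR case to be handled accurately.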
NASA Astrophysics Data System (ADS)
Espinoza-Molina, Daniela; Datcu, Mihai
2010-10-01
TerraSAR-X is the German Synthetic Aperture Radar (SAR) satellite, which provides a high diversity of information due to its high resolution. TerraSAR-X acquires daily a volume of up to 100 GB of high-complexity, multi-mode SAR images, i.e. SpotLight, StripMap, and ScanSAR data, with dual or quad polarization, and with different look angles. The high and multiple resolutions of the instrument (1 m, 3 m or 10 m) open perspectives for new applications that were not possible with past lower-resolution sensors (20-30 m). We expect mainly the 1 m and 3 m modes to support a broad range of new applications related to human activities, with relevant structures and objects at the 1 m scale. Thus, among the most interesting scenes are urban, industrial, and rural data. In addition, the global coverage and the relatively frequent repeat pass will definitely help to acquire extremely relevant data sets. To analyze the available TerraSAR-X data we rely on model-based methods for feature extraction and despeckling. The image information content is extracted using model-based methods built on a Gauss Markov Random Field (GMRF) model and a Bayesian inference approach. This approach enhances local adaptation by using a prior model, which learns the image structure and enables estimation of the local description of the structures, acting as a primitive feature extraction method. However, the GMRF model-based method takes as input parameters the Model Order (MO) and the size of the Estimation Window (EW). Appropriate selection of these parameters allows us to improve the classification and indexing results, since the number of well-separated classes is determined by them. Our belief is that the selection of the MO depends on the kind of information that the image contains, reflecting how well the model can recognize complex structures as objects, while the size of the EW determines the accuracy of the estimation. In the following, we present an evaluation of the impact of the model
Wang, Ying; Hu, Haiyan; Li, Xiaoman
2016-08-01
Metagenomics is a next-generation omics field currently impacting postgenomic life sciences and medicine. Binning metagenomic reads is essential for the understanding of microbial function, compositions, and interactions in given environments. Despite the existence of dozens of computational methods for metagenomic read binning, it is still very challenging to bin reads. This is especially true for reads from unknown species, from species with similar abundance, and/or from low-abundance species in environmental samples. In this study, we developed a novel taxonomy-dependent and alignment-free approach called MBMC (Metagenomic Binning by Markov Chains). Different from all existing methods, MBMC bins reads by measuring the similarity of reads to the trained Markov chains for different taxa instead of directly comparing reads with known genomic sequences. By testing on more than 24 simulated and experimental datasets with species of similar abundance, species of low abundance, and/or unknown species, we report here that MBMC reliably grouped reads from different species into separate bins. Compared with four existing approaches, we demonstrated that the performance of MBMC was comparable with existing approaches when binning reads from sequenced species, and superior to existing approaches when binning reads from unknown species. MBMC is a pivotal tool for binning metagenomic reads in the current era of Big Data and postgenomic integrative biology. The MBMC software can be freely downloaded at http://hulab.ucf.edu/research/projects/metagenomics/MBMC.html . PMID:27447888
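The core idea of scoring reads against per-taxon Markov chains, rather than aligning them to reference genomes, can be sketched as follows. The toy training sequences and order-2 chains are illustrative; MBMC itself uses higher orders and a more elaborate training procedure.

```python
from collections import defaultdict
from math import log

def train_chain(seqs, k=2):
    """Order-k nucleotide Markov chain: counts of the next base given a k-mer context."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(len(s) - k):
            counts[s[i:i + k]][s[i + k]] += 1
    return counts

def log_lik(read, chain, k=2, alpha=1.0):
    """Log-likelihood of a read under a trained chain, with add-alpha smoothing
    so that unseen contexts do not produce log(0)."""
    ll = 0.0
    for i in range(len(read) - k):
        ctx, nxt = read[i:i + k], read[i + k]
        c = chain[ctx]
        tot = sum(c.values()) + 4 * alpha
        ll += log((c[nxt] + alpha) / tot)
    return ll

def bin_read(read, chains):
    """Assign the read to the taxon whose chain gives the highest likelihood."""
    return max(chains, key=lambda t: log_lik(read, chains[t]))

chains = {
    "taxonA": train_chain(["ATATATATATATATAT" * 4]),
    "taxonB": train_chain(["GGCCGGCCGGCCGGCC" * 4]),
}
print(bin_read("ATATATAT", chains))  # → taxonA
```

Because only k-mer transition counts are compared, the approach stays alignment-free and can generalize to reads from species whose genomes are absent from the references.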
The Scalarization Approach to Multiobjective Markov Control Problems: Why Does It Work?
Hernandez-Lerma, Onesimo; Romera, Rosario
2004-10-15
This paper concerns discrete-time multiobjective Markov control processes on Borel spaces with unbounded costs. Under mild assumptions, it is shown that the usual 'scalarization approach' for obtaining Pareto policies for the multiobjective control problem is in fact equivalent to solving the dual of a certain multiobjective infinite-dimensional linear program. The latter program is obtained from a multiobjective measure problem which is also used to prove the existence of strong Pareto policies, that is, Pareto policies whose cost vector is the closest to the control problem's virtual minimum.
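The scalarization mechanism can be illustrated on a finite toy instance: sweeping the weight vector over the simplex and minimizing the weighted cost recovers (convex-hull) Pareto policies while never selecting a dominated one. The cost vectors below are illustrative, not from the paper.

```python
import numpy as np

# Cost vectors (cost1, cost2) of four candidate stationary policies;
# the last policy, (3, 4), is dominated by (2, 3).
costs = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0], [3.0, 4.0]])

def scalarized_argmin(costs, w):
    """Pareto policy via the weighted-sum scalarization min_pi w . c(pi)."""
    return int(np.argmin(costs @ w))

# Sweep the weight (t, 1 - t) over the interior of the simplex.
pareto = {scalarized_argmin(costs, np.array([t, 1.0 - t]))
          for t in np.linspace(0.01, 0.99, 99)}
print(sorted(pareto))  # → [0, 1, 2]; the dominated policy 3 never appears
```

Note that weighted-sum scalarization can only reach Pareto points on the convex hull of the achievable cost set, which is one reason the paper's linear-programming dual view is informative.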
Switched Fault Diagnosis Approach for Industrial Processes based on Hidden Markov Model
NASA Astrophysics Data System (ADS)
Wang, Lin; Yang, Chunjie; Sun, Youxian; Pan, Yijun; An, Ruqiao
2015-11-01
Traditional fault diagnosis methods based on hidden Markov model (HMM) use a unified method for feature extraction, such as principal component analysis (PCA), kernel principal component analysis (KPCA) and independent component analysis (ICA). However, every method has its own limitations. For example, PCA cannot extract nonlinear relationships among process variables. So it is inappropriate to extract all features of variables by only one method, especially when data characteristics are very complex. This article proposes a switched feature extraction procedure using PCA and KPCA based on nonlinearity measure. By the proposed method, we are able to choose the most suitable feature extraction method, which could improve the accuracy of fault diagnosis. A simulation from the Tennessee Eastman (TE) process demonstrates that the proposed approach is superior to the traditional one based on HMM and could achieve more accurate classification of various process faults.
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that are present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
ERIC Educational Resources Information Center
Stifter, Cynthia A.; Rovine, Michael
2015-01-01
The present longitudinal study examined mother-infant interaction during the administration of immunizations at 2 and 6 months of age using hidden Markov modelling, a time series approach that produces latent states, to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…
A Markov Random Field Model-Based Approach To Image Interpretation
NASA Astrophysics Data System (ADS)
Zhang, Jun; Modestino, James W.
1989-11-01
In this paper, a Markov random field (MRF) model-based approach to automated image interpretation is described and demonstrated as a region-based scheme. In this approach, an image is first segmented into a collection of disjoint regions which form the nodes of an adjacency graph. Image interpretation is then achieved through assigning object labels, or interpretations, to the segmented regions, or nodes, using domain knowledge, extracted feature measurements and spatial relationships between the various regions. The interpretation labels are modeled as a MRF on the corresponding adjacency graph and the image interpretation problem is formulated as a maximum a posteriori (MAP) estimation rule. Simulated annealing is used to find the best realization, or optimal MAP interpretation. Through the MRF model, this approach also provides a systematic method for organizing and representing domain knowledge through the clique functions of the pdf of the underlying MRF. Results of image interpretation experiments performed on synthetic and real-world images using this approach are described and appear promising.
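The MAP-by-annealing step can be sketched on a tiny adjacency graph. The regions, labels, unary energies, and smoothness weight below are hypothetical stand-ins for the paper's feature measurements and clique functions.

```python
import math
import random

random.seed(0)

# Toy adjacency graph of four segmented regions with two candidate labels.
edges = [(0, 1), (1, 2), (2, 3)]
# Unary energies standing in for feature-based data terms (illustrative numbers).
unary = [{"sky": 0.1, "ground": 0.9}, {"sky": 0.2, "ground": 0.8},
         {"sky": 0.7, "ground": 0.3}, {"sky": 0.9, "ground": 0.1}]

def energy(labels):
    """MRF energy: data terms plus a smoothness clique potential on the graph."""
    e = sum(unary[i][lab] for i, lab in enumerate(labels))
    e += sum(0.5 for i, j in edges if labels[i] != labels[j])
    return e

def anneal(steps=4000, t0=1.0):
    """Simulated annealing search for the MAP labeling."""
    labels = [random.choice(["sky", "ground"]) for _ in unary]
    best = list(labels)
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-3            # linear cooling schedule
        i = random.randrange(len(labels))
        prop = list(labels)
        prop[i] = "ground" if labels[i] == "sky" else "sky"
        d = energy(prop) - energy(labels)
        if d < 0 or random.random() < math.exp(-d / t):
            labels = prop                          # accept (possibly uphill) move
        if energy(labels) < energy(best):
            best = list(labels)
    return best

result = anneal()
print(result)
```

Here the smoothness term plays the role of the spatial-relationship knowledge: it rewards adjacent regions sharing an interpretation unless the data terms strongly disagree.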
Semi-Markov Arnason-Schwarz models.
King, Ruth; Langrock, Roland
2016-06-01
We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. PMID:26584064
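The state-expansion technique can be sketched numerically: representing one biological state by m sub-states, each advancing with the same probability, yields a negative-binomial dwell time in place of the geometric one, while keeping the process Markov. The parameter values below are illustrative.

```python
import numpy as np

def expand_state(p_leave, m):
    """State-expansion trick: a chain of m sub-states, each advancing with
    probability p_leave, gives a negative-binomial dwell time instead of the
    geometric dwell time of a single Markov state."""
    A = np.zeros((m + 1, m + 1))        # last index = 'departed' absorbing state
    for i in range(m):
        A[i, i] = 1 - p_leave
        A[i, i + 1] = p_leave
    A[m, m] = 1.0
    return A

def mean_dwell(A, m, horizon=2000):
    """Expected time until absorption, starting from sub-state 0."""
    dist = np.zeros(m + 1)
    dist[0] = 1.0
    mean = 0.0
    for t in range(1, horizon):
        prev = dist[m]
        dist = dist @ A
        mean += t * (dist[m] - prev)    # P(dwell time == t)
    return mean

A = expand_state(p_leave=0.25, m=3)
print(round(mean_dwell(A, 3), 2))       # ≈ 12.0 (= 3 / 0.25)
```

Because only the sub-state structure changes, standard HMM machinery (forward-backward, information criteria) still applies, which is the computational point made in the abstract.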
An information theoretic approach for generating an aircraft avoidance Markov Decision Process
NASA Astrophysics Data System (ADS)
Weinert, Andrew J.
Developing a collision avoidance system that can meet the safety standards required of commercial aviation is challenging. A dynamic programming approach to collision avoidance has been developed to optimize and generate logics that are robust to the complex dynamics of the national airspace. The current approach represents the aircraft avoidance problem as a Markov Decision Process and independently optimizes horizontal and vertical maneuver avoidance logics. This is a result of the current memory requirements for each logic: simply combining the logics would result in a significantly larger representation. The "curse of dimensionality" makes it computationally inefficient and infeasible to optimize this larger representation. However, existing and future collision avoidance systems have mostly defined the decision process by hand. In response, a simulation-based framework was built to better understand how well each potential state quantifies the aircraft avoidance problem with regard to safety and operational components. The framework leverages recent advances in signal processing and databases, while enabling the highest-fidelity analysis of Monte Carlo aircraft encounter simulations to date. This framework enabled the calculation of how well each state of the decision process quantifies the collision risk, along with the associated memory requirements. Using this analysis, a collision avoidance logic that leverages both horizontal and vertical actions was built and optimized using this simulation-based approach.
Song, Changyue; Liu, Kaibo; Zhang, Xi; Chen, Lili; Xian, Xiaochen
2016-07-01
Obstructive sleep apnea (OSA) syndrome is a common sleep disorder suffered by an increasing number of people worldwide. As an alternative to polysomnography (PSG) for OSA diagnosis, the automatic OSA detection methods used in current practice mainly concentrate on feature extraction and classifier selection based on collected physiological signals. However, one common limitation of these methods is that the temporal dependence of signals is usually ignored, which may result in critical information loss for OSA diagnosis. In this study, we propose a novel OSA detection approach based on ECG signals by considering temporal dependence within segmented signals. A discriminative hidden Markov model (HMM) and corresponding parameter estimation algorithms are provided. In addition, subject-specific transition probabilities within the model are employed to characterize the subject-to-subject differences of potential OSA patients. To validate our approach, 70 recordings obtained from the Physionet Apnea-ECG database were used. Accuracies of 97.1% for per-recording classification and 86.2% for per-segment OSA detection with satisfactory sensitivity and specificity were achieved. Compared with other existing methods that simply ignore the temporal dependence of signals, the proposed HMM-based detection approach delivers more satisfactory detection performance and could be extended to other disease diagnosis applications. PMID:26560867
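A standard Viterbi decoder over segment states illustrates how temporal dependence enters per-segment detection: the transition matrix smooths isolated misclassifications. The transition and emission numbers below are illustrative, not the paper's fitted model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path (e.g., normal vs. apnea per ECG segment)."""
    n, T = A.shape[0], len(obs)
    d = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = d[:, None] + np.log(A)        # scores[i, j]: best path i -> j
        back[t] = scores.argmax(axis=0)
        d = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(d.argmax())]
    for t in range(T - 1, 0, -1):              # backtrack through the pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.7, 0.3])                   # states: 0 = normal, 1 = apnea
A = np.array([[0.85, 0.15], [0.3, 0.7]])    # temporal dependence across segments
B = np.array([[0.9, 0.1], [0.1, 0.9]])      # P(feature symbol | state)
print(viterbi([0, 0, 1, 1, 0], pi, A, B))   # → [0, 0, 1, 1, 0]
```

Subject-specific behavior, as in the paper, would correspond to estimating a separate A per recording rather than sharing one matrix across subjects.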
Gold price effect on stock market: A Markov switching vector error correction approach
NASA Astrophysics Data System (ADS)
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a precious metal whose demand is driven not only by practical use but also by its popularity as an investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behavior is of interest in this study. Markov switching vector error correction models are applied to analyze the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps or missing data through time. There are numerous specifications of Markov switching vector error correction models, and this paper compares the intercept-adjusted Markov switching vector error correction model and the intercept-adjusted heteroskedastic Markov switching vector error correction model to determine the best model representation for capturing the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedastic Markov switching vector error correction model provides more significant and reliable results than the intercept-adjusted Markov switching vector error correction model.
Buendia, Patricia; Cadwallader, Brice; DeGruttola, Victor
2009-01-01
Motivation: Modern HIV-1, hepatitis B virus and hepatitis C virus antiviral therapies have been successful at keeping viruses suppressed for prolonged periods of time, but therapy failures attributable to the emergence of drug resistant mutations continue to be a distressing reminder that no therapy can fully eradicate these viruses from their host organisms. To better understand the emergence of drug resistance, we combined phylogenetic and statistical models of viral evolution in a 2-phase computational approach that reconstructs mutational pathways of drug resistance. Results: The first phase of the algorithm involved the modeling of the evolution of the virus within the human host environment. The inclusion of longitudinal clonal sequence data was a key aspect of the model due to the progressive fashion in which multiple mutations become linked in the same genome creating drug resistant genotypes. The second phase involved the development of a Markov model to calculate the transition probabilities between the different genotypes. The proposed method was applied to data from an HIV-1 Efavirenz clinical trial study. The obtained model revealed the direction of evolution over time with greater detail than previous models. Our results show that the mutational pathways facilitate the identification of fast versus slow evolutionary pathways to drug resistance. Availability: Source code for the algorithm is publicly available at http://biorg.cis.fiu.edu/vPhyloMM/ Contact: pbuendia@miami.edu PMID:19654117
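The second phase, estimating transition probabilities between genotypes from longitudinal data, can be sketched as a maximum-likelihood count-and-normalize step. The patient trajectories below are hypothetical illustrations (the mutation labels are merely example genotype names), not data from the Efavirenz trial.

```python
from collections import Counter, defaultdict

# Hypothetical longitudinal genotype calls per patient, ordered by visit.
patients = [
    ["WT", "WT", "K103N", "K103N"],
    ["WT", "K103N", "K103N+P225H"],
    ["WT", "WT", "WT", "K103N"],
]

def transition_matrix(trajectories):
    """MLE of Markov transition probabilities between observed genotypes:
    count consecutive pairs, then normalize each row."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

P = transition_matrix(patients)
print(P["WT"])  # → {'WT': 0.5, 'K103N': 0.5}
```

Chaining the estimated transitions then distinguishes fast pathways (few high-probability steps to resistance) from slow ones, which is the kind of inference the abstract describes.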
NASA Astrophysics Data System (ADS)
Kim, M.; Ghate, A.; Phillips, M. H.
2009-07-01
The current state of the art in cancer treatment by radiation optimizes beam intensity spatially such that tumors receive high dose radiation whereas damage to nearby healthy tissues is minimized. It is common practice to deliver the radiation over several weeks, where the daily dose is a small constant fraction of the total planned. Such a 'fractionation schedule' is based on traditional models of radiobiological response in which normal tissue cells possess the ability to repair sublethal damage done by radiation. This capability is significantly less prominent in tumors. Recent advances in quantitative functional imaging and biological markers are providing new opportunities to measure patient response to radiation over the treatment course. This opens the door for designing fractionation schedules that take into account the patient's cumulative response to radiation up to a particular treatment day in determining the fraction on that day. We propose a novel approach that, for the first time, mathematically explores the benefits of such fractionation schemes. This is achieved by building a stylized Markov decision process (MDP) model, which incorporates some key features of the problem through intuitive choices of state and action spaces, as well as transition probability and reward functions. The structure of optimal policies for this MDP model is explored through several simple numerical examples.
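The backward-induction structure of such a finite-horizon MDP can be sketched as follows. The response states, dose options, transition probabilities, and reward function below are purely illustrative assumptions, not the paper's model.

```python
import numpy as np

T = 5                     # number of treatment days (horizon)
doses = [1.0, 2.0]        # hypothetical daily dose options
# trans[a][s, s']: response-state transition under dose a
# (state 0 = poor responder, 1 = good responder; numbers are illustrative).
trans = [np.array([[0.7, 0.3], [0.2, 0.8]]),
         np.array([[0.5, 0.5], [0.4, 0.6]])]

def reward(s, a):
    """Tumor-kill reward proportional to dose, discounted for poor responders."""
    return doses[a] * (0.2 if s == 0 else 1.0)

V = np.zeros(2)           # terminal value
for _ in range(T):        # backward induction over the horizon
    Q = np.array([[reward(s, a) + trans[a][s] @ V for a in range(2)]
                  for s in range(2)])
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)  # optimal first-day dose index per response state
print(V.round(2), policy)
```

The point of the adaptive-fractionation idea is visible here: the optimal dose can, in general, differ by observed response state, so daily measurements feed directly into the action choice.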
A Markov Chain Monte Carlo Approach to Estimate AIDS after HIV Infection.
Apenteng, Ofosuhene O; Ismail, Noor Azina
2015-01-01
The spread of human immunodeficiency virus (HIV) infection and the resulting acquired immune deficiency syndrome (AIDS) is a major health concern in many parts of the world, and mathematical models are commonly applied to understand the spread of the HIV epidemic. To understand the spread of HIV and AIDS cases and their parameters in a given population, it is necessary to develop a theoretical framework that takes into account realistic factors. The current study used this framework to assess the interaction between individuals who developed AIDS after HIV infection and individuals who did not develop AIDS after HIV infection (pre-AIDS). We first investigated how probabilistic parameters affect the model in terms of the HIV and AIDS population over a period of time. We observed that there is a critical threshold parameter, R0, which determines the behavior of the model. If R0 ≤ 1, there is a unique disease-free equilibrium; if R0 < 1, the disease dies out; and if R0 > 1, the disease-free equilibrium is unstable. We also show how a Markov chain Monte Carlo (MCMC) approach could be used as a supplement to forecast the numbers of reported HIV and AIDS cases. An approach using a Monte Carlo analysis is illustrated to understand the impact of model-based predictions in light of uncertain parameters on the spread of HIV. Finally, to examine this framework and demonstrate how it works, a case study was performed of reported HIV and AIDS cases from an annual data set in Malaysia, and then we compared how these approaches complement each other. We conclude that HIV disease in Malaysia shows epidemic behavior, especially in the context of understanding and predicting emerging cases of HIV and AIDS. PMID:26147199
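A random-walk Metropolis-Hastings sketch of the estimation idea, assuming a simple Poisson model for annual case counts; the data below are hypothetical, not the Malaysian series, and the epidemic model in the paper is considerably richer.

```python
import math
import random

random.seed(42)

# Hypothetical reported annual case counts, modeled as Poisson(lam).
cases = [12, 15, 9, 14, 11, 13]

def log_post(lam):
    """Log-posterior: Poisson likelihood (constants dropped) with a flat prior on lam > 0."""
    if lam <= 0:
        return -math.inf
    return sum(k * math.log(lam) - lam for k in cases)

# Random-walk Metropolis-Hastings over the rate parameter.
lam, chain = 5.0, []
for _ in range(20000):
    prop = lam + random.gauss(0, 0.5)
    accept = math.exp(min(0.0, log_post(prop) - log_post(lam)))
    if random.random() < accept:
        lam = prop
    chain.append(lam)

est = sum(chain[5000:]) / len(chain[5000:])   # posterior mean after burn-in
print(round(est, 1))
```

The burn-in discard and the acceptance rule are the two MCMC ingredients the abstract relies on; in the paper the same machinery is applied to the parameters of the HIV/AIDS compartmental model rather than to a single rate.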
Seifert, Michael; Abou-El-Ardat, Khalil; Friedrich, Betty; Klink, Barbara; Deutsch, Andreas
2014-01-01
Changes in gene expression programs play a central role in cancer. Chromosomal aberrations such as deletions, duplications and translocations of DNA segments can lead to highly significant positive correlations of gene expression levels of neighboring genes. This should be utilized to improve the analysis of tumor expression profiles. Here, we develop a novel model class of autoregressive higher-order Hidden Markov Models (HMMs) that carefully exploit local data-dependent chromosomal dependencies to improve the identification of differentially expressed genes in tumors. Autoregressive higher-order HMMs overcome general limitations of standard first-order HMMs in the modeling of dependencies between genes in close chromosomal proximity by the simultaneous usage of higher-order state transitions and autoregressive emissions as novel model features. We apply autoregressive higher-order HMMs to the analysis of breast cancer and glioma gene expression data and perform in-depth model evaluation studies. We find that autoregressive higher-order HMMs clearly improve the identification of overexpressed genes with underlying gene copy number duplications in breast cancer in comparison to mixture models, standard first- and higher-order HMMs, and other related methods. The performance benefit is attributed to the simultaneous usage of higher-order state transitions in combination with autoregressive emissions; this benefit could not be reached by using each of these two features independently. We also find that autoregressive higher-order HMMs are better able to identify differentially expressed genes in tumors independent of the underlying gene copy number status in comparison to the majority of related methods. This is further supported by the identification of well-known and previously unreported hotspots of differential expression in glioblastomas, demonstrating the efficacy of autoregressive higher-order HMMs for the analysis of individual tumor expression profiles.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.
Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter
2016-04-01
Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery. PMID:26932466
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
One-Way Markov Process Approach to Repeat Times of Large Earthquakes in Faults
NASA Astrophysics Data System (ADS)
Tejedor, Alejandro; Gomez, Javier B.; Pacheco, Amalio F.
2012-11-01
One of the uses of Markov Chains is the simulation of the seismic cycle in a fault, i.e. as a renewal model for the repetition of its characteristic earthquakes. This representation is consistent with Reid's elastic rebound theory. We propose a general one-way Markovian model in which the waiting time distribution, its first moments, coefficient of variation, and functions of error and alarm (related to the predictability of the model) can be obtained analytically. The fact that in any one-way Markov cycle the coefficient of variation of the corresponding distribution of cycle lengths is always lower than one concurs with observations of large earthquakes in seismic faults. The waiting time distribution of one of the limits of this model is the negative binomial distribution; as an application, we use it to fit the Parkfield earthquake series in the San Andreas fault, California.
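The claim that one-way cycles always have a coefficient of variation below one can be checked in a toy parametrization (ours, not the paper's fault model): if a one-way cycle has k internal states, each left with probability p per step, the cycle length is a sum of k geometric variables, i.e. negative binomial, with CV = sqrt((1-p)/k), which is always below one.

```python
import math

# Hedged sketch: one-way cycle with k states, each exited with probability p
# per step. Cycle length is negative binomial; its CV is sqrt((1-p)/k) < 1,
# consistent with the paper's claim. Parameters are invented for illustration.

def cycle_cv(k, p):
    mean = k / p                       # mean of sum of k geometric variables
    var = k * (1.0 - p) / p ** 2       # variance of the sum
    return math.sqrt(var) / mean       # simplifies to sqrt((1-p)/k)

cv = cycle_cv(k=5, p=0.3)  # sqrt(0.7/5), about 0.37
```

Since (1-p) < 1 and k >= 1, the CV is strictly below one for any choice of parameters, matching the aperiodicity-with-regularity seen in characteristic earthquake series.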
NASA Astrophysics Data System (ADS)
Bacher, C.; Filgueira, R.; Guyondet, T.
2016-01-01
Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes, and the number of passages in a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada, chosen for their interest in shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing probabilities of transition between elements and assessing the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that Markov chain analysis is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g., the depletion index and carrying capacity assessment. Markov chain analysis has several advantages with respect to the estimation of connectivity between pairs of sites. It makes it possible to estimate transfer rates and times at once in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentration.
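One of the quantities mentioned, the mean first passage time (MFPT) to a target element, follows from the transition matrix by solving a small linear system. The 3-element matrix below is invented for illustration, not derived from any hydrodynamic model.

```python
import numpy as np

# Hedged sketch: mean first passage time to a target state from a
# row-stochastic transition matrix P. Solves (I - Q) m = 1, where Q is P
# restricted to the non-target states. Matrix values are invented.

def mean_first_passage(P, target):
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    Q = P[np.ix_(idx, idx)]                       # transitions among non-target states
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[idx] = m                                  # MFPT from the target itself is 0 here
    return out

P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
mfpt = mean_first_passage(P, target=2)  # expected steps for each element to reach element 2
```

For this toy matrix the expected hitting times are 8 steps from element 0 and 6 steps from element 1; in the application, such numbers would be converted to physical times using the model time step.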
Hidden Markov models and other machine learning approaches in computational molecular biology
Baldi, P.
1995-12-31
This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: Hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, Hidden Markov models provide one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov models and on how to apply them to problems in molecular biology.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
A regular Markov source is defined as the output of a deterministic, but noisy, channel driven by the state sequence of a regular finite-state Markov chain. The rate of such a source is the per letter uncertainty of its digits. The well-known result that the rate of a unifilar regular Markov source is easily calculable is demonstrated, where unifilarity means that the present state of the Markov chain and the next output of the deterministic channel uniquely determine the next state. At present, there is no known method to calculate the rate of a nonunifilar source. Two tentative approaches to this unsolved problem are given, namely source identical twins and the master-slave source, which appear to shed some light on the question of rate calculation for a nonunifilar source.
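The "easily calculable" unifilar rate is the stationary-weighted conditional entropy of the next symbol given the current state. Here is a sketch on an invented two-state example (the well-known golden-mean process, chosen for illustration, not taken from the paper):

```python
import math
import numpy as np

# Hedged sketch: rate of a unifilar Markov source as
# H = sum_i pi_i * H(P[i, :]), with pi the stationary distribution.
# The transition matrix is an illustrative toy example.

def entropy_rate(P):
    P = np.asarray(P, dtype=float)
    # stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi /= pi.sum()
    return -sum(pi[i] * sum(p * math.log2(p) for p in P[i] if p > 0)
                for i in range(len(pi)))

P = [[0.5, 0.5],
     [1.0, 0.0]]
H = entropy_rate(P)  # golden-mean process: 2/3 bit per symbol
```

The stationary distribution is (2/3, 1/3); only the first state contributes uncertainty (one bit), giving a rate of 2/3 bit per symbol. For a nonunifilar source, as the abstract notes, no such closed-form computation is known.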
Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction
NASA Astrophysics Data System (ADS)
Bui, Lam Thu; Barlow, Michael
We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize the parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at a population or an individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information about when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
Stifter, Cynthia A.; Rovine, Michael
2016-01-01
The present longitudinal study examined mother-infant interaction during the administration of immunizations at two and six months of age using hidden Markov modeling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a 4-state model for the dyadic responses to a two-month inoculation, whereas a 6-state model best described the dyadic process at six months. Two of the states at two months and three of the states at six months suggested a progression from high-intensity crying to no crying, with parents using vestibular and auditory soothing methods. The use of feeding and/or pacifying to soothe the infant characterized one two-month state and two six-month states. These data indicate that, with maturation and experience, the mother-infant dyad becomes more organized around the soothing interaction. The use of hidden Markov modeling to describe individual differences, as well as normative processes, is also presented and discussed.
Ficz, Gabriella; Wolf, Verena; Walter, Jörn
2016-01-01
DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance. PMID:27224554
Hidden Markov model approach to skill learning and its application to telerobotics
Yang, J. (Robotics Inst.; Univ. of Akron, OH, Dept. of Electrical Engineering); Xu, Y. (Robotics Inst.); Chen, C.S. (Dept. of Electrical Engineering)
1994-10-01
In this paper, the authors discuss how human skill can be represented as a parametric model using a hidden Markov model (HMM), and how an HMM-based skill model can be used to learn human skill. An HMM is well suited to characterizing the doubly stochastic process (measurable actions and immeasurable mental states) involved in skill learning. The authors formulated the learning problem as a multidimensional HMM and developed a testbed for a variety of skill learning applications. Based on ''the most likely performance'' criterion, the best action sequence can be selected from all previously measured action data by modeling the skill as an HMM. The proposed method has been implemented in the teleoperation control of a space station robot system, and some important implementation issues are discussed. The method allows a robot to learn human skill in certain tasks and to improve motion performance.
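Selecting a single most likely hidden-state path given observations is classically done with the Viterbi algorithm. The following minimal decoder uses toy state and observation labels invented for illustration; it is not the telerobotic skill model of the paper.

```python
import math

# Hedged sketch: a minimal Viterbi decoder for the most likely hidden-state
# path of an HMM. All probabilities and labels are toy values.

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda r: V[t - 1][r] + math.log(trans_p[r][s]))
            V[t][s] = (V[t - 1][best_prev] + math.log(trans_p[best_prev][s])
                       + math.log(emit_p[s][obs[t]]))
            back[t][s] = best_prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("coarse", "fine")
path = viterbi(
    obs=("far", "far", "near"),
    states=states,
    start_p={"coarse": 0.8, "fine": 0.2},
    trans_p={"coarse": {"coarse": 0.7, "fine": 0.3},
             "fine": {"coarse": 0.2, "fine": 0.8}},
    emit_p={"coarse": {"far": 0.9, "near": 0.1},
            "fine": {"far": 0.2, "near": 0.8}},
)
```

For this toy model the decoded path switches to the "fine" state only when the "near" observation arrives, which is the kind of action-sequence selection the "most likely performance" criterion implies.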
CIGALEMC: GALAXY PARAMETER ESTIMATION USING A MARKOV CHAIN MONTE CARLO APPROACH WITH CIGALE
Serra, Paolo; Amblard, Alexandre; Temi, Pasquale; Im, Stephen; Noll, Stefan
2011-10-10
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the astrophysical parameter space using a modified version of the publicly available code Code Investigating GALaxy Emission (CIGALE). The original CIGALE builds a grid of theoretical spectral energy distribution (SED) models and fits to photometric fluxes from ultraviolet to infrared to put constraints on parameters related to both formation and evolution of galaxies. Such a grid-based method can lead to a long and challenging parameter extraction since the computation time increases exponentially with the number of parameters considered and results can be dependent on the density of sampling points, which must be chosen in advance for each parameter. MCMC methods, on the other hand, scale approximately linearly with the number of parameters, allowing a faster and more accurate exploration of the parameter space by using a smaller number of efficiently chosen samples. We test our MCMC version of the code CIGALE (called CIGALEMC) with simulated data. After checking the ability of the code to retrieve the input parameters used to build the mock sample, we fit theoretical SEDs to real data from the well-known and -studied Spitzer Infrared Nearby Galaxy Survey sample. We discuss constraints on the parameters and show the advantages of our MCMC sampling method in terms of accuracy of the results and optimization of CPU time.
Oh, Jung Hun; Gurnani, Prem; Schorge, John; Rosenblatt, Kevin P; Gao, Jean X
2009-03-01
High-resolution matrix-assisted laser desorption/ionization time-of-flight mass spectrometry has recently shown promise as a screening tool for detecting discriminatory peptide/protein patterns. The major computational obstacle in finding such patterns is the large number of mass/charge peaks (features, biomarkers, data points) in a spectrum. To tackle this problem, we have developed methods for data preprocessing and biomarker selection. The preprocessing consists of binning, baseline correction, and normalization. An algorithm, extended Markov blanket, is developed for biomarker detection, which combines redundant feature removal and discriminant feature selection. The biomarker selection couples with support vector machine to achieve sample prediction from high-resolution proteomic profiles. Our algorithm is applied to recurrent ovarian cancer study that contains platinum-sensitive and platinum-resistant samples after treatment. Experiments show that the proposed method performs better than other feature selection algorithms. In particular, our algorithm yields good performance in terms of both sensitivity and specificity as compared to other methods. PMID:19126475
Link between unemployment and crime in the US: a Markov-Switching approach.
Fallahi, Firouz; Rodríguez, Gabriel
2014-05-01
This study has two goals. The first is to use Markov Switching models to identify and analyze the cycles in the unemployment rate and four different types of property-related criminal activities in the US. The second is to apply the nonparametric concordance index of Harding and Pagan (2006) to determine the correlation between the cycles of unemployment rate and property crimes. Findings show that there is a positive but insignificant relationship between the unemployment rate, burglary, larceny, and robbery. However, the unemployment rate has a significant and negative (i.e., a counter-cyclical) relationship with motor-vehicle theft. Therefore, more motor-vehicle thefts occur during economic expansions relative to contractions. Next, we divide the sample into three different subsamples to examine the consistency of the findings. The results show that the co-movements between the unemployment rate and property crimes during recession periods are much weaker, when compared with that of the normal periods of the US economy. PMID:24576625
Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach
Xu, Lixin; Lu, Jianbo (lvjianbo819@163.com)
2010-03-01
We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are Ω_b h^2 = 0.0235 (+0.0021/−0.0018 at 1σ; +0.0028/−0.0022 at 2σ), Ω_k = 0.0035 (+0.0172/−0.0182 at 1σ; +0.0226/−0.0204 at 2σ), A_s = 0.753 (+0.037/−0.035 at 1σ; +0.045/−0.044 at 2σ), α = 0.043 (+0.102/−0.106 at 1σ; +0.134/−0.117 at 2σ), and H_0 = 70.00 (+3.25/−2.92 at 1σ; +3.77/−3.67 at 2σ), which is more stringent than previous constraints on the GCG model parameters. Furthermore, according to the information criterion, the current observations favor the ΛCDM model over the GCG model.
Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2005-02-07
Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on one hand and from observed data on the other hand is challenging. An example of such an inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach yields not only a single best-guess solution but also a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The MCMC procedure proposed explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam, with MCMC exploration carried out at three different resolutions.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2014-10-01
The expectation maximization (EM) algorithm is the standard training algorithm for hidden Markov models (HMMs). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem and proposes hybrid metaheuristic approaches to EM for HMMs. In our earlier research, a hybrid of a constraint-based evolutionary learning approach to EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is a mathematical reformulation of HMM estimation that introduces a stochastic step between the EM steps and combines SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid, EA-SASEM, which combines an evolutionary algorithm (EA) with the proposed SASEM. The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and the stochastic reformulation of HMM estimation. The complementary properties of EA and SA and the stochastic reformulation of SASEM give EA-SASEM sufficient potential to find better HMM estimates. To the best of our knowledge, this type of hybridization and mathematical reformulation has not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the TIMIT speech corpus. Experimental results show that the proposed approaches obtain higher recognition accuracies than both the EM algorithm and CEL-EM. PMID:24686310
Bibliometric Application of Markov Chains.
ERIC Educational Resources Information Center
Pao, Miranda Lee; McCreery, Laurie
1986-01-01
A rudimentary description of Markov Chains is presented in order to introduce its use to describe and to predict authors' movements among subareas of the discipline of ethnomusicology. Other possible applications are suggested. (Author)
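The rudimentary Markov-chain bookkeeping such a bibliometric study requires can be sketched as follows. The subarea labels and author sequences below are invented for illustration: transition probabilities are estimated from observed movements, and the most likely next subarea is read off the fitted matrix.

```python
from collections import Counter, defaultdict

# Hedged sketch: estimate a transition matrix between subareas from observed
# author sequences, then predict the most likely next subarea. Labels and
# sequences are invented, not from the ethnomusicology study.

def fit_transitions(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # count each observed a -> b move
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

sequences = [
    ["theory", "fieldwork", "fieldwork", "analysis"],
    ["fieldwork", "analysis", "theory"],
    ["theory", "fieldwork", "analysis"],
]
P = fit_transitions(sequences)
next_area = max(P["fieldwork"], key=P["fieldwork"].get)  # most likely move from "fieldwork"
```

With these toy sequences, three of the four observed moves out of "fieldwork" go to "analysis", so the fitted chain predicts "analysis" as the next subarea.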
Lin, Zhixiang; Li, Mingfeng; Sestan, Nenad; Zhao, Hongyu
2016-04-01
The statistical methodology developed in this study was motivated by our interest in studying neurodevelopment using the mouse brain RNA-Seq data set, where gene expression levels were measured in multiple layers in the somatosensory cortex across time in both female and male samples. We aim to identify differentially expressed genes between adjacent time points, which may provide insights on the dynamics of brain development. Because of the extremely small sample size (one male and one female at each time point), simple marginal analysis may be underpowered. We propose a Markov random field (MRF)-based approach to capitalize on the between-layer similarity, temporal dependency, and the similarity between sexes. The model parameters are estimated by an efficient EM algorithm with a mean field-like approximation. Simulation results and real data analysis suggest that the proposed model improves the power to detect differentially expressed genes compared with simple marginal analysis. Our method also reveals biologically interesting results in the mouse brain RNA-Seq data set. PMID:26926866
Benoit, Julia S; Chan, Wenyaw; Luo, Sheng; Yeh, Hung-Wen; Doody, Rachelle
2016-04-30
Understanding the dynamic disease process is vital in early detection, diagnosis, and measuring progression. Continuous-time Markov chain (CTMC) methods have been used to estimate state-change intensities but challenges arise when stages are potentially misclassified. We present an analytical likelihood approach where the hidden state is modeled as a three-state CTMC model allowing for some observed states to be possibly misclassified. Covariate effects of the hidden process and misclassification probabilities of the hidden state are estimated without information from a 'gold standard' as comparison. Parameter estimates are obtained using a modified expectation-maximization (EM) algorithm, and identifiability of CTMC estimation is addressed. Simulation studies and an application studying Alzheimer's disease caregiver stress-levels are presented. The method was highly sensitive to detecting true misclassification and did not falsely identify error in the absence of misclassification. In conclusion, we have developed a robust longitudinal method for analyzing categorical outcome data when classification of disease severity stage is uncertain and the purpose is to study the process' transition behavior without a gold standard. PMID:26782946
NASA Astrophysics Data System (ADS)
Haruna, T.; Nakajima, K.
2013-06-01
The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
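The permutation quantities above are all built from ordinal patterns of finite words. As a minimal illustration (order-2 patterns of a toy sequence of our own choosing, not the paper's excess entropy or transfer entropy rate), here is a plain permutation entropy estimate in bits:

```python
import math
from collections import Counter

# Hedged sketch: permutation entropy from the distribution of ordinal
# patterns of consecutive windows. Sequence and order are toy choices.

def permutation_entropy(x, order):
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: x[i + k]))  # rank pattern of window
        for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    return -sum((n / total) * math.log2(n / total) for n in patterns.values())

x = [4, 7, 9, 10, 6, 11, 3]
H = permutation_entropy(x, order=2)  # entropy of the up/down pattern distribution
```

Here four of the six consecutive pairs are increasing and two are decreasing, so the estimate is the entropy of the distribution (2/3, 1/3). The modified permutation partition discussed in the abstract refines this bookkeeping by also tracking equalities of symbols.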
Cordero-Grande, Lucilio; Vegas-Sánchez-Ferrero, Gonzalo; Casaseca-de-la-Higuera, Pablo; Alberola-López, Carlos
2012-04-01
This paper proposes a topology-preserving multiresolution elastic registration method based on a discrete Markov random field of deformations and a block-matching procedure. The method is applied to the object-based interpolation of tomographic slices. For that purpose, the fidelity of a given deformation to the data is established by a block-matching strategy based on intensity- and gradient-related features, the smoothness of the transformation is favored by an appropriate prior on the field, and the deformation is guaranteed to maintain the topology by imposing some hard constraints on the local configurations of the field. The resulting deformation is defined as the maximum a posteriori configuration. Additionally, the relative influence of the fidelity and smoothness terms is weighted by the unsupervised estimation of the field parameters. In order to obtain an unbiased interpolation result, the registration is performed both in the forward and backward directions, and the resulting transformations are combined by using the local information content of the deformation. The method is applied to magnetic resonance and computed tomography acquisitions of the brain and the torso. Quantitative comparisons offer an overall improvement in performance with respect to related works in the literature. Additionally, the application of the interpolation method to cardiac magnetic resonance images has shown that the removal of any of the main components of the algorithm results in a decrease in performance which has proven to be statistically significant. PMID:21997265
Geng, Bo; Zhou, Xiaobo; Zhu, Jinmin; Hung, Y S; Wong, Stephen T C
2008-04-01
Computational identification of missing enzymes plays a significant role in accurate and complete reconstruction of metabolic networks for both newly sequenced and well-studied organisms. For a metabolic reaction, given a set of candidate enzymes identified according to certain biological evidence, a powerful mathematical model is required to predict the actual enzyme(s) catalyzing the reaction. In this study, several plausible predictive methods are considered for the classification problem in missing enzyme identification, and comparisons are performed with the aim of identifying a method with better performance than the Bayesian model used in previous work. In particular, a regression model consisting of a linear term and a nonlinear term is proposed for the problem, in which the reversible jump Markov chain Monte Carlo (MCMC) learning technique (developed in [Andrieu C, de Freitas N, Doucet A. Robust full Bayesian learning for radial basis networks. 2001;13:2359-407.]) is adopted to estimate the model order and the parameters. We evaluated the models using known reactions in the bacteria Escherichia coli, Mycobacterium tuberculosis, Vibrio cholerae and Caulobacter crescentus, as well as one eukaryotic organism, Saccharomyces cerevisiae. Although support vector regression also exhibits comparable performance in this application, it was demonstrated that the proposed model achieves favorable prediction performance, particularly sensitivity, compared with the Bayesian method. PMID:17950040
Kelley, Nicholas W.; Vishal, V.; Krafft, Grant A.; Pande, Vijay S.
2008-01-01
Here, we present a novel computational approach for describing the formation of oligomeric assemblies at experimental concentrations and timescales. We propose an extension to the Markovian state model approach, where one includes low concentration oligomeric states analytically. This allows simulation on long timescales (seconds timescale) and at arbitrarily low concentrations (e.g., the micromolar concentrations found in experiments), while still using an all-atom model for protein and solvent. As a proof of concept, we apply this methodology to the oligomerization of an Aβ peptide fragment (Aβ21–43). Aβ oligomers are now widely recognized as the primary neurotoxic structures leading to Alzheimer’s disease. Our computational methods predict that Aβ trimers form at micromolar concentrations in 10 ms, while tetramers form 1000 times more slowly. Moreover, the simulation results predict specific intermonomer contacts present in the oligomer ensemble as well as putative structures for small molecular weight oligomers. Based on our simulations and statistical models, we propose a novel mutation to stabilize the trimeric form of Aβ in an experimentally verifiable manner. PMID:19063575
NASA Astrophysics Data System (ADS)
Frank, T. D.
2008-06-01
Some elementary properties and examples of Markov processes are reviewed. It is shown that the definition of the Markov property naturally leads to a classification of Markov processes into linear and nonlinear ones.
Borchani, Hanen; Bielza, Concha; Martínez-Martín, Pablo; Larrañaga, Pedro
2012-12-01
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables. PMID:22897950
Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain Monte Carlo Approach
Ramirez, A; Nitao, J; Hanley, W; Aines, R; Glaser, R; Sengupta, S; Dyer, K; Hickling, T; Daily, W
2004-09-21
We describe a stochastic inversion method for mapping subsurface regions where the electrical resistivity is changing. The technique combines prior information, electrical resistance data and forward models to produce subsurface resistivity models that are most consistent with all available data. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. Attractive features include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate, and (2) allow alternative model estimates to be identified, compared and ranked. Methods that monitor convergence and summarize important trends of the posterior distribution are introduced. Results from a physical model test and a field experiment were used to assess performance. The stochastic inversions presented provide useful estimates of the most probable location, shape, and volume of the changing region, and the most likely resistivity change. The proposed method is computationally expensive, requiring the use of extensive computational resources to make its application practical.
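The Metropolis step at the core of such a stochastic inversion can be sketched in a few lines; the toy one-parameter "resistivity change" posterior below is purely illustrative and not the authors' forward model:

```python
import math
import random

def metropolis(log_posterior, x0, proposal_sd=0.5, n_steps=5000, seed=0):
    """Random-walk Metropolis: accept a proposed move with prob min(1, p'/p)."""
    rng = random.Random(seed)
    x, lp = x0, log_posterior(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, proposal_sd)
        lp_new = log_posterior(x_new)
        if math.log(rng.random()) < lp_new - lp:   # Metropolis acceptance test
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy 1-D "resistivity change" posterior centered on 2.0 (illustrative only)
samples = metropolis(lambda r: -0.5 * (r - 2.0) ** 2, x0=0.0)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])   # discard burn-in
```

In the real method the log-posterior would combine the electrical-resistance data misfit with the prior information, but the accept/reject logic is the same.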
Markov Analysis of Sleep Dynamics
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.
2009-05-01
A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
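A transition matrix of this kind can be estimated directly from a scored state sequence by counting transitions and normalizing rows. The sketch below uses a hypothetical three-state epoch sequence, not the authors' hypnogram data:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic Markov transition matrix estimated from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0        # leave unvisited states as zero rows
    return counts / row_sums

# Hypothetical scored epochs: 0 = wake, 1 = light sleep, 2 = deep sleep
epochs = [0, 0, 1, 1, 2, 2, 1, 0, 1, 2, 2, 1]
P = transition_matrix(epochs, 3)         # P[i, j] = Pr(next state j | state i)
```

State-duration distributions then follow from the diagonal self-transition probabilities, which is what makes the modified-exponential result testable from such a matrix.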
Isojunno, Saana; Miller, Patrick J O
2016-01-01
The biological consequences of behavioral responses to anthropogenic noise depend on context. We explore the links between individual motivation, condition, and external constraints in a concept model and illustrate the use of motivational-behavioral states as a means to quantify the biologically relevant effects of tagging. Behavioral states were estimated from multiple streams of data in a hidden Markov model and used to test the change in foraging effort and the change in energetic success or cost given the effort. The presence of a tag boat elicited a short-term reduction in time spent in foraging states but not for proxies for success or cost within foraging states. PMID:26610996
An Approach to World Order Studies and the World System.
ERIC Educational Resources Information Center
Falk, Richard; Kim, Samuel S.
This paper discusses an approach to world order studies which can be used to supplant traditional international relations courses. It is one of a series of working papers commissioned by the World Order Models Project (WOMP) in its effort to stimulate research, education, dialogue, and political action aimed at contributing to a movement for a…
Zhang, Lixian; Ning, Zepeng; Shi, Peng
2015-11-01
This paper is concerned with the H∞ control problem for a class of discrete-time Takagi-Sugeno fuzzy Markov jump systems with time-varying delays under unreliable communication links. It is assumed that the data transmission between the plant and the controller is subject to randomly occurring packet dropouts satisfying a Bernoulli distribution, and that the dropout rate is uncertain. Based on a fuzzy-basis-dependent and mode-dependent Lyapunov function, the existence conditions of the desired H∞ state-feedback controllers are derived by employing the scaled small gain theorem such that the closed-loop system is stochastically stable and achieves a guaranteed H∞ performance. The gains of the controllers are constructed by solving a set of linear matrix inequalities. Finally, a practical example of a robot arm is provided to illustrate the performance of the proposed approach. PMID:26470060
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Narasimhan, Vagheesh; Danecek, Petr; Scally, Aylwyn; Xue, Yali; Tyler-Smith, Chris; Durbin, Richard
2016-01-01
Summary: Runs of homozygosity (RoHs) are genomic stretches of a diploid genome that show identical alleles on both chromosomes. Longer RoHs are unlikely to have arisen by chance but are likely to denote autozygosity, whereby both copies of the genome descend from the same recent ancestor. Early tools to detect RoH used genotype array data, but substantially more information is available from sequencing data. Here, we present and evaluate BCFtools/RoH, an extension to the BCFtools software package, that detects regions of autozygosity in sequencing data, in particular exome data, using a hidden Markov model. By applying it to simulated data and real data from the 1000 Genomes Project we estimate its accuracy and show that it has higher sensitivity and specificity than existing methods under a range of sequencing error rates and levels of autozygosity. Availability and implementation: BCFtools/RoH and its associated binary/source files are freely available from https://github.com/samtools/BCFtools. Contact: vn2@sanger.ac.uk or pd3@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26826718
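The core of such an HMM-based detector is a decoding step that assigns each site to an autozygous or non-autozygous state. The sketch below is a generic log-space Viterbi decoder over hypothetical genotype calls; the state names, probabilities, and emission scheme are illustrative and do not reproduce BCFtools/RoH's actual model:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely hidden-state path, computed in log space."""
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            col[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s][o]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Illustrative 2-state model: 'RoH' = autozygous, 'non' = non-autozygous;
# observations are per-site genotype calls ('hom'/'het').
L = math.log
states = ("RoH", "non")
calls = ["hom"] * 10 + ["het"] + ["hom"] * 10
path = viterbi(
    calls, states,
    log_start={"RoH": L(0.5), "non": L(0.5)},
    log_trans={"RoH": {"RoH": L(0.95), "non": L(0.05)},
               "non": {"RoH": L(0.05), "non": L(0.95)}},
    log_emit={"RoH": {"hom": L(0.99), "het": L(0.01)},
              "non": {"hom": L(0.70), "het": L(0.30)}},
)
```

With these made-up parameters a single heterozygous call inside a long homozygous run is bridged rather than breaking the run, which is the behavior that distinguishes an HMM from simple windowed counting.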
Borodovsky, M.
2013-04-11
Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
Coupled two-order-parameter approach to granular superconductors
Falci, G.; Fazio, R.; Giaquinta, G.
1989-05-01
A two-order-parameter approach to the theory of granular superconductors is presented, in which the paracoherent transition and the superconductive one are treated while accounting for their mutual coupling. The appearance of a tricritical line in the phase diagram is discussed. Renormalization of the amplitude of the order parameter is predicted in the whole temperature range, leading to a satisfactory understanding of previously unexplained experimental data.
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
NASA Astrophysics Data System (ADS)
Farr, Benjamin; Kalogera, Vicky; Luijten, Erik
2014-07-01
We introduce a new Markov-chain Monte Carlo (MCMC) approach designed for the efficient sampling of highly correlated and multimodal posteriors. Parallel tempering, though effective, is a costly technique for sampling such posteriors. Our approach minimizes the use of parallel tempering, only applying it for a short time to build a proposal distribution that is based upon estimation of the kernel density and tuned to the target posterior. This proposal makes subsequent use of parallel tempering unnecessary, allowing all chains to be cooled to sample the target distribution. Gains in efficiency are found to increase with increasing posterior complexity, ranging from tens of percent in the simplest cases to over a factor of 10 for the more complex cases. Our approach is particularly useful in the context of parameter estimation of gravitational-wave signals measured by ground-based detectors, which is currently done through Bayesian inference with MCMC, one of the leading sampling methods. Posteriors for these signals are typically multimodal with strong nonlinear correlations, making sampling difficult. As we enter the advanced-detector era, improved sensitivities and wider bandwidths will drastically increase the computational cost of analyses, demanding more efficient search algorithms to meet these challenges.
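The idea of replacing most of the parallel tempering with a kernel-density proposal can be illustrated with an independence Metropolis-Hastings sampler: a pilot sample stands in for the short tempered run, and a Gaussian KDE fitted to it serves as the proposal. Everything below (the bimodal toy target, the pilot draws, the tuning constants) is illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def log_post(x):
    # Toy bimodal target (a stand-in for a multimodal GW posterior)
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

# Pilot sample standing in for a short parallel-tempering run
pilot = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
kde = gaussian_kde(pilot)                 # proposal tuned to the target

# Independence Metropolis-Hastings using the KDE as the proposal density
x, samples = 0.0, []
for _ in range(4000):
    x_new = kde.resample(1, seed=rng)[0, 0]
    log_alpha = (log_post(x_new) + kde.logpdf([x])[0]
                 - log_post(x) - kde.logpdf([x_new])[0])
    if np.log(rng.random()) < log_alpha:
        x = x_new
    samples.append(x)
```

Because the proposal already covers both modes, the chain hops between them freely, which a random-walk proposal of moderate width would not do.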
NASA Technical Reports Server (NTRS)
Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.
2013-01-01
Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow but that this derives from the degree
Anatomy Ontology Matching Using Markov Logic Networks
Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming
2016-01-01
The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a way to find semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498
Stroeer, Alexander; Veitch, John
2009-09-15
The Laser Interferometer Space Antenna (LISA) places new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources like binary black hole mergers and extreme-mass ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals "out of the box", handling the total number of signals as an additional unknown parameter besides the unknown parameters of each individual source and the noise floor. We show, in examples from the LISA Mock Data Challenge implementing the full response of LISA in its TDI description, that this sampler is able to extract monochromatic Double White Dwarf signals out of colored instrumental noise and additional foreground and background noise successfully in a global fitting approach. We introduce two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea behind an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the observed runtimes and degeneracies in parameter extraction limit the examples shown to the extraction of a low but realistic number of signals.
Ma, Junsheng; Chan, Wenyaw; Tsai, Chu-Lin; Xiong, Momiao; Tilley, Barbara C
2015-11-30
Continuous time Markov chain (CTMC) models are often used to study the progression of chronic diseases in medical research but rarely applied to studies of the process of behavioral change. In studies of interventions to modify behaviors, a widely used psychosocial model is based on the transtheoretical model that often has more than three states (representing stages of change) and conceptually permits all possible instantaneous transitions. Very little attention is given to the study of the relationships between a CTMC model and associated covariates under the framework of transtheoretical model. We developed a Bayesian approach to evaluate the covariate effects on a CTMC model through a log-linear regression link. A simulation study of this approach showed that model parameters were accurately and precisely estimated. We analyzed an existing data set on stages of change in dietary intake from the Next Step Trial using the proposed method and the generalized multinomial logit model. We found that the generalized multinomial logit model was not suitable for these data because it ignores the unbalanced data structure and temporal correlation between successive measurements. Our analysis not only confirms that the nutrition intervention was effective but also provides information on how the intervention affected the transitions among the stages of change. We found that, compared with the control group, subjects in the intervention group, on average, spent substantively less time in the precontemplation stage and were more/less likely to move from an unhealthy/healthy state to a healthy/unhealthy state. PMID:26123093
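In a CTMC, transition probabilities over an elapsed time t follow from the generator matrix as P(t) = exp(Qt), and a log-linear link lets covariates scale the instantaneous rates. A minimal sketch, with hypothetical three-stage rates and an illustrative covariate effect:

```python
import numpy as np
from scipy.linalg import expm

def generator(log_rates):
    """CTMC generator Q from a matrix of off-diagonal log-rates."""
    Q = np.exp(log_rates)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # each row of a generator sums to 0
    return Q

# Hypothetical 3-stage model; diagonal entries of beta0 are placeholders
beta0 = np.log([[1.0, 0.5, 0.1],
                [0.3, 1.0, 0.6],
                [0.1, 0.4, 1.0]])
beta1, z = 0.5, 1.0                        # illustrative covariate effect
Q = generator(beta0 + beta1 * z)           # log-linear link: q_ij = exp(b0_ij + b1*z)
P = expm(Q * 0.5)                          # stage-transition probabilities at t = 0.5
```

Fitting beta0 and beta1 to interval-censored stage observations is what the Bayesian machinery in the paper handles; the link and propagation above are the deterministic core.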
Indexed semi-Markov process for wind speed modeling.
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
Markov chain with different numbers of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. In a previous work we proposed different semi-Markov models, showing their ability to reproduce the autocorrelation structures of wind speed data. In that paper we also showed that the autocorrelation is higher than with the Markov model. Unfortunately this autocorrelation was still too small compared to the empirical one. In order to overcome the problem of low autocorrelation, in this paper we propose an indexed semi-Markov model. More precisely, we assume that wind speed is described by a discrete-time homogeneous semi-Markov process. We introduce a memory index which takes into account the periods of different wind activities. With this model the statistical characteristics of wind speed are faithfully reproduced. The wind is a very unstable phenomenon characterized by a sequence of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the predictive semi-Markovian model, the persistence of synthetic winds was calculated, then averaged. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated through a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating
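For contrast with the semi-Markov generators discussed above, the first-order Markov chain baseline of reference [1] can be sketched in a few lines; the transition matrix and bin speeds below are invented for illustration, not estimated from the authors' data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical transition matrix over 3 wind-speed bins (lull/moderate/sustained);
# a real matrix would be estimated from observed wind speed data.
P = np.array([[0.70, 0.25, 0.05],
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
bin_speeds = np.array([1.0, 5.0, 10.0])   # representative speed per bin (m/s)

def markov_path(P, n, state=0):
    """Simulate a state path from a first-order Markov chain."""
    path = np.empty(n, dtype=int)
    for t in range(n):
        path[t] = state
        state = rng.choice(len(P), p=P[state])
    return path

synthetic_speeds = bin_speeds[markov_path(P, 1000)]
```

A semi-Markov generator differs from this sketch in drawing an explicit sojourn time per visited state instead of geometric self-loop durations, which is what lifts the autocorrelation toward the empirical one.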
Structure of ordered coaxial and scroll nanotubes: general approach.
Khalitov, Zufar; Khadiev, Azat; Valeeva, Diana; Pashin, Dmitry
2016-01-01
The explicit formulas for atomic coordinates of multiwalled coaxial and cylindrical scroll nanotubes with ordered structure are developed on the basis of a common oblique lattice. According to this approach, a nanotube is formed by transfer of its bulk analogue structure onto a cylindrical surface (with a circular or spiral cross section) and the chirality indexes of the tube are expressed in the number of unit cells. The monoclinic polytypic modifications of ordered coaxial and scroll nanotubes are also discussed and geometrical conditions of their formation are analysed. It is shown that tube radii of ordered multiwalled coaxial nanotubes are multiples of the layer thickness, and the initial turn radius of the orthogonal scroll nanotube is a multiple of the same parameter or its half. PMID:26697865
Speckle reduction via higher order total variation approach.
Wensen Feng; Hong Lei; Yang Gao
2014-04-01
Multiplicative noise (also known as speckle) reduction is a prerequisite for many image-processing tasks in coherent imaging systems, such as the synthetic aperture radar. One approach extensively used in this area is based on total variation (TV) regularization, which can recover significantly sharp edges of an image, but suffers from the staircase-like artifacts. In order to overcome the undesirable deficiency, we propose two novel models for removing multiplicative noise based on total generalized variation (TGV) penalty. The TGV regularization has been mathematically proven to be able to eliminate the staircasing artifacts by being aware of higher order smoothness. Furthermore, an efficient algorithm is developed for solving the TGV-based optimization problems. Numerical experiments demonstrate that our proposed methods achieve state-of-the-art results, both visually and quantitatively. In particular, when the image has some higher order smoothness, our methods outperform the TV-based algorithms. PMID:24808350
Stochastic seismic tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Bottero, Alexis; Gesret, Alexandrine; Romary, Thomas; Noble, Mark; Maisons, Christophe
2016-07-01
Markov chain Monte Carlo sampling methods are widely used for non-linear Bayesian inversion where no analytical expression for the forward relation between data and model parameters is available. Contrary to the linear(ized) approaches, they naturally allow one to evaluate the uncertainties on the model found. Nevertheless their use is problematic in high dimensional model spaces, especially when the computational cost of the forward problem is significant and/or the a posteriori distribution is multimodal. In this case the chain can get stuck in one of the modes and hence fail to provide an exhaustive sampling of the distribution of interest. We present here a still relatively unknown algorithm that allows interaction between several Markov chains at different temperatures. These interactions (based on Importance Resampling) ensure a robust sampling of any posterior distribution and thus provide a way to efficiently tackle complex fully nonlinear inverse problems. The algorithm is easy to implement and is well adapted to run on parallel supercomputers. In this paper the algorithm is first introduced and applied to a synthetic multimodal distribution in order to demonstrate its robustness and efficiency compared to a Simulated Annealing method. It is then applied in the framework of first arrival traveltime seismic tomography on real data recorded in the context of hydraulic fracturing. To carry out this study a wavelet-based adaptive model parametrization has been used. This allows one to integrate the a priori information provided by sonic logs and to optimally reduce the dimension of the problem.
Yamamoto, Toshiyuki; Shimojima, Keiko; Ondo, Yumiko; Imai, Katsumi; Chong, Pin Fee; Kira, Ryutaro; Amemiya, Mitsuhiro; Saito, Akira; Okamoto, Nobuhiko
2016-01-01
Next-generation sequencing (NGS) is widely used for the detection of disease-causing nucleotide variants. The challenges associated with detecting copy number variants (CNVs) using NGS analysis have been reported previously. Disease-related exome panels such as Illumina TruSight One are more cost-effective than whole-exome sequencing (WES) because of their selective target regions (~21% of the WES). In this study, CNVs were analyzed using data extracted through a disease-related exome panel analysis and the eXome Hidden Markov Model (XHMM). Samples from 61 patients with undiagnosed developmental delays and 52 healthy parents were included in this study. In the preliminary study to validate the constructed XHMM system (microarray-first approach), 34 patients who had previously been analyzed by chromosomal microarray testing were used. Among the five CNVs larger than 200 kb that were considered non-pathogenic CNVs and were used as positive controls, four CNVs were successfully detected. The system was subsequently used to analyze different samples from 27 patients (NGS-first approach); two of these patients were successfully diagnosed as having pathogenic CNVs (an unbalanced translocation der(5)t(5;14) and a 16p11.2 duplication). These diagnoses were re-confirmed by chromosomal microarray testing and/or fluorescence in situ hybridization. The NGS-first approach generated no false-negative or false-positive results for pathogenic CNVs, indicating its high sensitivity and specificity in detecting pathogenic CNVs. The results of this study show the possible clinical utility of pathogenic CNV screening using disease-related exome panel analysis and XHMM. PMID:27579173
A new approach for second-order perturbation theory.
Tomlinson, David G; Asadchev, Andrey; Gordon, Mark S
2016-05-30
A new second-order perturbation theory (MP2) approach is presented for closed shell energy evaluations. The new algorithm has a significantly lower memory footprint, a lower FLOP (floating point operations) count, and a lower time to solution compared to previously implemented parallel MP2 methods in the GAMESS software package. Additionally, this algorithm features an adaptive approach for the disk/distributed memory storage of the MP2 amplitudes. The algorithm works well on a single workstation, small cluster, and large Cray cluster, and it allows one to perform large calculations with thousands of basis functions in a matter of hours on a single workstation. The same algorithm has been adapted for graphical processing unit (GPU) architecture. The performance of the new GPU algorithm is also discussed. © 2016 Wiley Periodicals, Inc. PMID:26940648
Markov state models of protein misfolding.
Sirur, Anshul; De Sancho, David; Best, Robert B
2016-02-21
Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity. PMID:26897000
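The effect of a "sink" state can be illustrated with a toy three-state MSM in which the misfolded state is absorbing, so probability mass accumulates there over repeated lag times. The transition matrix below is invented for illustration and is not derived from the titin simulations:

```python
import numpy as np

# Hypothetical 3-state MSM: 0 = folded, 1 = partially unfolded, 2 = misfolded sink.
# The last row is absorbing: on simulation timescales misfolding is irreversible.
T = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.94, 0.01],
              [0.00, 0.00, 1.00]])

p = np.array([1.0, 0.0, 0.0])     # all population starts in the folded state
for _ in range(200):              # propagate the MSM for 200 lag times
    p = p @ T

misfolded_fraction = p[2]         # mass accumulated in the sink
```

Because the sink has no outgoing transitions, the model has no equilibrium flux back out of the misfolded ensemble, mirroring the irreversibility the authors build into their MSMs.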
Gao, Hong; Williamson, Scott; Bustamante, Carlos D.
2007-01-01
Nonrandom mating induces correlations in allelic states within and among loci that can be exploited to understand the genetic structure of natural populations (Wright 1965). For many species, it is of considerable interest to quantify the contribution of two forms of nonrandom mating to patterns of standing genetic variation: inbreeding (mating among relatives) and population substructure (limited dispersal of gametes). Here, we extend the popular Bayesian clustering approach STRUCTURE (Pritchard et al. 2000) for simultaneous inference of inbreeding or selfing rates and population-of-origin classification using multilocus genetic markers. This is accomplished by eliminating the assumption of Hardy–Weinberg equilibrium within clusters and, instead, calculating expected genotype frequencies on the basis of inbreeding or selfing rates. We demonstrate the need for such an extension by showing that selfing leads to spurious signals of population substructure using the standard STRUCTURE algorithm with a bias toward spurious signals of admixture. We gauge the performance of our method using extensive coalescent simulations and demonstrate that our approach can correct for this bias. We also apply our approach to understanding the population structure of the wild relative of domesticated rice, Oryza rufipogon, an important partially selfing grass species. Using a sample of n = 16 individuals sequenced at 111 random loci, we find strong evidence for existence of two subpopulations, which correlates well with geographic location of sampling, and estimate selfing rates for both groups that are consistent with estimates from experimental data (s ≈ 0.48–0.70). PMID:17483417
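The key modeling change described above, replacing Hardy-Weinberg genotype frequencies with inbreeding-adjusted ones, can be sketched with the standard population-genetics formulas; the function names and the example selfing rate are illustrative, not taken from the STRUCTURE extension itself.

```python
def genotype_freqs(p, F):
    """Expected genotype frequencies at a biallelic locus with allele
    frequency p and inbreeding coefficient F (F=0 recovers Hardy-Weinberg)."""
    q = 1.0 - p
    return (p * p + F * p * q,      # AA
            2 * p * q * (1 - F),    # Aa
            q * q + F * p * q)      # aa

def F_from_selfing(s):
    """Equilibrium inbreeding coefficient under partial selfing rate s."""
    return s / (2.0 - s)

# e.g. a selfing rate in the range estimated for O. rufipogon (s ~ 0.6)
print(genotype_freqs(0.3, F_from_selfing(0.6)))
```

Heterozygote deficit relative to Hardy-Weinberg is exactly the signal that, if unmodeled, mimics population substructure.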
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters (which use EM), such as the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM). PMID:19068441
NASA Astrophysics Data System (ADS)
MacBean, Natasha; Disney, Mathias; Lewis, Philip; Ineson, Phil
2010-05-01
profile as a whole. We present results from an Observing System Simulation Experiment (OSSE) designed to investigate the impact of management and climate change on peatland carbon fluxes, as well as how observations from satellites may be able to constrain modeled carbon fluxes. We use an adapted version of the Carnegie-Ames-Stanford Approach (CASA) model (Potter et al., 1993) that includes a representation of methane dynamics (Potter, 1997). The model formulation is further modified to allow for assimilation of satellite observations of surface soil moisture and land surface temperature. The observations are used to update model estimates using a Metropolis-Hastings Markov Chain Monte Carlo (MCMC) approach. We examine the effect of temporal frequency and precision of satellite observations with a view to establishing how, and at what level, such observations would make a significant improvement in model uncertainty. We compare this with the system characteristics of existing and future satellites. We believe this is the first attempt to assimilate surface soil moisture and land surface temperature into an ecosystem model that includes a full representation of CH4 flux. Bubier, J., and T. Moore (1994), An ecological perspective on methane emissions from northern wetlands, TREE, 9, 460-464. Charman, D. (2002), Peatlands and Environmental Change, John Wiley and Sons, Ltd, England. Gorham, E. (1991), Northern peatlands: Role in the carbon cycle and probable responses to climatic warming, Ecological Applications, 1, 182-195. Lai, D. (2009), Methane dynamics in northern peatlands: A review, Pedosphere, 19, 409-421. Le Mer, J., and P. Roger (2001), Production, oxidation, emission and consumption of methane by soils: A review, European Journal of Soil Biology, 37, 25-50. Limpens, J., F. Berendse, J. Canadell, C. Freeman, J. Holden, N. Roulet, H. Rydin, and Potter, C.
(1997), An ecosystem simulation model for methane production and emission from wetlands, Global Biogeochemical
Fridman, Arthur
2003-01-01
Markov random fields can encode complex probabilistic relationships involving multiple variables and admit efficient procedures for probabilistic inference. However, from a knowledge engineering point of view, these models suffer from a serious limitation. The graph of a Markov field must connect all pairs of variables that are conditionally dependent even for a single choice of values of the other variables. This makes it hard to encode interactions that occur only in a certain context and are absent in all others. Furthermore, the requirement that two variables be connected unless always conditionally independent may lead to excessively dense graphs, obscuring the independencies present among the variables and leading to computationally prohibitive inference algorithms. Mumford [Mumford, D. (1996) in ICIAM 95, eds. Kirchgassner, K., Marenholtz, O. & Mennicken, R. (Akademie Verlag, Berlin), pp. 233–256] proposed an alternative modeling framework where the graph need not be rigid and completely determined a priori. Mixed Markov models contain node-valued random variables that, when instantiated, augment the graph by a set of transient edges. A single joint probability distribution relates the values of regular and node-valued variables. In this article, we study the analytical and computational properties of mixed Markov models. In particular, we show that positive mixed models have a local Markov property that is equivalent to their global factorization. We also describe a computationally efficient procedure for answering probabilistic queries in mixed Markov models. PMID:12829802
Zeroth-order regular approximation approach to molecular parity violation
Berger, Robert; Langermann, Norbert; Wuellen, Christoph van
2005-04-01
We present an ab initio (quasirelativistic) two-component approach to the computation of molecular parity-violating effects which is based on the zeroth-order regular approximation (ZORA). As a first application, we compute the parity-violating energy differences between various P and M conformations of C{sub 2}-symmetric molecules belonging to the series H{sub 2}X{sub 2} with X=O, S, Se, Te, Po. The results are compared to previously reported (relativistic) four-component Dirac-Hartree-Fock-Coulomb (DHFC) data. Relative deviations between ZORA and DHFC values are well below 2% for diselane and the heavier homologs whereas somewhat larger relative deviations are observed for the lighter homologs. The larger deviations for lighter systems are attributed to the (nonlocal) exchange terms coupling large and small components, which have been neglected in the present ZORA implementation. For heavier systems these play a minor role, which explains the good performance of the ZORA approach. An excellent performance, even for lighter systems, is expected for a related density-functional-theory-based ZORA because then the exchange terms coupling large and small components are absent.
Jacob, Benjamin G; Griffith, Daniel A; Muturi, Ephantus J; Caamano, Erick X; Githure, John I; Novak, Robert J
2009-01-01
configuration matrices were then used to define expectations for prior distributions using a Markov chain Monte Carlo (MCMC) algorithm. A set of posterior means were defined in WinBUGS 1.4.3®. After the model had converged, samples from the conditional distributions were used to summarize the posterior distribution of the parameters. Thereafter, a spatial residual trend analysis was used to evaluate variance uncertainty propagation in the model using an autocovariance error matrix. Results By specifying coefficient estimates in a Bayesian framework, the covariate number of tillers was found to be a significant predictor, positively associated with An. arabiensis aquatic habitats. The spatial filter models accounted for approximately 19% redundant locational information in the ecological sampled An. arabiensis aquatic habitat data. In the residual error estimation model there was significant positive autocorrelation (i.e., clustering of habitats in geographic space) based on log-transformed larval/pupal data and the sampled covariate depth of habitat. Conclusion An autocorrelation error covariance matrix and a spatial filter analysis can prioritize mosquito control strategies by providing a computationally attractive and feasible description of variance uncertainty estimates for correctly identifying clusters of prolific An. arabiensis aquatic habitats based on larval/pupal productivity. PMID:19772590
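The Metropolis-Hastings MCMC machinery invoked in the abstract above can be shown in a minimal random-walk form. This is a generic sketch (WinBUGS has its own samplers), and the standard-normal target below is a stand-in for a real posterior.

```python
import math, random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: draws samples whose distribution
    converges to exp(log_post)."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)          # symmetric Gaussian proposal
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept w.p. min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)                      # rejected moves repeat x
    return samples

# Stand-in target: standard normal log-density (up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
print(sum(draws) / len(draws))   # posterior-mean estimate, near 0
```

Posterior summaries such as the "posterior means" mentioned above are then just averages over the retained draws, usually after discarding a burn-in prefix.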
Zhou, Ping; Bai, Rongji
2014-01-01
Based on a new stability result for equilibrium points in nonlinear fractional-order systems with fractional order lying in 1 < q < 2, an adaptive synchronization approach is established. Adaptive synchronization of the fractional-order Lorenz chaotic system with fractional order 1 < q < 2 is considered. Numerical simulations show the validity and feasibility of the proposed scheme. PMID:25247207
Leng, Ning; Li, Yuan; McIntosh, Brian E.; Nguyen, Bao Kim; Duffin, Bret; Tian, Shulan; Thomson, James A.; Dewey, Colin N.; Stewart, Ron; Kendziorski, Christina
2015-01-01
Motivation: With improvements in next-generation sequencing technologies and reductions in price, ordered RNA-seq experiments are becoming common. Of primary interest in these experiments is identifying genes that are changing over time or space, for example, and then characterizing the specific expression changes. A number of robust statistical methods are available to identify genes showing differential expression among multiple conditions, but most assume conditions are exchangeable and thereby sacrifice power and precision when applied to ordered data. Results: We propose an empirical Bayes mixture modeling approach called EBSeq-HMM. In EBSeq-HMM, an auto-regressive hidden Markov model is implemented to accommodate dependence in gene expression across ordered conditions. As demonstrated in simulation and case studies, the output proves useful in identifying differentially expressed genes and in specifying gene-specific expression paths. EBSeq-HMM may also be used for inference regarding isoform expression. Availability and implementation: An R package containing examples and sample datasets is available at Bioconductor. Contact: kendzior@biostat.wisc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25847007
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data. PMID:26353069
Fractional System Identification: An Approach Using Continuous Order-Distributions
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Lorenzo, Carl F.
1999-01-01
This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.
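As a toy illustration of frequency-domain identification of a fractional order (not the paper's continuous order-distribution machinery), a single-term fractional integrator H(s) = 1/s^q has log-magnitude slope -q, so least squares on log-log Bode data recovers the order. The system and frequencies below are hypothetical.

```python
import numpy as np

# Hypothetical fractional integrator H(s) = 1/s^q with q = 0.7.
q_true = 0.7
w = np.logspace(-1, 2, 50)               # frequency grid, rad/s
mag = np.abs(1.0 / (1j * w) ** q_true)   # |H(jw)| = w^(-q)

# Least-squares line fit in log-log coordinates: slope = -q.
slope, _ = np.polyfit(np.log10(w), np.log10(mag), 1)
q_est = -slope
print(q_est)
```

A continuous order-distribution generalizes this single exponent to a weighting over a range of orders, fit by the same least-squares principle against measured frequency-response data.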
Performability analysis using semi-Markov reward processes
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.
1990-01-01
Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
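The replacement of zero-reward nonabsorbing states by a "probabilistic switch," as described above, is the same elimination used for vanishing states in GSPN analysis. A minimal sketch on a toy 3-state chain (the matrix is illustrative, and the standard reduction formula is used rather than the paper's semi-Markov algorithm):

```python
import numpy as np

# Toy DTMC: states 0,1 carry nonzero reward ("tangible"); state 2 is a
# zero-reward nonabsorbing state to be eliminated.
P = np.array([
    [0.5, 0.2, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.6, 0.0],
])
tang, van = [0, 1], [2]
Ptt = P[np.ix_(tang, tang)]
Ptv = P[np.ix_(tang, van)]
Pvt = P[np.ix_(van, tang)]
Pvv = P[np.ix_(van, van)]

# Probabilistic switch: P' = Ptt + Ptv (I - Pvv)^(-1) Pvt
Pred = Ptt + Ptv @ np.linalg.solve(np.eye(len(van)) - Pvv, Pvt)
print(Pred)
```

The reduced matrix redistributes the flow that passed through the eliminated state, and its rows still sum to one.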
Evaluation of Usability Utilizing Markov Models
ERIC Educational Resources Information Center
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
Homogeneous Superpixels from Markov Random Walks
NASA Astrophysics Data System (ADS)
Perbet, Frank; Stenger, Björn; Maki, Atsuto
This paper presents a novel algorithm to generate homogeneous superpixels from Markov random walks. We exploit Markov clustering (MCL) as the methodology, a generic graph clustering method based on stochastic flow circulation. In particular, we introduce a graph pruning strategy called compact pruning in order to capture intrinsic local image structure. The resulting superpixels are homogeneous, i.e. uniform in size and compact in shape. The original MCL algorithm does not scale well to a graph of an image due to the squaring of the Markov matrix that is necessary for circulating the flow. The proposed pruning scheme has the advantages of faster computation, smaller memory footprint, and straightforward parallel implementation. Through comparisons with other recent techniques, we show that the proposed algorithm achieves state-of-the-art performance.
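The MCL flow-circulation loop referenced above alternates expansion (matrix squaring), inflation (elementwise powering plus renormalization), and pruning of small entries. A minimal dense sketch on a toy graph follows; the paper's contribution, compact pruning on image graphs, is not reproduced here, only the generic threshold pruning it refines.

```python
import numpy as np

def mcl(A, r=2.0, n_iter=50, prune=1e-5):
    """Minimal Markov clustering on an adjacency matrix A (with self
    loops): alternate expansion and inflation on a column-stochastic
    matrix, dropping entries below `prune` each round."""
    M = A / A.sum(axis=0, keepdims=True)     # column-normalize
    for _ in range(n_iter):
        M = M @ M                            # expansion: spread flow
        M = M ** r                           # inflation: sharpen flow
        M[M < prune] = 0.0                   # pruning: keep M sparse
        M = M / M.sum(axis=0, keepdims=True)
    return M

# Two triangles joined by a single edge (self loops included).
A = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1]], float)
M = mcl(A)
print(M)   # clusters are read off from the surviving column supports
```

The `M @ M` step is the quadratic bottleneck the abstract mentions; pruning keeps the matrix sparse enough to make large image graphs tractable.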
Document Ranking Based upon Markov Chains.
ERIC Educational Resources Information Center
Danilowicz, Czeslaw; Balinski, Jaroslaw
2001-01-01
Considers how the order of documents in information retrieval responses is determined and introduces a method that uses a probabilistic model of a document set where documents are regarded as states of a Markov chain and where transition probabilities are directly proportional to similarities between documents. (Author/LRW)
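Ranking documents by such a chain amounts to computing its stationary distribution. A minimal sketch with a hypothetical symmetric similarity matrix (for a symmetric matrix, stationary mass is proportional to row sums, which gives an easy check):

```python
import numpy as np

# Hypothetical pairwise document similarities (symmetric, zero diagonal).
S = np.array([[0.0, 0.8, 0.3],
              [0.8, 0.0, 0.5],
              [0.3, 0.5, 0.0]])

# Transition probabilities proportional to similarities: row-normalize.
P = S / S.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

ranking = np.argsort(-pi)   # most "central" documents first
print(pi, ranking)
```

For this reversible chain, document 1 ranks first because it has the largest total similarity to the rest of the set.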
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
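The expected steady-state reward rate described above reduces to solving the global balance equations and taking a reward-weighted average. A minimal sketch for the two-state availability example (reward 1 up, 0 down); the failure and repair rates are assumed for illustration:

```python
import numpy as np

# Two-state availability model: state 0 = up (reward 1), 1 = down (reward 0).
lam, mu = 0.01, 0.5            # assumed failure / repair rates
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])   # CTMC infinitesimal generator
r = np.array([1.0, 0.0])       # reward rate per state

# Steady state: pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi @ r)   # expected steady-state reward = availability mu/(lam+mu)
```

With reward rates set to capacities instead of 0/1, the same dot product yields the combined performance-reliability (performability) measure.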
An extensive Markov system for ECG exact coding.
Tai, S C
1995-02-01
In this paper, an extensive Markov process, which considers both the coding redundancy and the intersample redundancy, is presented to measure the entropy value of an ECG signal more accurately. It utilizes the intersample correlations by predicting the incoming n samples based on the previous m samples which constitute an extensive Markov process state. Theories of the extensive Markov process and conventional n repeated applications of m-th order Markov process are studied first in this paper. After that, they are realized for ECG exact coding. Results show that a better performance can be achieved by our system. The average code length for the extensive Markov system on the second difference signals was 2.512 b/sample, while the average Huffman code length for the second difference signals was 3.326 b/sample. PMID:7868151
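The entropy measurement underlying such coding can be sketched by estimating the conditional entropy of an order-m Markov model from context counts; the achievable code length of a conditioning entropy coder approaches this value. The sequences below are synthetic stand-ins, not ECG data.

```python
from collections import Counter
from math import log2

def markov_entropy(seq, m):
    """Empirical conditional entropy (bits/sample) of an order-m Markov
    model of seq: H = sum over (context, symbol) of p(ctx, s) * -log2 p(s|ctx)."""
    n = len(seq) - m
    ctx = Counter(tuple(seq[i:i + m]) for i in range(n))
    joint = Counter(tuple(seq[i:i + m + 1]) for i in range(n))
    return sum((c / n) * log2(ctx[k[:m]] / c) for k, c in joint.items())

# A strictly alternating signal is fully predictable at order 1:
print(markov_entropy([0, 1] * 500, 1))
# A period-4 signal needs order 2 to become predictable:
print(markov_entropy([0, 0, 1, 1] * 250, 2))
```

Raising the order (or predicting blocks of n samples, as the extensive Markov process does) can only lower this entropy estimate, at the cost of exponentially more states.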
General and specific consciousness: a first-order representationalist approach
Mehta, Neil; Mashour, George A.
2013-01-01
It is widely acknowledged that a complete theory of consciousness should explain general consciousness (what makes a state conscious at all) and specific consciousness (what gives a conscious state its particular phenomenal quality). We defend first-order representationalism, which argues that consciousness consists of sensory representations directly available to the subject for action selection, belief formation, planning, etc. We provide a neuroscientific framework for this primarily philosophical theory, according to which neural correlates of general consciousness include prefrontal cortex, posterior parietal cortex, and non-specific thalamic nuclei, while neural correlates of specific consciousness include sensory cortex and specific thalamic nuclei. We suggest that recent data support first-order representationalism over biological theory, higher-order representationalism, recurrent processing theory, information integration theory, and global workspace theory. PMID:23882231
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.
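The proper orthogonal decomposition mentioned above is, computationally, an SVD of a snapshot matrix followed by truncation to the leading "energy" modes. A minimal sketch on a synthetic snapshot matrix (three planted spatial structures standing in for flow-field data):

```python
import numpy as np

# Snapshot matrix: columns are states of a toy "flow field" over time,
# built from three dominant spatial structures (a stand-in for CFD data).
t = np.linspace(0.0, 2.0 * np.pi, 40)
X = np.zeros((200, 40))
X[0], X[1], X[2] = 10 * np.sin(t), 3 * np.cos(2 * t), 2 * np.sin(3 * t)

# POD = SVD of the snapshot matrix; keep modes carrying 99.9% of energy.
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1

Phi = U[:, :r]                   # reduced basis, r << 200
X_rom = Phi @ (Phi.T @ X)        # snapshots reconstructed from r modes
print(r, np.linalg.norm(X - X_rom) / np.linalg.norm(X))
```

Projecting the governing equations onto `Phi` then yields an r-dimensional surrogate whose evaluation cost is what repays the expense of collecting the snapshots.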
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to develop automatically and to implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
All-order approach to high-precision atomic calculation
NASA Astrophysics Data System (ADS)
Iskrenova-Tchoukova, Eugeniya
High-precision atomic calculations combined with experiments of matching accuracy provide an excellent opportunity to test our understanding of atomic structure and properties as well as the many-body atomic theories. The relativistic all-order method, which is a linearized version of the coupled-cluster singles-doubles method, has proven to yield high precision results for a variety of atomic properties. In this thesis, we study the atomic properties of neutral atoms and ions by means of the relativistic all-order method. The lifetimes and ground state static polarizabilities of a singly ionized barium atom are studied in comparison with the isoelectronic neutral cesium atom and with a singly ionized calcium atom. The lifetimes of a number of excited states in atomic potassium, rubidium, and francium are theoretically calculated and compared with the available experimental data. The magnetic dipole hyperfine constant of the 9S1/2 state in 210Fr is calculated and the result is combined with the experimental one to extract the value of the 210Fr nuclear magnetic moment. Another part of the thesis work focuses on the development and implementation of an extension of the currently used all-order singles-doubles (SD) method to include all valence triple excitations in an iterative way, all-order SD+vT approximation. Some of the ideas and results presented in Chapters 4, 5, and 6 have been published and are subject to copyright laws. These publications are cited accordingly.
Markov Modeling with Soft Aggregation for Safety and Decision Analysis
COOPER,J. ARLIN
1999-09-01
The methodology in this report improves on some of the limitations of many conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems and for expressing imprecise individual metrics as possibilistic or fuzzy numbers. A "Markov-like" model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, also portraying the inherent uncertainty. A major goal of Markov modeling is to help convey the top-down system perspective. One of the constituent methodologies allows metrics to be weighted according to significance of the attribute and aggregated nonlinearly as to contribution. This aggregation is performed using exponential combination of the metrics, since the accumulating effect of such factors responds less and less to additional factors. This is termed "soft" mathematical aggregation. Dependence among the contributing factors is accounted for by incorporating subjective metrics on "overlap" of the factors as well as by correspondingly reducing the overall contribution of these combinations to the overall aggregation. Decisions corresponding to the meaningfulness of the results are facilitated in several ways. First, the results are compared to a soft threshold provided by a sigmoid function. Second, information is provided on input "Importance" and "Sensitivity," in order to know where to place emphasis on considering new controls that may be necessary. Third, trends in inputs and outputs are tracked in order to obtain significant information, including cyclic information for the decision process. A practical example from the air transportation industry is used to demonstrate application of the methodology. Illustrations are given for developing a structure (along with recommended inputs and weights) for air transportation oversight at three different levels, for developing and using cycle information, for developing Importance and
Hidden Markov Models for Detecting Aseismic Events in Southern California
NASA Astrophysics Data System (ADS)
Granat, R.
2004-12-01
We employ a hidden Markov model (HMM) to segment surface displacement time series collected by the Southern California Integrated Geodetic Network (SCIGN). These segmented time series are then used to detect regional events by observing the number of simultaneous mode changes across the network; if a large number of stations change at the same time, that indicates an event. The hidden Markov model (HMM) approach assumes that the observed data has been generated by an unobservable dynamical statistical process. The process is of a particular form such that each observation is coincident with the system being in a particular discrete state, which is interpreted as a behavioral mode. The dynamics of the model are constructed so that the next state is directly dependent only on the current state -- it is a first-order Markov process. The model is completely described by a set of parameters: the initial state probabilities, the first-order Markov chain state-to-state transition probabilities, and the probability distribution of observable outputs associated with each state. The result of this approach is that our segmentation decisions are based entirely on statistical changes in the behavior of the observed daily displacements. In general, finding the optimal model parameters to fit the data is a difficult problem. We present an innovative model fitting method that is unsupervised (i.e., it requires no labeled training data) and uses a regularized version of the expectation-maximization (EM) algorithm to ensure that model solutions are both robust with respect to initial conditions and of high quality. We demonstrate the reliability of the method as compared to standard model fitting methods and show that it results in lower noise in the mode change correlation signal used to detect regional events. We compare candidate events detected by this method to the seismic record and observe that most are not correlated with a significant seismic event. Our analysis
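Once an HMM of the kind described above is fit, segmenting a displacement series into behavioral modes is a Viterbi decoding. A minimal sketch with assumed parameters (two modes with different mean displacement, sticky transitions, unit-variance Gaussian outputs); none of this reflects the SCIGN model's actual values.

```python
import numpy as np

def viterbi(obs_logprob, log_T, log_pi):
    """Most likely hidden-state path; obs_logprob[t, k] is the log
    likelihood of observation t under state k."""
    n, K = obs_logprob.shape
    delta = log_pi + obs_logprob[0]
    back = np.zeros((n, K), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_T            # (from-state, to-state)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + obs_logprob[t]
    path = [int(np.argmax(delta))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy daily displacements; modes centered at 0 and 5, sticky transitions.
x = np.array([0.1, -0.2, 0.0, 5.1, 4.9, 5.2, 0.2, -0.1])
means = np.array([0.0, 5.0])
obs_lp = -0.5 * (x[:, None] - means[None, :]) ** 2   # unit-variance Gaussians
log_T = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
log_pi = np.log(np.array([0.5, 0.5]))
print(viterbi(obs_lp, log_T, log_pi))  # → [0, 0, 0, 1, 1, 1, 0, 0]
```

Counting simultaneous mode changes across many stations' decoded paths is then the event-detection statistic the abstract describes.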
Thermal nanostructure: An order parameter multiscale ensemble approach
NASA Astrophysics Data System (ADS)
Cheluvaraja, S.; Ortoleva, P.
2010-02-01
Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.
A Recurrence Relation Approach to Higher Order Quantum Superintegrability
NASA Astrophysics Data System (ADS)
Kalnins, Ernie G.; Kress, Jonathan M.; Miller, Willard
2011-03-01
We develop our method to prove quantum superintegrability of an integrable 2D system, based on recurrence relations obeyed by the eigenfunctions of the system with respect to separable coordinates. We show that the method provides rigorous proofs of superintegrability and explicit constructions of higher order generators for the symmetry algebra. We apply the method to 5 families of systems, each depending on a parameter k, including most notably the caged anisotropic oscillator, the Tremblay, Turbiner and Winternitz system and a deformed Kepler-Coulomb system, and we give proofs of quantum superintegrability for all rational values of k, new for 4 of these systems. In addition, we show that the explicit information supplied by the special function recurrence relations allows us to prove, for the first time in 4 cases, that the symmetry algebra generated by our lowest order symmetries closes and to determine the associated structure equations of the algebras for each k. We have no proof that our generating symmetries are of lowest possible order, but we have no counterexamples, and we are confident we can always find any missing generators from our raising and lowering operator recurrences. We also get for free, one variable models of the action of the symmetry algebra in terms of difference operators. We describe how the Stäckel transform acts and show that it preserves the structure equations.
NASA Astrophysics Data System (ADS)
Volchenkov, Dima; Dawin, Jean René
A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied by Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
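The first-passage-time statistic mentioned above has a closed linear-algebra form: delete the target note's row and column from the transition matrix and solve a linear system. A minimal sketch on a toy 3-note transition matrix (a stand-in for one estimated from MIDI data):

```python
import numpy as np

def mean_first_passage(P, target):
    """Mean first-passage time from every note (state) to `target`
    for a row-stochastic transition matrix P; the entry at `target`
    itself is the mean recurrence time."""
    n = len(P)
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]                  # transitions avoiding target
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = t
    out[target] = 1.0 + P[target, keep] @ t    # recurrence via one step out
    return out

# Toy 3-"note" transition matrix, rows summing to 1.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
print(mean_first_passage(P, 2))
```

Short first-passage times to a note mark it as tonally central; comparing these profiles across composers is the fingerprinting idea the abstract describes.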
Using Markov state models to study self-assembly.
Perkett, Matthew R; Hagan, Michael F
2014-06-01
Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude. PMID:24907984
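The basic MSM estimation step, counting transitions between discrete states at a lag time and row-normalizing, can be sketched as follows. This is a minimal illustration with an invented two-state trajectory; the symmetrization is a crude way to enforce detailed balance, not the authors' graph-based state identification:

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-stochastic MSM transition matrix from a discrete trajectory at a given lag."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1
    C_sym = C + C.T  # crude symmetrization to enforce detailed balance
    T = C_sym / C_sym.sum(axis=1, keepdims=True)
    return T

# Hypothetical discretized trajectory (e.g. assembled vs. disassembled states)
dtraj = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
T = estimate_msm(dtraj, 2)
```

Long-time kinetics would then be read off from powers of `T` or its eigenvalues, which is what allows the reported orders-of-magnitude savings over unbiased simulation.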
Stochastic motif extraction using hidden Markov model
Fujiwara, Yukiko; Asogawa, Minoru; Konagaya, Akihiko
1994-12-31
In this paper, we study the application of an HMM (hidden Markov model) to the problem of representing protein sequences by a stochastic motif. A stochastic protein motif represents the small segments of protein sequences that have a certain function or structure. The stochastic motif, represented by an HMM, has conditional probabilities to deal with the stochastic nature of the motif. This HMM directly reflects the characteristics of the motif, such as a protein periodical structure or grouping. In order to obtain the optimal HMM, we developed the "iterative duplication method" for HMM topology learning. It starts from a small fully-connected network and iterates the network generation and parameter optimization until it achieves sufficient discrimination accuracy. Using this method, we obtained an HMM for a leucine zipper motif. Compared to a symbolic pattern representation, whose prediction accuracy was 14.8 percent, the HMM achieved 79.3 percent. Additionally, the method can obtain an HMM for various types of zinc finger motifs, and it might separate the mixed data. We demonstrated that this approach is applicable to the validation of protein databases; a constructed HMM has indicated that one protein sequence annotated as a "leucine-zipper like sequence" in the database is quite different from other leucine-zipper sequences in terms of likelihood, and we found this discrimination is plausible.
Optimal q-Markov COVER for finite precision implementation
NASA Technical Reports Server (NTRS)
Williamson, Darrell; Skelton, Robert E.
1989-01-01
The existing q-Markov COVER realization theory does not take into account the problems of arithmetic errors due to both the quantization of states and coefficients of the reduced order model. All q-Markov COVERs allow some freedom in the choice of parameters. Here, researchers exploit this freedom in the existing theory to optimize the models with respect to these finite wordlength effects.
Markov Chains For Testing Redundant Software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1990-01-01
Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
Koukounari, Artemis; Donnelly, Christl A.; Moustaki, Irini; Tukahebwa, Edridah M.; Kabatereine, Narcis B.; Wilson, Shona; Webster, Joanne P.; Deelder, André M.; Vennervald, Birgitte J.; van Dam, Govert J.
2013-01-01
Regular treatment with praziquantel (PZQ) is the strategy for human schistosomiasis control aiming to prevent morbidity in later life. With the recent resolution on schistosomiasis elimination by the 65th World Health Assembly, appropriate diagnostic tools to inform interventions are key to their success. We present a discrete Markov chains modelling framework that deals with the longitudinal study design and the measurement error in the diagnostic methods under study. A longitudinal detailed dataset from Uganda, in which one or two doses of PZQ treatment were provided, was analyzed through Latent Markov Models (LMMs). The aim was to evaluate the diagnostic accuracy of Circulating Cathodic Antigen (CCA) and of double Kato-Katz (KK) faecal slides over three consecutive days for Schistosoma mansoni infection simultaneously by age group at baseline and at two follow-up times post treatment. Diagnostic test sensitivities and specificities and the true underlying infection prevalence over time as well as the probabilities of transitions between infected and uninfected states are provided. The estimated transition probability matrices provide parsimonious yet important insights into the re-infection and cure rates in the two age groups. We show that the CCA diagnostic performance remained constant after PZQ treatment and that this test was overall more sensitive but less specific than single-day double KK for the diagnosis of S. mansoni infection. The probability of clearing infection from baseline to 9 weeks was higher among those who received two PZQ doses compared to one PZQ dose for both age groups, with much higher re-infection rates among children compared to adolescents and adults. We recommend LMMs as a useful methodology for monitoring and evaluation and treatment decision research as well as CCA for mapping surveys of S. mansoni infection, although additional diagnostic tools should be incorporated in schistosomiasis elimination programs. PMID:24367250
A semi-Markov model for price returns
NASA Astrophysics Data System (ADS)
D'Amico, Guglielmo; Petroni, Filippo
2012-10-01
We study the high frequency price dynamics of traded stocks by a model of returns using a semi-Markov approach. More precisely, we assume that the intraday returns are described by a discrete time homogeneous semi-Markov process and the overnight returns are modeled by a Markov chain. Based on these assumptions we derive the equations for the first passage time distribution and the volatility autocorrelation function. Theoretical results have been compared with empirical findings from real data. In particular, we analyzed high frequency data from the Italian stock market from 1 January 2007 until the end of December 2010. The semi-Markov hypothesis is also tested through a nonparametric test of hypothesis.
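A first passage time distribution of the kind derived above can also be estimated empirically by simulation. A minimal sketch with an invented two-state chain (function names and parameters are ours, not the paper's semi-Markov model):

```python
import random

def first_passage_times(P, start, target, n_runs, seed=42):
    """Empirical first-passage-time samples from `start` to `target`
    for a discrete-time Markov chain with transition matrix P."""
    random.seed(seed)
    times = []
    for _ in range(n_runs):
        state, t = start, 0
        while state != target:
            r, cum = random.random(), 0.0
            for j, pj in enumerate(P[state]):
                cum += pj
                if r < cum:
                    state = j
                    break
            t += 1
        times.append(t)
    return times

# Toy two-state chain; the exact mean first passage time 0 -> 1 is 2 steps
P = [[0.5, 0.5], [0.3, 0.7]]
times = first_passage_times(P, 0, 1, 5000)
mean_fpt = sum(times) / len(times)
```

For a semi-Markov process the step counter would be replaced by random holding times drawn from state-dependent sojourn distributions.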
A non-homogeneous Markov model for phased-mission reliability analysis
NASA Technical Reports Server (NTRS)
Smotherman, Mark; Zemoudeh, Kay
1989-01-01
Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.
A path-independent method for barrier option pricing in hidden Markov models
NASA Astrophysics Data System (ADS)
Rashidi Ranjbar, Hedieh; Seifi, Abbas
2015-12-01
This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
Estimating Neuronal Ageing with Hidden Markov Models
NASA Astrophysics Data System (ADS)
Wang, Bing; Pham, Tuan D.
2011-06-01
Neuronal degeneration is widely observed in normal ageing, while neurodegenerative diseases such as Alzheimer's disease cause neuronal degeneration in a faster way, which can be considered accelerated ageing. Early intervention in such diseases could benefit subjects with the potential for positive clinical outcomes; therefore, early detection of disease-related brain structural alteration is required. In this paper, we propose a computational approach for modelling MRI-based structural alteration with ageing using a hidden Markov model. The proposed hidden Markov model based brain structural model encodes intracortical tissue/fluid distribution using discrete wavelet transformation and vector quantization. Further, it captures gray matter volume loss, and is thus capable of reflecting subtle intracortical changes with ageing. Experiments were carried out on healthy subjects to validate its accuracy and robustness. Results show that the approach predicts brain age with a prediction error of 1.98 years without training data, better than other age prediction methods.
Markov Processes: Linguistics and Zipf's Law
NASA Astrophysics Data System (ADS)
Kanter, I.; Kessler, D. A.
1995-05-01
It is shown that a 2-parameter random Markov process constructed with N states and biased random transitions gives rise to a stationary distribution where the probabilities of occurrence of the states, P(k), k = 1,...,N, exhibit the following three universal behaviors which characterize biological sequences and texts in natural languages: (a) the rank-ordered frequencies of occurrence of words are given by Zipf's law P(k) ~ 1/k^ρ, where ρ(k) is slowly increasing for small k; (b) the frequencies of occurrence of letters are given by P(k) = A - D ln(k); and (c) long-range correlations are observed over long but finite intervals, as a result of the quasiergodicity of the Markov process.
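A rough illustration of how biased random transitions produce a non-uniform stationary distribution that can then be rank-ordered. The one-parameter exponent bias below is our own simplification, not the authors' exact two-parameter construction:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
bias = 2.0

# Biased random transition weights; the exponent skews rows toward a few
# dominant destinations (a stand-in for the paper's biased transitions)
W = rng.random((N, N)) ** bias
P = W / W.sum(axis=1, keepdims=True)

# Stationary distribution by power iteration
pi = np.full(N, 1.0 / N)
for _ in range(1000):
    pi = pi @ P

# Rank-ordered state frequencies, the quantity compared against Zipf's law
ranked = np.sort(pi)[::-1]
```

Plotting `ranked` against rank on log-log axes is how one would check for Zipf-like behavior.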
Decentralized learning in Markov games.
Vrancx, Peter; Verbeeck, Katja; Nowé, Ann
2008-08-01
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies. PMID:18632387
A Markov Model for Assessing the Reliability of a Digital Feedwater Control System
Chu, T.L.; Yue, M.; Martinez-Guridi, G.; Lehner, J.
2009-02-11
A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether a system fails or not, is determined by the status of the components that can be characterized by component failure modes. Starting from the system state that has no component failure, possible transitions out of it are all failure modes of all components in the system. Each additional component failure mode will formulate a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences) because the different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach, and the development of the supporting automated FMEA tool are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.
Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.
Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka
2014-02-01
In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain. PMID:24246289
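The conjugate Dirichlet prior mentioned above yields a closed-form marginal likelihood for the transition counts of each cluster, so comparing two partitions reduces to comparing sums of such terms. A sketch assuming a symmetric Dirichlet(alpha) prior on each transition-matrix row (function name is hypothetical):

```python
import math

def log_marginal_likelihood(counts, alpha=1.0):
    """Log marginal likelihood of Markov-chain transition counts under
    independent symmetric Dirichlet(alpha) priors on each row
    (Dirichlet-multinomial conjugacy)."""
    ll = 0.0
    for row in counts:
        K = len(row)
        n = sum(row)
        ll += math.lgamma(K * alpha) - math.lgamma(K * alpha + n)
        for c in row:
            ll += math.lgamma(alpha + c) - math.lgamma(alpha)
    return ll

# Toy transition counts for one cluster of sequences (invented numbers)
counts = [[5, 1], [2, 8]]
ll = log_marginal_likelihood(counts)
```

Merging or splitting clusters changes which sequences' counts are pooled; the partition with the larger total log marginal likelihood is preferred.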
Markov models and the ensemble Kalman filter for estimation of sorption rates.
Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White
2007-09-01
Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.
Raberto, Marco; Rapallo, Fabio; Scalas, Enrico
2011-01-01
In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245
Hidden Markov Model Analysis of Multichromophore Photobleaching
Messina, Troy C.; Kim, Hiyun; Giurleo, Jason T.; Talaga, David S.
2007-01-01
The interpretation of single-molecule measurements is greatly complicated by the presence of multiple fluorescent labels. However, many molecular systems of interest consist of multiple interacting components. We investigate this issue using multiply labeled dextran polymers that we intentionally photobleach to the background on a single-molecule basis. Hidden Markov models allow for unsupervised analysis of the data to determine the number of fluorescent subunits involved in the fluorescence intermittency of the 6-carboxy-tetramethylrhodamine labels by counting the discrete steps in fluorescence intensity. The Bayes information criterion allows us to distinguish between hidden Markov models that differ by the number of states, that is, the number of fluorescent molecules. We determine information-theoretical limits and show via Monte Carlo simulations that the hidden Markov model analysis approaches these theoretical limits. This technique has resolving power of one fluorescing unit up to as many as 30 fluorescent dyes with the appropriate choice of dye and adequate detection capability. We discuss the general utility of this method for determining aggregation-state distributions as could appear in many biologically important systems and its adaptability to general photometric experiments. PMID:16913765
Phase transitions in Hidden Markov Models
NASA Astrophysics Data System (ADS)
Bechhoefer, John; Lathouwers, Emma
In Hidden Markov Models (HMMs), a Markov process is not directly accessible. In the simplest case, a two-state Markov model "emits" one of two "symbols" at each time step. We can think of these symbols as noisy measurements of the underlying state. With some probability, the symbol implies that the system is in one state when it is actually in the other. The ability to judge which state the system is in sets the efficiency of a Maxwell demon that observes state fluctuations in order to extract heat from a coupled reservoir. The state-inference problem is to infer the underlying state from such noisy measurements at each time step. We show that there can be a phase transition in such measurements: for measurement error rates below a certain threshold, the inferred state always matches the observation. For higher error rates, there can be continuous or discontinuous transitions to situations where keeping a memory of past observations improves the state estimate. We can partly understand this behavior by mapping the HMM onto a 1d random-field Ising model at zero temperature. We also present more recent work that explores a larger parameter space and more states. Research funded by NSERC, Canada.
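The state-inference step can be sketched with the standard forward filter for a symmetric two-state HMM. Parameter names are ours, and this is the generic recursion, not the authors' specific phase-transition analysis:

```python
def filtered_probs(obs, p_switch, eps):
    """Forward-filtered probability that the hidden state is 1, for a
    symmetric two-state HMM with switching probability p_switch and
    symbol error rate eps."""
    pr1 = 0.5  # uninformative initial belief
    out = []
    for y in obs:
        # predict: the hidden state may have switched since the last step
        pr1 = pr1 * (1 - p_switch) + (1 - pr1) * p_switch
        # update: weight by the likelihood of the noisy observed symbol
        like1 = (1 - eps) if y == 1 else eps
        like0 = eps if y == 1 else (1 - eps)
        pr1 = pr1 * like1 / (pr1 * like1 + (1 - pr1) * like0)
        out.append(pr1)
    return out

probs = filtered_probs([1, 1, 0, 1], p_switch=0.1, eps=0.2)
```

When `eps` is small, the filtered belief tracks the observations directly; the paper's phase transition concerns the error rate beyond which remembering past observations starts to beat the raw symbol.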
A reward semi-Markov process with memory for wind speed modeling
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
Markov chains with different numbers of states, and Weibull distributions. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. In order to assess this issue we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare results on first and second order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
Generator estimation of Markov jump processes
NASA Astrophysics Data System (ADS)
Metzner, P.; Dittmer, E.; Jahnke, T.; Schütte, Ch.
2007-11-01
Estimating the generator of a continuous-time Markov jump process based on incomplete data is a problem which arises in various applications ranging from machine learning to molecular dynamics. Several methods have been devised for this purpose: a quadratic programming approach (cf. [D.T. Crommelin, E. Vanden-Eijnden, Fitting timeseries by continuous-time Markov chains: a quadratic programming approach, J. Comp. Phys. 217 (2006) 782-805]), a resolvent method (cf. [T. Müller, Modellierung von Proteinevolution, PhD thesis, Heidelberg, 2001]), and various implementations of an expectation-maximization algorithm ([S. Asmussen, O. Nerman, M. Olsson, Fitting phase-type distributions via the EM algorithm, Scand. J. Stat. 23 (1996) 419-441; I. Holmes, G.M. Rubin, An expectation maximization algorithm for training hidden substitution models, J. Mol. Biol. 317 (2002) 753-764; U. Nodelman, C.R. Shelton, D. Koller, Expectation maximization and complex duration distributions for continuous time Bayesian networks, in: Proceedings of the twenty-first conference on uncertainty in AI (UAI), 2005, pp. 421-430; M. Bladt, M. Sørensen, Statistical inference for discretely observed Markov jump processes, J.R. Statist. Soc. B 67 (2005) 395-410]). Some of these methods, however, seem to be known only in a particular research community, and have later been reinvented in a different context. The purpose of this paper is to compile a catalogue of existing approaches, to compare the strengths and weaknesses, and to test their performance in a series of numerical examples. These examples include carefully chosen model problems and an application to a time series from molecular dynamics.
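The simplest idea behind such estimators can be sketched as a first-order estimate of the generator from a transition matrix observed at a short lag. This is a crude approximation that the catalogued quadratic-programming, resolvent, and EM approaches improve upon; the example matrices are invented:

```python
import numpy as np

def crude_generator(P, tau):
    """First-order estimate Q ~ (P - I)/tau of the generator from the
    transition matrix P observed at lag tau; only valid for small tau."""
    return (P - np.eye(P.shape[0])) / tau

# Toy true generator and a short lag
Q_true = np.array([[-1.0, 1.0], [0.5, -0.5]])
tau = 0.01

# Transition matrix at lag tau via a truncated series expm(tau*Q)
A = tau * Q_true
P = np.eye(2) + A + A @ A / 2

Q_est = crude_generator(P, tau)
```

The estimate inherits the generator structure (rows summing to zero, non-negative off-diagonals) but its error grows with `tau`, which is exactly why the incomplete-data methods surveyed in the paper are needed in practice.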
Teaching Higher Order Thinking in the Introductory MIS Course: A Model-Directed Approach
ERIC Educational Resources Information Center
Wang, Shouhong; Wang, Hai
2011-01-01
One vision of education evolution is to change the modes of thinking of students. Critical thinking, design thinking, and system thinking are higher order thinking paradigms that are specifically pertinent to business education. A model-directed approach to teaching and learning higher order thinking is proposed. An example of application of the…
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
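For a discrete-time state-space model x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k, the Markov parameters are the impulse-response samples D, CB, CAB, CA^2 B, and so on. A minimal sketch with toy matrices of our own:

```python
import numpy as np

def markov_parameters(A, B, C, D, n):
    """First n Markov parameters (impulse-response samples) of the
    discrete-time system x_{k+1} = A x + B u, y = C x + D u."""
    params = [D]
    Ak = np.eye(A.shape[0])
    for _ in range(n - 1):
        params.append(C @ Ak @ B)  # C A^k B
        Ak = Ak @ A
    return params

# Toy stable two-state system
A = np.array([[0.5, 0.0], [0.0, 0.2]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
h = markov_parameters(A, B, C, D, 4)
```

Identification methods such as ERA work in the reverse direction, recovering (A, B, C, D) from these sampled-response coefficients.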
NASA Astrophysics Data System (ADS)
Lalande, Jean-Marie; Waxler, Roger; Velea, Doru
2016-04-01
As infrasonic waves propagate at long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter search performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.
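The basic building block of such an MCMC inversion is a random-walk Metropolis sampler. A hedged sketch on a toy one-dimensional Gaussian target, standing in for the (much more expensive) atmospheric forward model:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Random-walk Metropolis sampler targeting the density exp(log_post)."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + random.gauss(0, step)
        lpp = log_post(xp)
        # accept with probability min(1, exp(lpp - lp))
        if lpp >= lp or random.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior over one "atmospheric parameter": standard normal
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 20000)
mean = sum(samples) / len(samples)
```

In a real inversion, `log_post` would wrap the infrasound propagation model and data misfit, and the chain's histogram approximates the full probability density function sought above.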
Markov counting models for correlated binary responses.
Crawford, Forrest W; Zelterman, Daniel
2015-07-01
We propose a class of continuous-time Markov counting processes for analyzing correlated binary data and establish a correspondence between these models and sums of exchangeable Bernoulli random variables. Our approach generalizes many previous models for correlated outcomes, admits easily interpretable parameterizations, allows different cluster sizes, and incorporates ascertainment bias in a natural way. We demonstrate several new models for dependent outcomes and provide algorithms for computing maximum likelihood estimates. We show how to incorporate cluster-specific covariates in a regression setting and demonstrate improved fits to well-known datasets from familial disease epidemiology and developmental toxicology. PMID:25792624
A new approach to the implementation of direct care-provider order entry.
Geissbühler, A.; Miller, R. A.
1996-01-01
Successful implementation of direct computer-based care-provider order entry traditionally relies on one of two different approaches: development from scratch or installation of a commercial product. The former requires extensive resources; the latter, by its proprietary nature, limits extension of the system beyond capabilities supplied by the vendor. This paper describes an intermediate approach using the association of a locally-developed and controlled set of distributed microcomputer-based applications and a commercial, mainframe-based order entry application used as an order transaction processing system. This combination provides both an intuitive user interface and a platform for implementing clinical decision-support tools. PMID:8947753
Alignment of multiple proteins with an ensemble of Hidden Markov Models
Song, Yinglei; Qu, Junfeng; Hura, Gurdeep S.
2011-01-01
In this paper, we developed a new method that progressively constructs and updates a set of alignments by adding sequences in a certain order to each of the existing alignments. Each of the existing alignments is modelled with a profile Hidden Markov Model (HMM), and an added sequence is aligned to each of these profile HMMs. We introduced an integer parameter for the number of profile HMMs. The profile HMMs are then updated based on the alignments with leading scores. Our experiments on BaliBASE showed that our approach could efficiently explore the alignment space and significantly improve the alignment accuracy. PMID:20376922
NASA Astrophysics Data System (ADS)
Granat, R. A.; Clayton, R.; Kedar, S.; Kaneko, Y.
2003-12-01
We employ a robust hidden Markov model (HMM) based technique to perform statistical pattern analysis of suspected seismic and aseismic events in the poorly explored period band of minutes to hours. The technique allows us to classify known events and provides a statistical basis for finding and cataloging similar events represented elsewhere in the observations. In this work, we focus on data collected by the Southern California TriNet system. The hidden Markov model (HMM) approach assumes that the observed data has been generated by an unobservable dynamical statistical process. The process is of a particular form such that each observation is coincident with the system being in a particular discrete state. The dynamics of the model are constructed so that the next state is directly dependent only on the current state -- it is a first order Markov process. The model is completely described by a set of parameters: the initial state probabilities, the first order Markov chain state-to-state transition probabilities, and the probability distribution of observable outputs associated with each state. Application of the model to data involves optimizing these model parameters with respect to some function of the observations, typically the likelihood of the observations given the model. Our work focused on the fact that this objective function has a number of local maxima that is exponential in the model size (the number of states). This means that not only is it very difficult to discover the global maximum, but also that results can vary widely between applications of the model. For some domains which employ HMMs for such purposes, such as speech processing, sufficient a priori information about the system is available to avoid this problem. However, for seismic data in general such a priori information is not available. Our approach involves analytical location of sub-optimal local maxima; once the locations of these maxima have been found, then we can employ a
Likelihood free inference for Markov processes: a comparison.
Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S
2015-04-01
Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space. PMID:25720092
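The core idea above (forward simulation is easy even when the likelihood is intractable) can be sketched with ABC rejection sampling. For brevity, a toy exponential-rate model stands in for the stochastic kinetic models of the paper; the prior range, summary statistic, and tolerance are all illustrative assumptions.

```python
# Sketch of likelihood-free (ABC rejection) inference: draw parameters
# from the prior, forward-simulate data, and keep draws whose simulated
# summary statistic lies close to the observed one.
import random

random.seed(1)

def simulate(rate, n=50):
    # forward simulation: mean of n exponential waiting times
    return sum(random.expovariate(rate) for _ in range(n)) / n

observed = simulate(2.0)                 # "data" generated at rate 2.0
accepted = []
for _ in range(5000):
    theta = random.uniform(0.1, 5.0)            # prior draw
    if abs(simulate(theta) - observed) < 0.05:  # distance on summary stat
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior; shrinking the tolerance trades computational cost for accuracy, which is exactly the expense the abstract highlights.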
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
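The key relation behind such estimators is that data observed at interval tau follow the interval transition matrix P(tau) = exp(Q*tau), so the log-likelihood of observed transition counts is a sum of counts times log-probabilities. A minimal sketch under an assumed 2-state rate matrix (not from the paper):

```python
# Sketch: interval transition matrix P(tau) = exp(Q*tau) for a generator Q,
# and the log-likelihood of hypothetical observed transition counts.
import math

def mat_exp(Q, t, terms=30):
    """exp(Q*t) by truncated Taylor series (fine for small ||Q*t||)."""
    n = len(Q)
    A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

Q = [[-0.5, 0.5], [0.3, -0.3]]      # generator: rows sum to zero
P = mat_exp(Q, 1.0)                 # transition matrix at tau = 1

counts = [[40, 10], [6, 44]]        # hypothetical observed transitions
loglik = sum(counts[i][j] * math.log(P[i][j])
             for i in range(2) for j in range(2))
```

Maximizing `loglik` over the entries of Q, subject to generator (and, if desired, detailed-balance) constraints, is the estimation problem the paper addresses efficiently.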
Assessment of optimized Markov models in protein fold classification.
Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I
2014-08-01
Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041
Students' Progress throughout Examination Process as a Markov Chain
ERIC Educational Resources Information Center
Hlavatý, Robert; Dömeová, Ludmila
2014-01-01
The paper is focused on students of Mathematical methods in economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…
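A model of this kind is an absorbing Markov chain, and the quantity of interest is typically the probability of eventually reaching the "passed" state. The stage names and transition probabilities below are invented for illustration, not taken from the paper.

```python
# Sketch of an absorbing-chain computation for student progress.
# states: 0 = start, 1 = midterm, 2 = passed (absorbing), 3 = failed (absorbing)
P = [[0.1, 0.7, 0.0, 0.2],
     [0.0, 0.1, 0.7, 0.2],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

def absorption_prob(P, start, target, steps=200):
    """P(eventually absorbed in `target`) by iterating the chain."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist[target]

p_pass = absorption_prob(P, 0, 2)
```

For this toy chain the exact answer is 0.7/0.9 * 0.7/0.9 = 0.49/0.81, which the iteration recovers.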
On a Result for Finite Markov Chains
ERIC Educational Resources Information Center
Kulathinal, Sangita; Ghosh, Lagnojita
2006-01-01
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
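The property in question (if a state is reachable at all in a chain with M states, it is reachable in at most M-1 steps) can be checked with a breadth-first search over the transition graph. The example chain is illustrative.

```python
# Sketch: fewest steps to reach state j from state i in a finite chain.

def min_steps(P, i, j):
    """Fewest steps to reach j from i (None if unreachable)."""
    frontier, seen, steps = {i}, {i}, 0
    while frontier:
        if j in frontier:
            return steps
        frontier = {b for a in frontier for b in range(len(P))
                    if P[a][b] > 0 and b not in seen}
        seen |= frontier
        steps += 1
    return None

# 4-state chain: 0 -> 1 -> 2 -> 3, with 3 absorbing
P = [[0.5, 0.5, 0.0, 0.0],
     [0.0, 0.5, 0.5, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
```

Here state 3 is reached from state 0 in exactly M-1 = 3 steps, the worst case, while state 0 is unreachable from the absorbing state 3.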
Influence of credit scoring on the dynamics of Markov chain
NASA Astrophysics Data System (ADS)
Galina, Timofeeva
2015-11-01
Markov processes are widely used to model the dynamics of a credit portfolio and forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares are described by a multistage controlled system. The article outlines a mathematical formalization of the controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control affecting the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
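The uncontrolled core of such a model is the linear evolution of the portfolio shares, x_{t+1} = x_t P, where P is the group-to-group transition matrix. The group labels and numbers below are illustrative assumptions (0 = current, 1 = delinquent, 2 = default), not figures from the article.

```python
# Sketch of loan-portfolio share dynamics under a quality-group
# transition matrix, with default treated as absorbing.

def step(x, P):
    n = len(x)
    return [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.90, 0.08, 0.02],
     [0.30, 0.50, 0.20],
     [0.00, 0.00, 1.00]]   # default is absorbing

x = [1.0, 0.0, 0.0]        # portfolio starts fully current
for _ in range(12):        # twelve periods
    x = step(x, P)
```

The controls discussed in the article (e.g. the scoring cut-off at approval) would act by changing the inflow and the entries of P.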
Nonparametric identification and maximum likelihood estimation for hidden Markov models
Alexandrovich, G.; Holzmann, H.; Leister, A.
2016-01-01
Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop a nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback–Leibler divergence of the hidden Markov model, also identifies the true parameter vector nonparametrically. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we establish the consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates and of nonparametric goodness of fit tests are investigated in a simulation study.
Bayesian restoration of a hidden Markov chain with applications to DNA sequencing.
Churchill, G A; Lazareva, B
1999-01-01
Hidden Markov models (HMMs) are a class of stochastic models that have proven to be powerful tools for the analysis of molecular sequence data. A hidden Markov model can be viewed as a black box that generates sequences of observations. The unobservable internal state of the box is stochastic and is determined by a finite state Markov chain. The observable output is stochastic with distribution determined by the state of the hidden Markov chain. We present a Bayesian solution to the problem of restoring the sequence of states visited by the hidden Markov chain from a given sequence of observed outputs. Our approach is based on a Monte Carlo Markov chain algorithm that allows us to draw samples from the full posterior distribution of the hidden Markov chain paths. The problem of estimating the probability of individual paths and the associated Monte Carlo error of these estimates is addressed. The method is illustrated by considering a problem of DNA sequence multiple alignment. The special structure for the hidden Markov model used in the sequence alignment problem is considered in detail. In conclusion, we discuss certain interesting aspects of biological sequence alignments that become accessible through the Bayesian approach to HMM restoration. PMID:10421527
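One standard way to draw hidden-state paths from their full posterior, as required above, is forward filtering followed by backward sampling. This direct sampler is a sketch standing in for the paper's Monte Carlo Markov chain scheme; the tiny two-state model and observation sequence are invented for illustration.

```python
# Sketch: sample hidden HMM state paths from the exact posterior via
# forward filtering / backward sampling.
import random

random.seed(0)

def _draw(weights):
    """Draw an index proportionally to (unnormalized) weights."""
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def ffbs(obs, pi, A, B):
    n = len(pi)
    # forward filtering: alpha[t][i] proportional to P(state_t = i | obs[0..t])
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for o in obs[1:]:
        prev = alpha[-1]
        a = [sum(prev[i] * A[i][j] for i in range(n)) * B[j][o]
             for j in range(n)]
        z = sum(a)
        alpha.append([v / z for v in a])
    # backward sampling: draw state_T, then each state_t given state_{t+1}
    path = [_draw(alpha[-1])]
    for t in range(len(obs) - 2, -1, -1):
        w = [alpha[t][i] * A[i][path[0]] for i in range(n)]
        path.insert(0, _draw(w))
    return path

pi = [0.5, 0.5]
A = [[0.95, 0.05], [0.05, 0.95]]
B = [[0.9, 0.1], [0.1, 0.9]]
obs = [0, 0, 0, 1, 1, 1]
samples = [ffbs(obs, pi, A, B) for _ in range(200)]
```

Averaging over such samples estimates the probability of individual paths, the quantity whose Monte Carlo error the abstract discusses.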
Robust Hidden Markov Models for Geophysical Data Analysis
NASA Astrophysics Data System (ADS)
Granat, R. A.
2002-12-01
We employed robust hidden Markov models (HMMs) to perform statistical analysis of seismic events and crustal deformation. These models allowed us to classify different kinds of events or modes of deformation, and furthermore gave us a statistical basis for understanding relationships between different classes. A hidden Markov model is a statistical model for ordered data (typically in time). The observed data is assumed to have been generated by an unobservable statistical process of a particular form. This process is such that each observation is coincident with the system being in a particular discrete state. Furthermore, the next state is dependent on the current state; in other words, it is a first order Markov process. The model is completely described by a set of model parameters: the initial state probabilities, the first order Markov chain state-to-state transition probabilities, and the probabilities of observable outputs associated with each state. Application of the model to data involves optimizing these model parameters with respect to some function of the observations, typically the likelihood of the observations given the model. Our work focused on the fact that this objective function typically has a number of local maxima that is exponential in the model size (the number of states). This means that not only is it very difficult to discover the global maximum, but also that results can vary widely between applications of the model. For some domains, such as speech processing, sufficient a priori information about the system is available such that this problem can be avoided. However, for general scientific analysis, such a priori information is often not available, especially in cases where the HMM is being used as an exploratory tool for scientific understanding. Such was the case for the geophysical data sets used in this work. Our approach involves analytical location of sub-optimal local maxima; once the locations of these maxima have been found
A fourth order spline collocation approach for a business cycle model
NASA Astrophysics Data System (ADS)
Sayfy, A.; Khoury, S.; Ibdah, H.
2013-10-01
A collocation approach, based on fourth-order cubic B-splines, is presented for the numerical solution of a Kaleckian business cycle model formulated by a nonlinear delay differential equation. The equation is approximated and the nonlinearity is handled by employing an iterative scheme arising from Newton's method. It is shown that the model exhibits a conditionally stable dynamical cycle. The fourth-order rate of convergence of the scheme is verified numerically for different special cases.
Modelling modal gating of ion channels with hierarchical Markov models
Fackrell, Mark; Crampin, Edmund J.; Taylor, Peter
2016-01-01
Many ion channels spontaneously switch between different levels of activity. Although this behaviour, known as modal gating, has been observed for a long time, it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, then enables us to combine these components so that in the integrated model both mode switching as well as the kinetics within modes are appropriately represented. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parametrized using experimental data and show that it provides a better representation than a previous model of the same dataset. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models. PMID:27616917
[Towards a clinical approach in institutions in order to enable dreams].
Ponroy, Annabelle
2013-01-01
Care protocols and their proliferation tend to dampen the enthusiasm of professionals in their daily practice. An institution's clinical approach must be designed in terms of admission in order not to leave madness on the threshold of care. Trusting the enthusiasm and desire of nurses means favouring creativity within practices. PMID:24059145
Contemplative Practices and Orders of Consciousness: A Constructive-Developmental Approach
ERIC Educational Resources Information Center
Silverstein, Charles H.
2012-01-01
This qualitative study explores the correspondence between contemplative practices and "orders of consciousness" from a constructive-developmental perspective, using Robert Kegan's approach. Adult developmental growth is becoming an increasingly important influence on humanity's ability to deal effectively with the growing…
NASA Astrophysics Data System (ADS)
Garcia Fernandez, M.; Butala, M.; Komjathy, A.; Desai, S. D.
2012-12-01
Correcting GNSS tracking data for second order ionospheric effects has been shown to cause a southward shift in GNSS-based precise point positioning solutions by as much as 10 mm, depending on the solar cycle conditions. The most commonly used approaches for modeling the higher order ionospheric effect include (a) using global ionosphere maps to determine vertical total electron content (VTEC) and converting to slant TEC (STEC) assuming a thin shell ionosphere, and (b) using the dual-frequency measurements themselves to determine STEC. The latter approach benefits from not requiring ionospheric mapping functions between VTEC and STEC. However, it requires calibration with receiver and transmitter Differential Code Biases (DCBs). We present results from comparisons of the two approaches. For the first approach, we also compare the use of VTEC observations from IONEX maps with climatological model-derived VTEC as provided by the International Reference Ionosphere (IRI2012). We consider various metrics to evaluate the relative performance of the different approaches, including station repeatability, GNSS-based reference frame recovery, and post-fit measurement residuals. Overall, the GIM-based approaches tend to provide lower noise in the second order ionosphere corrections and positioning solutions. The use of IONEX and IRI2012 models of VTEC provides similar results, especially in periods of low solar activity. The use of the IRI2012 model provides a convenient approach for operational scenarios by eliminating the dependence on routine updates of the GIMs, and also serves as a useful source of VTEC when IONEX maps may not be readily available.
A unified approach for a posteriori high-order curved mesh generation using solid mechanics
NASA Astrophysics Data System (ADS)
Poya, Roman; Sevilla, Ruben; Gil, Antonio J.
2016-06-01
The paper presents a unified approach for the a posteriori generation of arbitrary high-order curvilinear meshes via a solid mechanics analogy. The approach encompasses a variety of methodologies, ranging from the popular incremental linear elastic approach to very sophisticated non-linear elasticity. In addition, an intermediate consistent incrementally linearised approach is also presented and applied for the first time in this context. Utilising a consistent derivation from energy principles, a theoretical comparison of the various approaches is presented which enables a detailed discussion regarding the material characterisation (calibration) employed for the different solid mechanics formulations. Five independent quality measures are proposed and their relations with existing quality indicators, used in the context of a posteriori mesh generation, are discussed. Finally, a comprehensive range of numerical examples, both in two and three dimensions, including challenging geometries of interest to the solids, fluids and electromagnetics communities, are shown in order to illustrate and thoroughly compare the performance of the different methodologies. This comparison considers the influence of material parameters and number of load increments on the quality of the generated high-order mesh, overall computational cost and, crucially, the approximation properties of the resulting mesh when considering an isoparametric finite element formulation.
A unified approach for a posteriori high-order curved mesh generation using solid mechanics
NASA Astrophysics Data System (ADS)
Poya, Roman; Sevilla, Ruben; Gil, Antonio J.
2016-09-01
The paper presents a unified approach for the a posteriori generation of arbitrary high-order curvilinear meshes via a solid mechanics analogy. The approach encompasses a variety of methodologies, ranging from the popular incremental linear elastic approach to very sophisticated non-linear elasticity. In addition, an intermediate consistent incrementally linearised approach is also presented and applied for the first time in this context. Utilising a consistent derivation from energy principles, a theoretical comparison of the various approaches is presented which enables a detailed discussion regarding the material characterisation (calibration) employed for the different solid mechanics formulations. Five independent quality measures are proposed and their relations with existing quality indicators, used in the context of a posteriori mesh generation, are discussed. Finally, a comprehensive range of numerical examples, both in two and three dimensions, including challenging geometries of interest to the solids, fluids and electromagnetics communities, are shown in order to illustrate and thoroughly compare the performance of the different methodologies. This comparison considers the influence of material parameters and number of load increments on the quality of the generated high-order mesh, overall computational cost and, crucially, the approximation properties of the resulting mesh when considering an isoparametric finite element formulation.
Hidden order and flux attachment in symmetry-protected topological phases: A Laughlin-like approach
NASA Astrophysics Data System (ADS)
Ringel, Zohar; Simon, Steven H.
2015-05-01
Topological phases of matter are distinct from conventional ones by their lack of a local order parameter. Still, in the quantum Hall effect, hidden order parameters exist and constitute the basis for the celebrated composite-particle approach. Whether similar hidden orders exist in 2D and 3D symmetry protected topological phases (SPTs) is a largely open question. Here, we introduce a new approach for generating SPT ground states, based on a generalization of the Laughlin wave function. This approach gives a simple and unifying picture of some classes of SPTs in 1D and 2D, and reveals their hidden order and flux attachment structures. For the 1D case, we derive exact relations between the wave functions obtained in this manner and group cohomology wave functions, as well as matrix product state classification. For the 2D Ising SPT, strong analytical and numerical evidence is given to show that the wave function obtained indeed describes the desired SPT. The Ising SPT then appears as a state with quasi-long-range order in composite degrees of freedom consisting of Ising-symmetry charges attached to Ising-symmetry fluxes.
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
Markov Chain Monte Carlo and Irreversibility
NASA Astrophysics Data System (ADS)
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
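The reversible baseline discussed above can be sketched concretely: Metropolis-Hastings with a symmetric proposal satisfies detailed balance pi(x)P(x,y) = pi(y)P(y,x), so pi is invariant and the chain's empirical frequencies converge to pi. The discrete four-state target below is an illustrative assumption.

```python
# Sketch of a reversible MCMC chain: Metropolis-Hastings on four states
# with a symmetric nearest-neighbour proposal on a cycle.
import random

random.seed(0)
pi = [0.1, 0.2, 0.3, 0.4]            # target measure on 4 states

def mh_step(x):
    y = random.choice([(x - 1) % 4, (x + 1) % 4])   # symmetric proposal
    # accept with min(1, pi(y)/pi(x)); rejection keeps the chain at x
    return y if random.random() < min(1.0, pi[y] / pi[x]) else x

x, visits = 0, [0, 0, 0, 0]
n_steps = 200000
for _ in range(n_steps):
    x = mh_step(x)
    visits[x] += 1

freqs = [v / n_steps for v in visits]
```

The nonreversible schemes surveyed in the paper give up the detailed balance used in the acceptance rule above while still preserving pi as the invariant measure.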
A Markov model of the Indus script
Rao, Rajesh P. N.; Yadav, Nisha; Vahia, Mayank N.; Joglekar, Hrishikesh; Adhikari, R.; Mahadevan, Iravatham
2009-01-01
Although no historical information exists about the Indus civilization (flourished ca. 2600–1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich syntactic structure and the ability to represent diverse content, both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system. PMID:19666571
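The bigram (first-order Markov) sign model described above can be sketched in a few lines: estimate smoothed sign-to-sign transition probabilities from a corpus, then use them to fill in a missing sign with its most likely prediction. The "signs" and training sequences are invented placeholders, not actual Indus sign inventories.

```python
# Sketch of a smoothed bigram sign model with a fill-in-the-blank query.
from collections import defaultdict

texts = [["A", "B", "C"], ["A", "B", "D"], ["B", "C"], ["A", "B", "C"]]

counts = defaultdict(lambda: defaultdict(int))
for t in texts:
    for a, b in zip(t, t[1:]):
        counts[a][b] += 1

def prob(a, b, vocab_size=4, alpha=1.0):
    """Smoothed P(next = b | current = a), add-alpha smoothing."""
    total = sum(counts[a].values())
    return (counts[a][b] + alpha) / (total + alpha * vocab_size)

def most_likely_next(a):
    """Fill in a missing sign after a damaged position."""
    return max("ABCD", key=lambda b: prob(a, b))

guess = most_likely_next("B")
```

Scoring a whole string as a product of such bigram probabilities gives the membership test the abstract applies to the West Asian seals.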
A Markov model of the Indus script.
Rao, Rajesh P N; Yadav, Nisha; Vahia, Mayank N; Joglekar, Hrishikesh; Adhikari, R; Mahadevan, Iravatham
2009-08-18
Although no historical information exists about the Indus civilization (flourished ca. 2600-1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich syntactic structure and the ability to represent diverse content, both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system. PMID:19666571
Approximating Markov Chains: What and why
Pincus, S.
1996-06-01
Much of the current study of dynamical systems is focused on geometry (e.g., chaos and bifurcations) and ergodic theory. Yet dynamical systems were originally motivated by an attempt to "solve," or at least understand, a discrete-time analogue of differential equations. As such, numerical, analytical solution techniques for dynamical systems would seem desirable. We discuss an approach that provides such techniques, the approximation of dynamical systems by suitable finite state Markov Chains. Steady state distributions for these Markov Chains, a straightforward calculation, will converge to the true dynamical system steady state distribution, with appropriate limit theorems indicated. Thus (i) approximation by a computable, linear map holds the promise of vastly faster steady state solutions for nonlinear, multidimensional differential equations; (ii) the solution procedure is unaffected by the presence or absence of a probability density function for the attractor, entirely skirting singularity, fractal/multifractal, and renormalization considerations. The theoretical machinery underpinning this development also implies that under very general conditions, steady state measures are weakly continuous with control parameter evolution. This means that even though a system may change periodicity, or become chaotic in its limiting behavior, such statistical parameters as the mean, standard deviation, and tail probabilities change continuously, not abruptly with system evolution. © 1996 American Institute of Physics.
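The construction described above can be sketched directly: partition the state space into bins, build a finite-state transition matrix by mapping sample points from each bin through the dynamics, and take the chain's steady state as an approximation to the invariant distribution. The logistic map and the bin count are illustrative choices, not the paper's examples.

```python
# Sketch: approximate the invariant distribution of a chaotic map by a
# finite-state Markov chain over bins of [0, 1].

N = 50                               # number of bins on [0, 1]
f = lambda x: 4.0 * x * (1.0 - x)    # logistic map (fully chaotic)

# transition matrix: where do sample points of bin i land?
P = [[0.0] * N for _ in range(N)]
samples_per_bin = 100
for i in range(N):
    for k in range(samples_per_bin):
        x = (i + (k + 0.5) / samples_per_bin) / N
        j = min(int(f(x) * N), N - 1)
        P[i][j] += 1.0 / samples_per_bin

# steady state by power iteration -- the "straightforward calculation"
mu = [1.0 / N] * N
for _ in range(500):
    mu = [sum(mu[i] * P[i][j] for i in range(N)) for j in range(N)]
```

For the logistic map the true invariant density 1/(pi*sqrt(x(1-x))) concentrates mass near the endpoints, and the approximate steady state reproduces that shape.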
Equilibrium Control Policies for Markov Chains
Malikopoulos, Andreas
2011-01-01
The average cost criterion has held great intuitive appeal and has attracted considerable attention. It is widely employed when controlling dynamic systems that evolve stochastically over time by means of formulating an optimization problem to achieve long-term goals efficiently. The average cost criterion is especially appealing when the decision-making process is long compared to other timescales involved, and there is no compelling motivation to select short-term optimization. This paper addresses the problem of controlling a Markov chain so as to minimize the average cost per unit time. Our approach treats the problem as a dual constrained optimization problem. We derive conditions guaranteeing that a saddle point exists for the new dual problem and we show that this saddle point is an equilibrium control policy for each state of the Markov chain. For practical situations with constraints consistent to those we study here, our results imply that recognition of such saddle points may be of value in deriving in real time an optimal control policy.
Activity of Excitatory Neuron with Delayed Feedback Stimulated with Poisson Stream is Non-Markov
NASA Astrophysics Data System (ADS)
Vidybida, Alexander K.
2015-09-01
For a class of excitatory spiking neuron models with delayed feedback fed with a Poisson stochastic process, it is proven that the stream of output interspike intervals cannot be presented as a Markov process of any order.
Higher-order prediction terms and fixing the renormalization scale using the BLM approach
NASA Astrophysics Data System (ADS)
Mirjalili, Abolfazl; Khellat, Mohammad Reza
2014-12-01
There is an ambiguity in the perturbative series of QCD observables in how to choose the renormalization and, even more, the factorization scale. There are many approaches to overcome this obstacle and fix the scales. Among them is the Brodsky-Lepage-Mackenzie (BLM) approach, which is based on an intriguing principle. Following the BLM approach, we absorb the nf-terms of the pQCD series, which govern the running behavior of the coupling, into the running coupling itself. We make extensive use of the BLM approach to investigate the details of predicting higher order correction terms of some QCD observables, and in this way we test different methods to improve the prediction process. We also find that an overall normalization can change BLM predictions effectively.
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Sachdev, Subir
2015-08-01
We study charge-ordered solutions for fermions on a square lattice interacting with dynamic antiferromagnetic fluctuations. Our approach is based on real-space Eliashberg equations, which are solved self-consistently. We first show that the antiferromagnetic fluctuations can induce arc features in the spectral functions, as spectral weight is suppressed at the hot spots; however, no real pseudogap is generated. At low temperature, spontaneous charge order with a d -form factor can be stabilized for certain parameters. As long as the interacting Fermi surfaces possess hot spots, the ordering wave vector corresponds to the diagonal connection of the hot spots, similar to the non-self-consistent case. Tendencies towards observed axial order only appear in situations without hot spots.
The Analysis of Rush Orders Risk in Supply Chain: A Simulation Approach
NASA Technical Reports Server (NTRS)
Mahfouz, Amr; Arisha, Amr
2011-01-01
Satisfying customers by delivering demands at the agreed time, at competitive prices, and at a satisfactory quality level are crucial requirements for supply chain survival. The incidence of risks in a supply chain often causes sudden disruptions in its processes and consequently leads customers to lose trust in a company's competence. Rush orders are considered to be one of the main types of supply chain risk due to their negative impact on overall performance. Using integrated definition modeling approaches (i.e. IDEF0 & IDEF3) and a simulation modeling technique, a comprehensive integrated model has been developed to assess rush order risks and examine two risk mitigation strategies. Detailed function sequences and object flows were conceptually modeled to reflect the macro and micro levels of the studied supply chain. Discrete event simulation models were then developed to assess and investigate the mitigation strategies for rush order risks; the objective is to minimize order cycle time and cost.
Multiensemble Markov models of molecular thermodynamics and kinetics
Wu, Hao; Paul, Fabian; Noé, Frank
2016-01-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model. PMID:27226302
Yao, Weiguang; Leszczynski, Konrad W
2009-07-01
Recently, the authors proposed an analytical scheme to estimate the first-order x-ray scatter by approximating the Klein-Nishina formula so that the first-order scatter fluence is expressed as a function of the primary photon fluence on the detector. In this work, the authors apply the scheme to experimentally obtained 6 MV cone beam CT projections in which the primary photon fluence is the unknown of interest. With the assumption that the higher-order scatter fluence is either constant or proportional to the first-order scatter fluence, an iterative approach is proposed to estimate both primary and scatter fluences from projections by utilizing their relationship. The iterative approach is evaluated by comparison with experimentally measured scatter-primary ratios of a Catphan phantom and with Monte Carlo simulations of virtual phantoms. The iterations converge quickly and the scatter correction is accurate. For a sufficiently long cylindrical water phantom with a 10 cm radius, the relative error of the estimated primary photon fluence was within +/-2% and +/-4% when the phantom was projected with 6 MV and 120 kVp x-ray imaging systems, respectively. In addition, the iterative approach for scatter estimation is applied to 6 MV x-ray projections of QUASAR and anthropomorphic phantoms (head and pelvis). The scatter correction is demonstrated to significantly improve the accuracy of the reconstructed linear attenuation coefficients and the contrast of the projections and reconstructed volumetric images generated with a linac 6 MV beam. PMID:19673214
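The alternating primary/scatter estimation described above can be illustrated with a minimal fixed-point sketch. The scatter model here is a hypothetical stand-in (scatter proportional to a local average of the primary), not the Klein-Nishina-based first-order model of the paper:

```python
import numpy as np

def estimate_primary(total, scatter_model, n_iter=20):
    """Fixed-point iteration: primary_{n+1} = total - scatter(primary_n)."""
    primary = total.copy()                 # initial guess: everything is primary
    for _ in range(n_iter):
        primary = total - scatter_model(primary)
        np.clip(primary, 0.0, None, out=primary)   # fluence cannot be negative
    return primary

# Stand-in scatter model: scatter is a fraction of a locally averaged primary
def toy_scatter(p, frac=0.3):
    return frac * np.convolve(p, np.ones(5) / 5, mode="same")

true_primary = np.ones(50)
measured = true_primary + toy_scatter(true_primary)   # simulated projection
recovered = estimate_primary(measured, toy_scatter)
```

Because the toy scatter operator is a contraction (its magnitude is a 0.3 fraction of the primary), the iteration converges geometrically to the true primary fluence.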
Symbolic Heuristic Search for Factored Markov Decision Processes
NASA Technical Reports Server (NTRS)
Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.
2003-01-01
We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.
Markov Tracking for Agent Coordination
NASA Technical Reports Server (NTRS)
Washington, Richard; Lau, Sonie (Technical Monitor)
1998-01-01
Partially observable Markov decision processes (POMDPs) are an attractive framework for representing agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs is in general computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that limits the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making the approach amenable to coordination with complex agents.
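Tracking an agent modeled as a POMDP rests on the standard belief-state update, which can be sketched in a few lines. The two-state transition and observation matrices below are hypothetical, purely for illustration:

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """One POMDP belief update: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    pred = b @ T[a]               # predicted state distribution after action a
    b_new = pred * O[a][:, o]     # weight by the likelihood of observation o
    return b_new / b_new.sum()    # renormalize to a probability distribution

# Toy 2-state, 1-action, 2-observation model
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}    # T[a][s, s']
O = {0: np.array([[0.8, 0.2], [0.3, 0.7]])}    # O[a][s', o]
b = np.array([0.5, 0.5])
b = belief_update(b, T, O, a=0, o=0)
```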
Orbiting binary black hole evolutions with a multipatch high order finite-difference approach
Pazos, Enrique; Tiglio, Manuel; Duez, Matthew D.; Kidder, Lawrence E.; Teukolsky, Saul A.
2009-07-15
We present numerical simulations of orbiting black holes for around 12 cycles, using a high order multipatch approach. Unlike some other approaches, the computational speed scales almost perfectly for thousands of processors. Multipatch methods are an alternative to adaptive mesh refinement, with benefits of simplicity and better scaling for improving the resolution in the wave zone. The results presented here pave the way for multipatch evolutions of black hole-neutron star and neutron star-neutron star binaries, where high resolution grids are needed to resolve details of the matter flow.
Mapping eQTL Networks with Mixed Graphical Markov Models
Tur, Inma; Roverato, Alberto; Castelo, Robert
2014-01-01
Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene-expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular, and environmental perturbations. From a multivariate perspective, one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype. In this article we approach this challenge with mixed graphical Markov models, higher-order conditional independences, and q-order correlation graphs. These models show that additive genetic effects propagate through the network as a function of gene–gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes. PMID:25271303
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced-order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition with projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first-arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem amounts to estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is tightly linked to the computational cost of the forward model. Although fast algorithms have been recently developed for computing first-arrival traveltimes of seismic waves, completely exploring the posterior distribution of the velocity model is hardly feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of classical single-chain MCMC, we propose to let several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the
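The interacting-chains idea can be sketched with a minimal parallel-tempering (replica-exchange) sampler on a toy bimodal target. This is a generic illustration under simple assumptions (1-D Gaussian mixture, random-walk Metropolis moves), not the authors' tomography code:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Bimodal toy posterior: mixture of two well-separated unit Gaussians."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

def parallel_tempering(n_steps=20000, temps=(1.0, 2.0, 4.0, 8.0), step=1.0):
    x = np.zeros(len(temps))            # one walker per temperature
    samples = []
    for _ in range(n_steps):
        # Metropolis move within each chain, at its own temperature
        for k, T in enumerate(temps):
            prop = x[k] + step * rng.normal()
            if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
                x[k] = prop
        # Propose swapping the states of a random adjacent pair of chains
        k = rng.integers(len(temps) - 1)
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.random()) < d:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])            # keep only the cold chain
    return np.array(samples)

samples = parallel_tempering()
```

The hot chains cross the barrier between the two modes easily, and the swap moves transport those crossings down to the cold chain, which a single cold chain would rarely manage on its own.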
NASA Astrophysics Data System (ADS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2016-09-01
We propose arbitrary high-order discontinuous Galerkin (DG) schemes that are designed based on a first-order hyperbolic advection-diffusion formulation of the target governing equations. We present in detail the efficient construction of the proposed high-order schemes (called DG-H), and show that these schemes have the same number of global degrees-of-freedom as comparable conventional high-order DG schemes, produce solutions and solution gradients of the same or higher order of accuracy, are exact for exact polynomial functions, and do not need a second-derivative diffusion operator. We demonstrate that the constructed high-order schemes give excellent-quality solutions and solution gradients on irregular triangular elements. We also construct a Weighted Essentially Non-Oscillatory (WENO) limiter for the proposed DG-H schemes and apply it to discontinuous problems, and we make accuracy comparisons with conventional DG and interior penalty schemes. A relative qualitative cost analysis is also reported, which indicates that the high-order schemes produce orders of magnitude more accurate results than the low-order schemes for a given CPU time. Furthermore, we show that the proposed DG-H schemes are nearly as efficient as the DG and Interior-Penalty (IP) schemes, as these schemes produce results at roughly the same error level for a similar CPU time.
Lagrange-Function Approach to Real-Space Order-N Electronic-Structure Calculations
Varga, Kalman; Pantelides, Sokrates T
2006-01-01
The Lagrange functions are a family of analytical, complete, and orthonormal basis sets that are suitable for efficient, accurate, real-space, order-N electronic-structure calculations. Convergence is controlled by a single monotonic parameter, the dimension of the basis set, and computational complexity is lower than that of conventional approaches. In this paper we review their construction and applications in linear-scaling electronic-structure calculations.
Markov Models and the Ensemble Kalman Filter for Estimation of Sorption Rates
NASA Astrophysics Data System (ADS)
Vugrin, E. D.; McKenna, S. A.; White Vugrin, K.
2007-12-01
Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed: a Markov model that utilizes conditional probabilities to determine the rates, and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This work was supported under the Sandia Laboratory Directed Research and Development program.
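The EnKF analysis step that assimilates observations into an ensemble of rate estimates can be sketched generically. This is a stochastic EnKF with perturbed observations on a hypothetical scalar state, not the paper's transport model:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, y, H, R):
    """Stochastic EnKF analysis: nudge each member toward a perturbed observation."""
    X = ensemble                        # shape (n_members, n_state)
    A = X - X.mean(axis=0)              # ensemble anomalies
    P = A.T @ A / (len(X) - 1)          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(X))
    return X + (y_pert - X @ H.T) @ K.T

# Toy problem: a scalar "sorption rate" observed directly with noise
H = np.array([[1.0]])
R = np.array([[0.04]])
prior = rng.normal(0.2, 0.3, size=(100, 1))   # biased, spread-out prior ensemble
posterior = enkf_update(prior, np.array([0.8]), H, R)
```

The update pulls the biased ensemble mean toward the observation and shrinks the ensemble spread, which is the mechanism behind the bias correction reported in the abstract.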
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. PMID:25761965
Mixture Hidden Markov Models in Finance Research
NASA Astrophysics Data System (ADS)
Dias, José G.; Vermunt, Jeroen K.; Ramos, Sofia
Finite mixture models have proven to be a powerful framework whenever unobserved heterogeneity cannot be ignored. We introduce in finance research the Mixture Hidden Markov Model (MHMM) that takes into account time and space heterogeneity simultaneously. This approach is flexible in the sense that it can deal with the specific features of financial time series data, such as asymmetry, kurtosis, and unobserved heterogeneity. This methodology is applied to model simultaneously 12 time series of Asian stock markets indexes. Because we selected a heterogeneous sample of countries including both developed and emerging countries, we expect that heterogeneity in market returns due to country idiosyncrasies will show up in the results. The best fitting model was the one with two clusters at country level with different dynamics between the two regimes.
Plume mapping via hidden Markov methods.
Farrell, J A; Pang, Shuo; Li, Wei
2003-01-01
This paper addresses the problem of mapping likely locations of a chemical source using an autonomous vehicle operating in a fluid flow. The paper reviews biological plume-tracing concepts and previous strategies for vehicle-based plume tracing, and presents a new plume mapping approach based on hidden Markov methods (HMMs). HMMs provide efficient algorithms for predicting the likelihood of odor detection versus position, the likelihood of source location versus position, the most likely path taken by the odor to a given location, and the path between two points most likely to result in odor detection. All four are useful for solving the odor source localization problem using an autonomous vehicle. The vehicle is assumed to be capable of detecting an above-threshold chemical concentration and sensing the fluid flow velocity at the vehicle location. The fluid flow is assumed to vary with space and time, and to have a high Reynolds number (Re>10). PMID:18238238
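The likelihood computations that make HMMs attractive here can be illustrated with the scaled forward algorithm on a toy two-state detection model. The matrices are hypothetical, chosen only to mimic an "in plume"/"out of plume" scenario:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the numerically stable scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

# Toy 2-state model: "in plume" vs "out of plume"; observations: 0=hit, 1=miss
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.7, 0.3], [0.05, 0.95]])   # emission probabilities per state
ll = forward_loglik([0, 0, 1, 0], pi, A, B)
```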
Multiple alignment using hidden Markov models
Eddy, S.R.
1995-12-31
A simulated annealing method is described for training hidden Markov models and producing multiple sequence alignments from initially unaligned protein or DNA sequences. Simulated annealing in turn uses a dynamic programming algorithm for correctly sampling suboptimal multiple alignments according to their probability and a Boltzmann temperature factor. The quality of simulated annealing alignments is evaluated on structural alignments of ten different protein families and compared to the performance of other HMM training methods and the ClustalW program. Simulated annealing is better able to find near-global optima in the multiple alignment probability landscape than the other tested HMM training methods. Neither ClustalW nor simulated annealing consistently produces better alignments than the other. Examination of the specific cases in which ClustalW outperforms simulated annealing, and vice versa, provides insight into the strengths and weaknesses of current hidden Markov model approaches.
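The accept/reject rule at the heart of simulated annealing fits in a few lines. This generic sketch minimizes a toy rugged function rather than sampling alignments, and all constants (temperature schedule, step size) are illustrative:

```python
import math, random

random.seed(42)

def anneal(energy, x0, neighbor, t0=10.0, cooling=0.9995, n_steps=20000):
    """Generic simulated annealing: always accept downhill moves, and accept
    uphill moves with probability exp(-dE / T) at the current temperature T."""
    x, e, t = x0, energy(x0), t0
    best_x, best_e = x, e
    for _ in range(n_steps):
        cand = neighbor(x)
        ce = energy(cand)
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                      # geometric cooling schedule
    return best_x, best_e

# Toy rugged landscape: global minimum 0 at x = 0, with many local minima
f = lambda x: x * x + 3.0 * math.sin(5.0 * x) ** 2
x_best, e_best = anneal(f, x0=4.0, neighbor=lambda x: x + random.uniform(-0.5, 0.5))
```

Early on, the high temperature lets the search hop between basins; as it cools, the search settles into a deep basin, which is the same mechanism that lets the alignment sampler escape local optima in the multiple-alignment probability landscape.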
Markov state models of biomolecular conformational dynamics
Chodera, John D.; Noé, Frank
2014-01-01
It has recently become practical to construct Markov state models (MSMs) that reproduce the long-time statistical conformational dynamics of biomolecules using data from molecular dynamics simulations. MSMs can predict both stationary and kinetic quantities on long timescales (e.g. milliseconds) using a set of atomistic molecular dynamics simulations that are individually much shorter, thus addressing the well-known sampling problem in molecular dynamics simulation. In addition to providing predictive quantitative models, MSMs greatly facilitate both the extraction of insight into biomolecular mechanism (such as folding and functional dynamics) and quantitative comparison with single-molecule and ensemble kinetics experiments. A variety of methodological advances and software packages now bring the construction of these models closer to routine practice. Here, we review recent progress in this field, considering theoretical and methodological advances, new software tools, and recent applications of these approaches in several domains of biochemistry and biophysics, commenting on remaining challenges. PMID:24836551
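At its core, constructing a basic MSM amounts to counting transitions at a chosen lag time and row-normalizing. A minimal sketch on a synthetic two-state trajectory (toy numbers, not a molecular system; production MSM software adds clustering, reversibility constraints, and error estimation):

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition matrix from a discrete trajectory at a lag."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def stationary(T):
    """Stationary distribution: leading left eigenvector of T, normalized."""
    w, v = np.linalg.eig(T.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()

# Synthetic two-state trajectory generated from a known transition matrix
rng = np.random.default_rng(3)
T_true = np.array([[0.95, 0.05], [0.10, 0.90]])
traj = [0]
for _ in range(50000):
    traj.append(rng.choice(2, p=T_true[traj[-1]]))
T_est = estimate_msm(np.array(traj), n_states=2)
```

Even though each step of the trajectory is short-range, the estimated matrix recovers the long-time behavior: its stationary distribution matches the analytic value [2/3, 1/3] of the generating chain.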
Growth and Dissolution of Macromolecular Markov Chains
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2016-07-01
The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.
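A kth-order Markov chain over monomeric units can be fitted empirically by conditioning on the previous k units. A minimal sketch on a toy alternating copolymer (the sequence and k are illustrative, not from the paper):

```python
from collections import Counter, defaultdict

def fit_kth_order(seq, k):
    """Empirical transition probabilities P(next unit | last k units)."""
    counts = defaultdict(Counter)
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    return {ctx: {s: n / sum(c.values()) for s, n in c.items()}
            for ctx, c in counts.items()}

# Toy copolymer: a strictly alternating A/B sequence is a 2nd-order chain
# in which the next unit deterministically copies the unit two places back.
seq = "ABABABABABABABAB" * 20
model = fit_kth_order(seq, k=2)
```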
Behavior Detection using Confidence Intervals of Hidden Markov Models
Griffin, Christopher H
2009-01-01
Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback of this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting an HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data. PMID:17063681
Operations and support cost modeling using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit
1989-01-01
Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
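One standard way to turn a Markov chain into a cost model is via an absorbing chain: transient operational states accumulate per-visit costs, and the fundamental matrix gives expected totals until absorption (retirement). The states, probabilities, and costs below are hypothetical, not from the paper:

```python
import numpy as np

# Toy operations-and-support model. Transient states are processing steps;
# "retired" is an implicit absorbing state (each row leaves 0.1 for it).
states = ["inspect", "repair", "operate"]
Q = np.array([[0.0, 0.3, 0.6],     # inspect -> repair / operate
              [0.2, 0.1, 0.6],     # repair  -> ...
              [0.3, 0.0, 0.6]])    # operate -> ...
cost = np.array([1.0, 5.0, 0.5])   # cost incurred per visit to each state

# Fundamental matrix N = (I - Q)^(-1); N[i, j] = expected visits to j from i.
N = np.linalg.inv(np.eye(3) - Q)
expected_cost = N @ cost           # expected life-cycle OS cost per start state
```

This is the sense in which a Markov model accounts for "actual instead of ideal processes": rework loops (e.g. repeated repairs) show up automatically as extra expected visits, and therefore extra expected cost.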
Comparing quantum versus Markov random walk models of judgements measured by rating scales
Wang, Z.; Busemeyer, J. R.
2016-01-01
Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements from either their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum over the Markov model. PMID:26621984
COCIS: Markov processes in single molecule fluorescence
Talaga, David S.
2009-01-01
This article examines the current status of Markov processes in single molecule fluorescence. For molecular dynamics to be described by a Markov process, the Markov process must include all states involved in the dynamics, and the first-passage time (FPT) distributions out of those states must be describable by a simple exponential law. The observation of non-exponential first-passage time distributions or other evidence of non-Markovian dynamics is common in single molecule studies and offers an opportunity to expand the Markov model to include new dynamics or states that improve understanding of the system. PMID:19543444
A compositional framework for Markov processes
NASA Astrophysics Data System (ADS)
Baez, John C.; Fong, Brendan; Pollard, Blake S.
2016-03-01
We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences.
Siebert, Matthias; Söding, Johannes
2016-07-27
Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs, nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k - 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26-101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
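The idea of letting order-(k-1) probabilities act as priors for order k can be sketched with pseudocount interpolation. This is a simplified stand-in for the published BaMM training procedure; the pseudocount strength alpha and the sequences are illustrative:

```python
from collections import Counter, defaultdict

def interpolated_probs(seqs, k, alpha=20.0):
    """Order-k conditionals with order-(k-1) estimates as pseudocount priors:
    p_k(a|ctx) = (n(ctx, a) + alpha * p_{k-1}(a|ctx[1:])) / (n(ctx) + alpha).
    Order-0 probabilities are plain letter frequencies."""
    if k == 0:
        c = Counter(a for s in seqs for a in s)
        total = sum(c.values())
        return {"": {a: n / total for a, n in c.items()}}
    prev = interpolated_probs(seqs, k - 1, alpha)
    counts = defaultdict(Counter)
    for s in seqs:
        for i in range(len(s) - k):
            counts[s[i:i + k]][s[i + k]] += 1
    probs = {}
    for ctx, c in counts.items():
        n = sum(c.values())
        lower = prev[ctx[1:]]          # drop the oldest symbol of the context
        probs[ctx] = {a: (c.get(a, 0) + alpha * p) / (n + alpha)
                      for a, p in lower.items()}
    return probs

probs = interpolated_probs(["ACGTACGTACGT", "ACGAACGT"], k=2)
```

With few counts, each conditional stays close to its lower-order parent; with many counts, the data dominate. This is the sense in which model complexity adapts to the amount of available data.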
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high order CESE schemes which avoid the common shortcomings of traditional high order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh point, a system of linear/nonlinear equations involving all the mesh variables associated with that point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques which are needed to overcome stability problems but often cause undesirable side effects. In fact, it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-, ... order versions which have the same stencil and same stability conditions as the 2nd-order scheme, and also retain all other advantages of the latter scheme. A sketch of multidimensional extensions will also be provided.
High-order fully general-relativistic hydrodynamics: new approaches and tests
NASA Astrophysics Data System (ADS)
Radice, David; Rezzolla, Luciano; Galeazzi, Filippo
2014-04-01
We present a new approach for achieving high-order convergence in fully general-relativistic hydrodynamic simulations. The approach is implemented in WhiskyTHC, a new code that makes use of state-of-the-art numerical schemes and was key in achieving, for the first time, higher than second-order convergence in the calculation of the gravitational radiation from inspiraling binary neutron stars (Radice et al 2014 Mon. Not. R. Astron. Soc. 437 L46-L50). Here, we give a detailed description of the algorithms employed and present results obtained for a series of classical tests involving isolated neutron stars. In addition, using the gravitational-wave emission from the late-inspiral and merger of binary neutron stars, we make a detailed comparison between the results obtained with the new code and those obtained when using standard second-order schemes commonly employed for matter simulations in numerical relativity. We find that even at moderate resolutions and for binaries with large compactness, the phase accuracy is improved by a factor 50 or more.
Markov constant and quantum instabilities
NASA Astrophysics Data System (ADS)
Pelantová, Edita; Starosta, Štěpán; Znojil, Miloslav
2016-04-01
For a qualitative analysis of the spectra of certain two-dimensional rectangular-well quantum systems, several rigorous methods of number theory are shown to be productive and useful. These methods (and, in particular, a generalization of the concept of the Markov constant known in Diophantine approximation theory) are shown to provide new mathematical insight into the phenomenologically relevant occurrence of anomalies in the spectra. Our results may inspire methodical innovations ranging from the description of the stability properties of metamaterials and of certain hiddenly unitary quantum evolution models to the clarification of the mechanisms of the occurrence of ghosts in quantum cosmology.
Adaptive relaxation for the steady-state analysis of Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham
1994-01-01
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
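The adaptive worklist idea can be sketched in a few lines of Python; the re-activation policy below (waking the successors of any state whose value changed) is an illustrative choice, not necessarily the paper's exact strategy:

```python
import numpy as np

def adaptive_gauss_seidel(P, tol=1e-10, max_sweeps=10000):
    """Stationary vector of an irreducible chain (pi P = pi), updating only
    a dynamic set of 'active' states instead of sweeping all states."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    active = set(range(n))
    sweeps = 0
    while active and sweeps < max_sweeps:
        sweeps += 1
        next_active = set()
        for i in sorted(active):
            new = pi @ P[:, i]          # in-place (Gauss-Seidel-style) update
            if abs(new - pi[i]) > tol:
                # a change at state i affects every state reachable from i
                next_active.update(np.flatnonzero(P[i]))
            pi[i] = new
        active = next_active
    return pi / pi.sum()
```

For an irreducible chain the iteration settles on a multiple of the stationary vector, which is normalized on return; the active set empties once no update exceeds the tolerance.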
Approach Detect Sensor System by Second Order Derivative of Laser Irradiation Area
NASA Astrophysics Data System (ADS)
Hayashi, Tomohide; Yano, Yoshikazu; Tsuda, Norio; Yamada, Jun
In recent years, as a result of large greenhouse gas emissions, the atmospheric temperature at ground level has gradually risen. The Kyoto Protocol was therefore adopted in 1997 to address this problem. Under the energy-saving law amended in 1999, it is advisable for an escalator to pause when it has no users. At present, a photoelectric sensor is used to control the escalator, but a pole is needed to install the sensor. Therefore, a new type of approach-detection sensor using a laser diode, a CCD camera and a CPLD, which can be built into the escalator, has been studied. This sensor derives the irradiated area of the laser beam by simple processing, in which the laser beam is irradiated only in the odd field of the interlaced video signal. By taking the second-order derivative of the laser-irradiated area, the sensor detects only an approaching target and does not respond to targets that cross or stand still in the sensing area.
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
The Core-Shell Approach to Formation of Ordered Nanoporous Materials
Chang, Jeong H.; Wang, Li Q.; Shin, Yongsoon; Jeong, Byeongmoon; Birnbaum, Jerome C.; Exarhos, Gregory J.
2002-03-04
This work describes a novel core-shell approach for the preparation of ordered nanoporous ceramic materials that involves a self-assembly process at the molecular level using MPEG-b-PDLLA block copolymers. This approach provides for rapid self-assembly and structural reorganization at room temperature. Selected MPEG-b-PDLLA block copolymers were synthesized with systematic variation of the chain lengths of the resident hydrophilic and hydrophobic blocks. This allows the micelle size to be systematically varied. Results from this work are used to understand the formation mechanism of nanoporous structures, in which the pore size and wall thickness depend closely on the size of the hydrophobic cores and hydrophilic shells of the block copolymer templates. The core-shell mechanism for nanoporous structure evolution is based on the size and contrasting micellar packing arrangements that are controlled by the copolymer.
Optimal Control of Markov Processes with Age-Dependent Transition Rates
Ghosh, Mrinal K.; Saha, Subhamay
2012-10-15
We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both discounted and average cost cases.
Mohanasubha, R.; Chandrasekar, V. K.; Senthilvelan, M.; Lakshmanan, M.
2015-01-01
We unearth the interconnection between various analytical methods which are widely used in the current literature to identify integrable nonlinear dynamical systems described by third-order nonlinear ODEs. We establish an important interconnection between the extended Prelle–Singer procedure and λ-symmetries approach applicable to third-order ODEs to bring out the various linkages associated with these different techniques. By establishing this interconnection we demonstrate that given any one of the quantities as a starting point in the family consisting of Jacobi last multipliers, Darboux polynomials, Lie point symmetries, adjoint-symmetries, λ-symmetries, integrating factors and null forms one can derive the rest of the quantities in this family in a straightforward and unambiguous manner. We also illustrate our findings with three specific examples.
Modeling two-vehicle crash severity by a bivariate generalized ordered probit approach.
Chiou, Yu-Chiun; Hwang, Cherng-Chwan; Chang, Chih-Chin; Fu, Chiang
2013-03-01
This study simultaneously models crash severity of both parties in two-vehicle accidents at signalized intersections in Taipei City, Taiwan, using a novel bivariate generalized ordered probit (BGOP) model. Estimation results show that the BGOP model performs better than the conventional bivariate ordered probit (BOP) model in terms of goodness-of-fit indices and prediction accuracy and provides a better approach to identify the factors contributing to different severity levels. According to estimated parameters in latent propensity functions and elasticity effects, several key risk factors are identified: driver type (age>65), vehicle type (motorcycle), violation type (alcohol use), intersection type (three-leg and multiple-leg), collision type (rear-ended), and lighting conditions (night and night without illumination). Corresponding countermeasures for these risk factors are proposed. PMID:23246710
Reprint of "Modeling two-vehicle crash severity by a bivariate generalized ordered probit approach".
Chiou, Yu-Chiun; Hwang, Cherng-Chwan; Chang, Chih-Chin; Fu, Chiang
2013-12-01
This study simultaneously models crash severity of both parties in two-vehicle accidents at signalized intersections in Taipei City, Taiwan, using a novel bivariate generalized ordered probit (BGOP) model. Estimation results show that the BGOP model performs better than the conventional bivariate ordered probit (BOP) model in terms of goodness-of-fit indices and prediction accuracy and provides a better approach to identify the factors contributing to different severity levels. According to estimated parameters in latent propensity functions and elasticity effects, several key risk factors are identified: driver type (age>65), vehicle type (motorcycle), violation type (alcohol use), intersection type (three-leg and multiple-leg), collision type (rear-ended), and lighting conditions (night and night without illumination). Corresponding countermeasures for these risk factors are proposed. PMID:23915470
A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks
Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.
2013-06-15
A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the response surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high-fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses used to identify the applicable methods for simulating a planar fuel cell stack.
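The sample-evaluate-regress-reuse workflow can be sketched as follows; the two-parameter "detailed model" and the quadratic basis are illustrative stand-ins, not the actual stack model or its response surfaces:

```python
import numpy as np

rng = np.random.default_rng(0)

def detailed_model(x):
    """Stand-in for one expensive stack simulation; x = (param1, param2)."""
    return 1.0 + 0.5 * x[0] - 0.3 * x[1] + 0.2 * x[0] * x[1]

# 1) sample the design space over the input parameter ranges
X = rng.uniform(0.0, 1.0, size=(50, 2))
y = np.array([detailed_model(x) for x in X])

# 2) regress onto a quadratic response surface
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3) the fitted surface is a fast surrogate for system-level studies
def surrogate(x):
    return float(features(np.atleast_2d(np.asarray(x, float))) @ coef)
```

Once fitted, evaluating the surrogate is a handful of multiplications, which is what makes the coupling with system-level simulation practical.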
Arbitrary Lagrangian-Eulerian approach in reduced order modeling of a flow with a moving boundary
NASA Astrophysics Data System (ADS)
Stankiewicz, W.; Roszak, R.; Morzyński, M.
2013-06-01
Flow-induced deflections of aircraft structures result in oscillations that might turn into such dangerous phenomena as flutter or buffeting. In this paper the design of an aeroelastic system consisting of a Reduced Order Model (ROM) of the flow with a moving boundary is presented. The model is based on Galerkin projection of the governing equation onto the space spanned by modes obtained from high-fidelity computations. The motion of the boundary and mesh is defined in the Arbitrary Lagrangian-Eulerian (ALE) approach and results in an additional convective term in the Galerkin system. The developed system is demonstrated on the example of a flow around an oscillating wing.
Process-chain approach to high-order perturbation calculus for quantum lattice models
Eckardt, Andre
2009-05-15
A method based on Rayleigh-Schroedinger perturbation theory is developed that allows one to obtain high-order series expansions for ground-state properties of quantum lattice models. The approach is capable of treating both lattice geometries of large spatial dimensionality d and on-site degrees of freedom with large state-space dimensionality. It has recently been used to accurately compute the zero-temperature phase diagram of the Bose-Hubbard model on a hypercubic lattice, up to arbitrarily large filling and for d=2, 3, and greater [Teichmann et al., Phys. Rev. B 79, 100503(R) (2009)].
A new approach to the higher order superintegrability of the Tremblay-Turbiner-Winternitz system
NASA Astrophysics Data System (ADS)
Rañada, Manuel F.
2012-11-01
The higher order superintegrability of systems separable in polar coordinates is studied using an approach that was previously applied for the study of the superintegrability of a generalized Smorodinsky-Winternitz system. The idea is that the additional constant of motion can be factorized as the product of powers of two particular rather simple complex functions (here denoted by M and N). This technique leads to a proof of the superintegrability of the Tremblay-Turbiner-Winternitz system and to the explicit expression of the constants of motion. A second family (related with the first one) of superintegrable systems is also studied.
An approach for generating the first order structure of multi-movable zoom lens
NASA Astrophysics Data System (ADS)
Song, Qiang; Huang, Hao; Lv, Xiangbo; Zhu, Jing; Huang, Huijie
2016-01-01
This work provides a method to obtain the first-order structure of a zoom system based on the particle swarm optimization (PSO) algorithm. The kinematic rule of a zoom system with a fixed image plane is described by differential equations. The PSO algorithm is introduced to solve the differential equations, considering both the merit functions and the boundary constraints. The smoothness of the kinematic function of the zoom system is checked to ensure fabrication feasibility. Examples with two types of zoom system are presented to verify the proposed method. This approach provides a powerful and practical tool for the construction of a zoom structure.
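A minimal PSO sketch with a box (boundary) constraint; the objective here is a generic test function, not the zoom-lens merit function, and all hyperparameters are illustrative defaults:

```python
import numpy as np

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box [lo, hi] with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # enforce the boundary constraint
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())
```

In the paper's setting, f would score a candidate first-order structure against the merit functions, with the box encoding the boundary constraint on the lens parameters.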
Action approach to cosmological perturbations: the second-order metric in matter dominance
Boubekeur, Lotfi; Creminelli, Paolo; Vernizzi, Filippo; Norena, Jorge
2008-08-15
We study nonlinear cosmological perturbations during post-inflationary evolution, using the equivalence between a perfect barotropic fluid and a derivatively coupled scalar field with Lagrangian [-(∂φ)²]^((1+w)/2w). Since this Lagrangian is just a special case of k-inflation, this approach is analogous to the one employed in the study of non-Gaussianities from inflation. We use this method to derive the second-order metric during matter dominance in the comoving gauge directly as a function of the primordial inflationary perturbation ζ. Going to Poisson gauge, we recover the metric previously derived in the literature.
A second order residual based predictor-corrector approach for time dependent pollutant transport
NASA Astrophysics Data System (ADS)
Pavan, S.; Hervouet, J.-M.; Ricchiuto, M.; Ata, R.
2016-08-01
We present a second order residual distribution scheme for scalar transport problems in shallow water flows. The scheme, suitable for unsteady cases, is obtained by adapting to the shallow water context the explicit Runge-Kutta schemes for scalar equations [1]. The resulting scheme is decoupled from the hydrodynamics, yet the continuity equation has to be considered in order to respect some important numerical properties at the discrete level. Beyond the classical characteristics of the residual formulation presented in [1,2], we introduce the possibility of iterating the corrector step in order to improve the accuracy of the scheme. Another novelty is that the scheme is based on a precise monotonicity condition which guarantees that the maximum principle is respected. We thus end up with a scheme which is mass conservative, second order accurate, and monotone. These properties are checked in the numerical tests, where the proposed approach is also compared to some finite volume schemes on unstructured grids. The results obtained show the interest of adopting the predictor-corrector scheme for pollutant transport applications, where conservation of mass, monotonicity and accuracy are the most relevant concerns.
NASA Astrophysics Data System (ADS)
Maity, R.; Prasad, D.
2011-01-01
In this paper, a Split Markov Process (SMP) is developed to assess one-step-ahead variation of daily rainfall at a rain gauge station. SMP is an advancement of the general Markov Process (MP) and is specially developed for probabilistic assessment of change in daily rainfall magnitude. The approach is based on a first-order Markov chain to simulate daily rainfall variation at a point through a state/sub-state Transitional Probability Matrix (TPM). The state/sub-state TPM is based on the historical transitions from a particular state to a particular sub-state, which is the basic difference between SMP and general MP. In MP, the transition from a particular state to another state is investigated. However, in SMP, the daily rainfall magnitude is categorized into different states and the change in magnitude from one temporal step to another is categorized into different sub-states for the probabilistic assessment of rainfall variation. The cumulative state/sub-state TPM is represented in a contour plot at different probability levels. The developed cumulative state/sub-state TPM is used to assess the possible range of rainfall in the next time step, in a probabilistic sense. Application of SMP is investigated for daily rainfall at Khandwa station in the Nimar district of Madhya Pradesh, India. Eighty years of daily monsoon rainfall are used to develop the state/sub-state TPM and twenty years of data are used to investigate its performance. It is observed that the predicted range of daily rainfall captures the actual observed rainfall with few exceptions. Overall, the assessed range, particularly the upper limit, provides a quantification of the possible extreme value in the next time step, which is very useful information for tackling extreme events such as flooding, waterlogging, etc.
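The TPM machinery underneath rests on ordinary transition counting. A minimal sketch of estimating a first-order transition probability matrix from a categorical sequence (the three states below are illustrative rainfall bins, not the paper's classification):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic first-order TPM estimated from a categorical sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1                       # count observed transitions
    rows = counts.sum(axis=1, keepdims=True)
    # normalize each visited row; unvisited states keep an all-zero row
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```

The SMP refinement replaces the "to state" index by a sub-state describing the change in magnitude, but the counting and row-normalization are the same.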
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
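The flavor of such a high-level definition — generating states and transitions from a compact specification rather than enumerating them by hand — can be illustrated with a toy generator (the component-failure model and rates are hypothetical, not the paper's language):

```python
from itertools import product

def reliability_model(rates):
    """Generate a Markov reliability model from per-component failure rates.

    States are tuples of up/down flags, one per component; transitions are
    derived automatically: any 'up' component may fail at its own rate."""
    n = len(rates)
    transitions = {}
    for state in product((True, False), repeat=n):
        outs = []
        for i, up in enumerate(state):
            if up:  # component i can fail, sending the system to a new state
                nxt = state[:i] + (False,) + state[i + 1:]
                outs.append((nxt, rates[i]))
        transitions[state] = outs
    return transitions
```

Even this toy version shows the payoff: a two-line specification (the rate list) expands into the full 2^n-state transition structure without manual enumeration.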
The sharp constant in Markov's inequality for the Laguerre weight
Sklyarov, Vyacheslav P
2009-06-30
We prove that the polynomial of degree n that deviates least from zero in the uniformly weighted metric with Laguerre weight is the extremal polynomial in Markov's inequality for the norm of the kth derivative. Moreover, the corresponding sharp constant does not exceed (8^k n! k!)/((n-k)! (2k)!). For the derivative of a fixed order this bound is asymptotically sharp as n → ∞. Bibliography: 20 items.
Using Games to Teach Markov Chains
ERIC Educational Resources Information Center
Johnson, Roger W.
2003-01-01
Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
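Expected game length comes from standard absorbing-chain theory: with Q the transient-to-transient transition block, the fundamental matrix N = (I - Q)^(-1) gives expected visit counts, and its row sums give expected turns to absorption. A tiny illustrative board (not one of the games named above):

```python
import numpy as np

# Board: squares 0..3, spin +1 or +2 with equal probability, must land
# exactly on square 3; an overshooting spin is forfeited.
Q = np.array([              # transitions among the transient squares 0, 1, 2
    [0.0, 0.5, 0.5],        # from 0: to 1 or 2
    [0.0, 0.0, 0.5],        # from 1: to 2, or absorb into 3
    [0.0, 0.0, 0.5],        # from 2: +1 wins, +2 overshoots (stay put)
])
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
expected_turns = N.sum(axis=1)     # expected turns to finish from each square
```

The same computation applies to the real boards: only the size and contents of Q change.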
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
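A minimal sketch in the spirit of the note: two hidden states, two observable symbols, and the forward algorithm for the likelihood of an observation sequence (all probabilities are illustrative):

```python
import numpy as np

A = np.array([[0.7, 0.3],     # hidden-state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],     # emission probabilities P(symbol | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])     # initial hidden-state distribution

def forward_likelihood(obs):
    """Likelihood of an observed symbol sequence under the HMM."""
    alpha = pi * B[:, obs[0]]            # joint prob. of first symbol and state
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
    return float(alpha.sum())
```

Estimating the hidden-state transition probabilities, as discussed in the note, amounts to choosing A (and B) to make such likelihoods large over the observed data.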
Generators of quantum Markov semigroups
NASA Astrophysics Data System (ADS)
Androulakis, George; Ziemke, Matthew
2015-08-01
Quantum Markov Semigroups (QMSs) originally arose in the study of the evolutions of irreversible open quantum systems. Mathematically, they are a generalization of classical Markov semigroups where the underlying function space is replaced by a non-commutative operator algebra. In the case when the QMS is uniformly continuous, theorems due to the works of Lindblad [Commun. Math. Phys. 48, 119-130 (1976)], Stinespring [Proc. Am. Math. Soc. 6, 211-216 (1955)], and Kraus [Ann. Phys. 64, 311-335 (1970)] imply that the generator of the semigroup has the form L(A) = Σ_{n=1}^∞ V_n* A V_n + G A + A G*, where V_n and G are elements of the underlying operator algebra. In the present paper, we investigate the form of the generators of QMSs which are not necessarily uniformly continuous and act on the bounded operators of a Hilbert space. We prove that the generators of such semigroups have forms that reflect the results of Lindblad and Stinespring. We also make some progress towards forms reflecting Kraus' result. Finally, we look at several examples to clarify our findings and verify that some of the unbounded operators we are using have dense domains.
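The quoted form is easy to probe numerically for a single bounded jump operator; choosing G = -(1/2) Σ V_n* V_n (an illustrative choice, not forced by the theorem) makes the generator annihilate the identity, as conservativity requires:

```python
import numpy as np

# One "jump" operator V (an arbitrary illustrative 2x2 choice) and
# G = -(1/2) * sum_n Vn* Vn, so that L(I) = 0 (conservativity).
V = [np.array([[0, 1], [0, 0]], dtype=complex)]
G = -0.5 * sum(v.conj().T @ v for v in V)

def L(A):
    """Generator in the quoted form: sum_n Vn* A Vn + G A + A G*."""
    return sum(v.conj().T @ A @ v for v in V) + G @ A + A @ G.conj().T
```

With this V, the identity is annihilated while a generic projection is not, which is the minimal sanity check that L is a nontrivial conservative generator.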
Next-to-leading order gravitational spin-orbit coupling in an effective field theory approach
Levi, Michele
2010-11-15
We use an effective field theory (EFT) approach to calculate the next-to-leading order (NLO) gravitational spin-orbit interaction between two spinning compact objects. The NLO spin-orbit interaction provides the most computationally complex sector of the NLO spin effects, previously derived within the EFT approach. In particular, it requires the inclusion of nonstationary cubic self-gravitational interaction, as well as the implementation of a spin supplementary condition (SSC) at higher orders. The EFT calculation is carried out in terms of the nonrelativistic gravitational field parametrization, making the calculation more efficient with no need to rely on automated computations, and illustrating the coupling hierarchy of the different gravitational field components to the spin and mass sources. Finally, we show explicitly how to relate the EFT derived spin results to the canonical results obtained with the Arnowitt-Deser-Misner (ADM) Hamiltonian formalism. This is done using noncanonical transformations, required due to the implementation of covariant SSC, as well as canonical transformations at the level of the Hamiltonian, with no need to resort to the equations of motion or the Dirac brackets.
A unidirectional approach for d-dimensional finite element methods for higher order on sparse grids
Bungartz, H.J.
1996-12-31
In recent years, sparse grids have turned out to be a very interesting approach for the efficient iterative numerical solution of elliptic boundary value problems. In comparison to standard (full grid) discretization schemes, the number of grid points can be reduced significantly, from O(N^d) to O(N (log2 N)^(d-1)) in the d-dimensional case, whereas the accuracy of the approximation to the finite element solution is only slightly deteriorated: for piecewise d-linear basis functions, e.g., an accuracy of the order O(N^(-2) (log2 N)^(d-1)) with respect to the L2-norm and of the order O(N^(-1)) with respect to the energy norm has been shown. Furthermore, regular sparse grids can be extended in a very simple and natural manner to adaptive ones, which makes the hierarchical sparse grid concept applicable to problems that require adaptive grid refinement, too. An approach is presented for the Laplacian on a unit domain in this paper.
On the entropy of a hidden Markov process
Jacquet, Philippe; Seroussi, Gadiel; Szpankowski, Wojciech
2008-01-01
We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order. PMID:19169438
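A hedged numerical illustration of the product-of-random-matrices view: the scaled forward pass accumulates log2 P(x1…xn), and by the Shannon-McMillan-Breiman theorem -(1/n) log2 P estimates the entropy rate. In the degenerate case ε = 0 the HMP reduces to the underlying Markov chain, whose entropy rate is the binary entropy of the flip probability, which gives a sanity check (the simulation itself is a generic sketch, not the paper's Taylor-expansion method):

```python
import numpy as np

def hmp_entropy_rate_estimate(p, eps, n, seed=0):
    """Estimate the entropy rate (bits/symbol) of a binary first-order Markov
    source (flip probability p) observed through a BSC (crossover eps)."""
    rng = np.random.default_rng(seed)
    s = np.empty(n, dtype=int)
    s[0] = rng.integers(2)
    for t in range(1, n):
        s[t] = s[t - 1] ^ int(rng.random() < p)       # Markov source
    x = s ^ (rng.random(n) < eps).astype(int)         # channel observations
    A = np.array([[1 - p, p], [p, 1 - p]])            # source transitions
    B = np.array([[1 - eps, eps], [eps, 1 - eps]])    # channel emissions
    # scaled forward pass: each scaling factor is one matrix of the product
    alpha = np.array([0.5, 0.5]) * B[:, x[0]]
    logp = np.log2(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in x[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log2(c)
        alpha = alpha / c
    return -logp / n
```

For ε > 0 the same routine gives a Monte Carlo estimate of the top Lyapunov exponent that the abstract identifies with the entropy rate.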
Testing the Markov hypothesis in fluid flows
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Saggini, Frédéric
2016-05-01
Stochastic Markov processes are used very frequently to model, for example, processes in turbulence and subsurface flow and transport. Based on the weak Chapman-Kolmogorov equation and the strong Markov condition, we present methods to test the Markov hypothesis that is at the heart of these models. We demonstrate the capabilities of our methodology by testing the Markov hypothesis for fluid and inertial particles in turbulence, and fluid particles in the heterogeneous subsurface. In the context of subsurface macrodispersion, we find that depending on the heterogeneity level, Markov models work well above a certain scale of interest for media with different log-conductivity correlation structures. Moreover, we find surprising similarities in the velocity dynamics of the different media considered.
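One concrete form of a weak Chapman-Kolmogorov check: for a Markov sequence the empirical two-step transition matrix should match the square of the empirical one-step matrix. The two-state chain below is a stand-in for the turbulence and subsurface data considered in the paper:

```python
import numpy as np

def empirical_tpm(seq, lag, n_states):
    """Empirical lag-step transition matrix from a categorical sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-lag], seq[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2], [0.3, 0.7]])
seq = [0]
for _ in range(100000):                       # simulate a 2-state Markov chain
    seq.append(int(rng.random() < P[seq[-1], 1]))

P1 = empirical_tpm(seq, 1, 2)
P2 = empirical_tpm(seq, 2, 2)
discrepancy = np.abs(P2 - P1 @ P1).max()      # small for a Markov sequence
```

For genuinely non-Markov data (e.g. a process with long-range correlations) the discrepancy does not shrink with sample size, which is the signature the test looks for.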
Approach to the glass transition studied by higher order correlation functions
NASA Astrophysics Data System (ADS)
Lacevic, N.; Glotzer, S. C.
2003-08-01
We present a theoretical framework based on a higher order density correlation function, analogous to that used to investigate spin glasses, to describe dynamical heterogeneities in simulated glass-forming liquids. These higher order correlation functions are a four-point, time-dependent density correlation function g4(r,t) and a corresponding 'structure factor' S4(q,t) which measure the spatial correlations between the local liquid density at two points in space, each at two different times. g4(r,t) and S4(q,t) were extensively studied via molecular dynamics simulations of a binary Lennard-Jones mixture approaching the mode coupling temperature from above in Franz et al (1999 Phil. Mag. B 79 1827), Donati et al (2002 J. Non-Cryst. Solids 307 215), Glotzer et al (2000 J. Chem. Phys. 112 509), Lacevic et al (2002 Phys. Rev. E 66 030101), Lacevic et al (2003 J. Chem. Phys. submitted) and Lacevic (2003 Dissertation The Johns Hopkins University). Here, we examine the contribution to g4(r,t), S4(q,t) and the corresponding dynamical correlation length, as well as the corresponding order parameter Q(t) and generalized susceptibility chi4(t), from localized particles. We show that the dynamical correlation length xi4SS(t) of localized particles has a maximum as a function of time t, and the value of the maximum of xi4SS(t) increases steadily in the temperature range approaching the mode coupling temperature from above.
Hidden Markov models for threat prediction fusion
NASA Astrophysics Data System (ADS)
Ross, Kenneth N.; Chaney, Ronald D.
2000-04-01
This work addresses the often neglected, but important problem of Level 3 fusion or threat refinement. This paper describes algorithms for threat prediction and test results from a prototype threat prediction fusion engine. The threat prediction fusion engine selectively models important aspects of the battlespace state using probability-based methods and information obtained from lower level fusion engines. Our approach uses hidden Markov models of a hierarchical threat state to find the most likely Course of Action (CoA) for the opposing forces. Decision trees use features derived from the CoA probabilities and other information to estimate the level of threat presented by the opposing forces. This approach provides the user with several measures associated with the level of threat, including: the probability that the enemy is following a particular CoA, the potential threat presented by the opposing forces, and the likely time of the threat. The hierarchical approach used for modeling helps us efficiently represent the battlespace with a structure that permits scaling the models to larger scenarios without adding prohibitive computational costs or sacrificing model fidelity.
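Finding the most likely hidden course of action given HMM parameters is the Viterbi decoding problem; a generic log-domain sketch (the HMM parameters are supplied by the caller, and nothing here is specific to the paper's battlespace models):

```python
import numpy as np

def viterbi(obs, pi0, A, B):
    """Most probable hidden-state path for an observation sequence obs."""
    n, k = len(obs), len(pi0)
    delta = np.log(pi0) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    back = np.zeros((n, k), dtype=int)           # backpointers
    for t in range(1, n):
        scores = delta[:, None] + np.log(A)      # scores[i, j]: come from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):                # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the fusion engine's terms, the decoded path is the most likely sequence of CoA states, and its probability feeds the downstream threat measures.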
Podlubny, Igor; Skovranek, Tomas; Vinagre Jara, Blas M; Petras, Ivo; Verbitsky, Viktor; Chen, YangQuan
2013-05-13
In this paper, we further develop Podlubny's matrix approach to discretization of integrals and derivatives of non-integer order. Numerical integration and differentiation on non-equidistant grids is introduced and illustrated by several examples of numerical solution of differential equations with fractional derivatives of constant orders and with distributed-order derivatives. In this paper, for the first time, we present a variable-step-length approach that we call 'the method of large steps', because it is applied in combination with the matrix approach for each 'large step'. This new method is also illustrated by an easy-to-follow example. The presented approach allows fractional-order and distributed-order differentiation and integration of non-uniformly sampled signals, and opens the way to development of variable- and adaptive-step-length techniques for fractional- and distributed-order differential equations. PMID:23547230
Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms
NASA Astrophysics Data System (ADS)
Latuszynski, Krzysztof
2009-07-01
In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with E_π f² < ∞. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax the assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators are obtained for a general state space setting (not necessarily compact) and a not necessarily bounded target function f. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.
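One standard route to asymptotic fixed-width intervals is a consistent estimator of the asymptotic variance; the batch-means sketch below is a common choice for illustration, not the thesis's regenerative estimator:

```python
import numpy as np

def batch_means_halfwidth(samples, n_batches=30, z=1.96):
    """Asymptotic confidence half-width for an MCMC sample mean via batch means."""
    samples = np.asarray(samples, float)
    n = len(samples) // n_batches * n_batches    # drop the ragged tail
    batch_size = n // n_batches
    batches = samples[:n].reshape(n_batches, batch_size).mean(axis=1)
    sigma2 = batch_size * batches.var(ddof=1)    # asymptotic variance estimate
    return z * np.sqrt(sigma2 / n)
```

A fixed-width procedure keeps sampling until this half-width drops below a prescribed tolerance; the thesis's contribution is justifying such stopping rules under weaker conditions.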
Data-driven Markov models and their application in the evaluation of adverse events in radiotherapy.
Abler, Daniel; Kanellopoulos, Vassiliki; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken
2013-07-01
Decision-making processes in medicine rely increasingly on modelling and simulation techniques; they are especially useful when combining evidence from multiple sources. Markov models are frequently used to synthesize the available evidence for such simulation studies, by describing disease and treatment progress, as well as associated factors such as the treatment's effects on a patient's life and the costs to society. When the same decision problem is investigated by multiple stakeholders, differing modelling assumptions are often applied, making synthesis and interpretation of the results difficult. This paper proposes a standardized approach towards the creation of Markov models. It introduces the notion of 'general Markov models', providing a common definition of the Markov models that underlie many similar decision problems, and develops a language for their specification. We demonstrate the application of this language by developing a general Markov model for adverse event analysis in radiotherapy and argue that the proposed method can automate the creation of Markov models from existing data. The approach has the potential to support the radiotherapy community in conducting systematic analyses involving predictive modelling of existing and upcoming radiotherapy data. We expect it to facilitate the application of modelling techniques in medical decision problems beyond the field of radiotherapy, and to improve the comparability of their results. PMID:23824126
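The cohort-level mechanics of such a Markov model are compact; the states, probabilities, and cycle count below are illustrative only, not derived from radiotherapy data:

```python
import numpy as np

# States: 0 = well, 1 = adverse event, 2 = dead (absorbing).
P = np.array([
    [0.90, 0.08, 0.02],   # from "well": stay, adverse event, death
    [0.50, 0.40, 0.10],   # from "adverse event": recover, persist, death
    [0.00, 0.00, 1.00],   # "dead" is absorbing
])
cohort = np.array([1.0, 0.0, 0.0])   # the whole cohort starts in "well"
for _cycle in range(10):             # run ten model cycles
    cohort = cohort @ P              # redistribute the cohort each cycle
```

A "general Markov model" in the paper's sense standardizes exactly these ingredients (state set, cycle length, transition structure) so that different stakeholders' models remain comparable.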
Approach for LIDAR signals with multiple returns.
Yin, Wenye; He, Weiji; Gu, Guohua; Chen, Qian
2014-10-20
Time-correlated single photon counting (TCSPC) and burst illumination laser (BIL) data can be used for depth reconstruction of a target surface; the problem is how to analyze the response for the reconstruction. We propose a fast simulated tempering Markov chain Monte Carlo (STMCMC) approach for LIDAR signals with multiple returns, in order to obtain a complete characterization of a 3D surface measured by the laser range system. STMCMC explores the solution space using preset distributions instead of the prior distributions. An added active-intervention tempering step makes the Markov chain mix better through a temporary expansion of the solution space; this added step keeps the operation under control while retaining the Markov property. Theoretical analysis and demonstrations on practical data show that the operation is flexible and that the parameters can be estimated to a high degree of accuracy. PMID:25402782
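The core tempering idea can be shown with a toy implementation. This is a generic simulated tempering sketch, not the authors' STMCMC: for simplicity it assumes equal pseudo-priors across temperature levels and simply rejects temperature moves that leave the ladder. It samples a bimodal target that a fixed-temperature random walk would struggle to cross:

```python
import math
import random

def simulated_tempering(log_target, betas, steps=20_000, seed=1):
    """Simulated tempering: the chain state is (x, k), where beta = betas[k]
    tempers the target. Hot levels (small beta) flatten the landscape so the
    walker can cross between modes; only visits at beta = 1 are recorded.
    Equal pseudo-priors are assumed, a simplification of full tempering."""
    rng = random.Random(seed)
    x, k = 0.0, 0
    samples = []
    for _ in range(steps):
        # random-walk Metropolis move within the current temperature
        y = x + rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < betas[k] * (log_target(y) - log_target(x)):
            x = y
        # propose a neighbouring temperature level; out-of-range moves are rejected
        j = k + rng.choice([-1, 1])
        if 0 <= j < len(betas) and math.log(rng.random()) < (betas[j] - betas[k]) * log_target(x):
            k = j
        if betas[k] == 1.0:
            samples.append(x)
    return samples

def log_t(x):
    # equal mixture of N(-4, 1) and N(4, 1), up to a constant (log-sum-exp for stability)
    a, b = -0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

samples = simulated_tempering(log_t, betas=[1.0, 0.5, 0.1])
```

At the hottest level the barrier between the two modes is reduced by a factor of ten, so the chain crosses freely there and returns to beta = 1 on either side.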
NASA Technical Reports Server (NTRS)
Nese, Jon M.; Dutton, John A.
1993-01-01
The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
Photoassociation of a cold-atom-molecule pair. II. Second-order perturbation approach
Lepers, M.; Vexiau, R.; Bouloufa, N.; Dulieu, O.; Kokoouline, V.
2011-04-15
The electrostatic interaction between an excited atom and a diatomic ground-state molecule in an arbitrary rovibrational level at large mutual separations is investigated with a general second-order perturbation theory, in the perspective of modeling the photoassociation between cold atoms and molecules. We find that the combination of quadrupole-quadrupole and van der Waals interactions competes with the rotational energy of the dimer, limiting the range of validity of the perturbative approach to distances larger than 100 Bohr radii. Numerical results are given for the long-range interaction between Cs and Cs_2, showing that the photoassociation is probably efficient for any Cs_2 rotational energy.
Multiple testing for neuroimaging via hidden Markov random field.
Shu, Hai; Nan, Bin; Koeppe, Robert
2015-09-01
Traditional voxel-level multiple testing procedures in neuroimaging, mostly p-value based, often ignore the spatial correlations among neighboring voxels and thus suffer from substantial loss of power. We extend the local-significance-index based procedure originally developed for the hidden Markov chain models, which aims to minimize the false nondiscovery rate subject to a constraint on the false discovery rate, to three-dimensional neuroimaging data using a hidden Markov random field model. A generalized expectation-maximization algorithm for maximizing the penalized likelihood is proposed for estimating the model parameters. Extensive simulations show that the proposed approach is more powerful than conventional false discovery rate procedures. We apply the method to the comparison between mild cognitive impairment, a disease status with increased risk of developing Alzheimer's or another dementia, and normal controls in the FDG-PET imaging study of the Alzheimer's Disease Neuroimaging Initiative. PMID:26012881
Markov reliability models for digital flight control systems
NASA Technical Reports Server (NTRS)
Mcgough, John; Reibman, Andrew; Trivedi, Kishor
1989-01-01
The reliability of digital flight control systems can often be accurately predicted using Markov chain models. The cost of numerical solution depends on a model's size and stiffness. Acyclic Markov models, a useful special case, are particularly amenable to efficient numerical solution. Even in the general case, instantaneous coverage approximation allows the reduction of some cyclic models to more readily solvable acyclic models. After considering the solution of single-phase models, the discussion is extended to phased-mission models. Phased-mission reliability models are classified based on the state restoration behavior that occurs between mission phases. As an economical approach for the solution of such models, the mean failure rate solution method is introduced. A numerical example is used to show the influence of fault-model parameters and interphase behavior on system unreliability.
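The numerical solution the abstract refers to can be illustrated on a small acyclic model. The sketch below (an illustration, not the paper's method; the rates are invented) uses uniformization to compute transient state probabilities for a three-state chain: two components up, one up, system failed:

```python
import numpy as np

def transient_probs(Q, p0, t, n_terms=200):
    """Transient state probabilities p(t) = p0 exp(Qt) of a continuous-time
    Markov chain, computed by uniformization: a Poisson-weighted power
    series in the uniformized transition matrix P = I + Q/lam."""
    Q = np.asarray(Q, dtype=float)
    lam = max(-Q.diagonal()) * 1.05          # uniformization rate > max exit rate
    P = np.eye(len(Q)) + Q / lam
    v = np.asarray(p0, dtype=float)
    w = np.exp(-lam * t)                     # Poisson(lam*t) weight for k = 0
    out = w * v
    for k in range(1, n_terms):
        v = v @ P
        w *= lam * t / k
        out += w * v
    return out

# hypothetical acyclic 3-state model: both-up -> one-up -> failed,
# each active component failing at 1e-3 per hour
Q = np.array([[-2e-3, 2e-3, 0.0],
              [0.0, -1e-3, 1e-3],
              [0.0, 0.0, 0.0]])
p = transient_probs(Q, [1.0, 0.0, 0.0], t=100.0)
unreliability = p[2]                         # probability of system failure by t
```

For an acyclic chain like this one the result can be cross-checked against the closed-form competing-exponentials solution, which is part of why such models are, as the abstract notes, particularly amenable to efficient numerical solution.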
General Approach to First-Order Error Prediction in Rigid Point Registration
Fitzpatrick, J. Michael
2015-01-01
A general approach to the first-order analysis of error in rigid point registration is presented that accommodates fiducial localization error (FLE) that may be inhomogeneous (varying from point to point) and anisotropic (varying with direction) and also accommodates arbitrary weighting that may also be inhomogeneous and anisotropic. Covariances are derived for target registration error (TRE) and for weighted fiducial registration error (FRE) in terms of covariances of FLE, culminating in a simple implementation that encompasses all combinations of weightings and anisotropy. Furthermore, it is shown that for ideal weighting, in which the weighting matrix for each fiducial equals the inverse of the square root of the cross covariance of its two-space FLE, fluctuations of FRE and TRE are mutually independent. These results are validated by comparison with previously published expressions and by simulation. Furthermore, simulations for randomly generated fiducial positions and FLEs are presented that show that correlation is negligible (correlation coefficient < 0.1) in the exact case for both ideal and uniform weighting (i.e., no weighting), the latter of which is employed in commercial surgical guidance systems. From these results we conclude that for these weighting schemes, while valid expressions exist relating the covariance of FRE to the covariance of TRE, there are no measures of the goodness of fit of the fiducials for a given registration that give, to first order, any information about the fluctuation of TRE from its expected value, and none that give useful information in the exact case. Therefore, as estimators of registration accuracy, such measures should be approached with extreme caution both by the purveyors of guidance systems and by the practitioners who use them. PMID:21075718
Investigation of Commuting Hamiltonian in Quantum Markov Network
NASA Astrophysics Data System (ADS)
Jouneghani, Farzad Ghafari; Babazadeh, Mohammad; Bayramzadeh, Rogayeh; Movla, Hossein
2014-08-01
Graphical models have various applications in science and engineering, including physics, bioinformatics and telecommunications. Using graphical models requires complex computations to evaluate marginal functions, for which powerful methods exist, such as the mean-field approximation and the belief propagation algorithm. Quantum graphical models have recently been developed in the context of quantum information and computation and quantum statistical physics, made possible by generalizing classical probability theory to quantum theory. The main goal of this paper is to present a preliminary generalization of the Markov network, a type of graphical model, to the quantum case and to apply it in quantum statistical physics. We investigate the Markov network and the role of commuting Hamiltonian terms in conditional independence, using simple examples from quantum statistical physics.
NASA Astrophysics Data System (ADS)
Kroll, Herbert; Schlenz, Hartmut; Phillips, Michael W.
1994-12-01
The excess Gibbs free energy due to non-convergent ordering is described by a Landau expansion in which configurational and non-configurational entropy contributions are separated: G^L = -h Q_t + (1/2) a* (T - T_c*) Q_t^2 + (1/n) e_n Q_t^n - T S_conf^ord. Neglecting higher-order terms in Q_t, this expansion is formally equivalent to the reciprocal solution model for the distribution of Fe2+ and Mg over the non-equivalent M1 and M2 sites of orthopyroxenes: G^ord = -(1/2) [ΔG_exch^0 - (L^G_M1 - L^G_M2) X] Q_t + (1/4) [ΔG_rec^0 - (L^G_M2 - L^G_M1)] Q_t^2 - T S_conf^ord. The Q_t term describes a temperature- and composition-dependent thermodynamic field that prevents the crystal from attaining full disorder at a finite temperature. The X term models the dependence of the field on composition; it causes the isotherms in a Roozeboom diagram, X_Fe^M2 vs. X_Fe^M1, to be asymmetric. The Q_t^2 term incorporates nearest-neighbour interactions. Higher-order interactions are accounted for by the Q_t^n term, which is not routinely foreseen in the reciprocal solution model. The critical temperature T_c* is interpreted as a ratio of enthalpy and entropy contributions to the free energy ΔG_rec^0 of a reciprocal reaction: T_c* = [ΔH_rec^0 - (L^H_M1 + L^H_M2)] / [ΔS_rec^0 - (L^S_M1 + L^S_M2)]. The comparison of Landau and classical approaches is extended to convergent ordering models, which are shown to be incorporated in expressions for non-convergent ordering.
Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations to meet customer requests. Many solution approaches have been proposed to minimize traveling distance in the order picking process. In practice, however, customer orders have to be completed by certain due dates to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel four-phase solution approach for minimizing tardiness. First, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. The order picking phase then uses a Genetic Algorithm integrated with the Traveling Salesman Problem to identify the most suitable travel path. Finally, the Genetic Algorithm is applied to sequence the constructed batches so as to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823
NASA Astrophysics Data System (ADS)
Kuwatani, Tatsu; Nagata, Kenji; Okada, Masato; Toriumi, Mitsuhiro
2012-03-01
The chemical zoning profile in metamorphic minerals is often used to deduce the pressure-temperature (P-T) history of rock. However, it remains difficult to restore detailed paths from zoned minerals because thermobarometric evaluation of metamorphic conditions involves several uncertainties, including measurement errors and geological noise. We propose a new stochastic framework for estimating precise P-T paths from a chemical zoning structure using the Markov random field (MRF) model, a type of Bayesian stochastic method often applied to image analysis. The continuity of pressure and temperature during mineral growth is incorporated through Gaussian Markov chains as prior probabilities in order to apply the MRF model to P-T path inversion. The most probable P-T path can be obtained by maximizing the posterior probability of the sequential set of P and T given the observed compositions of zoned minerals. Synthetic P-T inversion tests were conducted on zoned Mg-Fe-Ca garnet in the divariant KNCFMASH system in order to investigate the effectiveness and validity of the proposed model. In the present study, the steepest descent method was implemented to maximize the posterior probability using the Markov chain Monte Carlo algorithm. The proposed method successfully reproduced the detailed shape of the synthetic P-T path, appropriately eliminating statistical compositional noise without operator subjectivity or prior knowledge. It was also used to simultaneously evaluate the uncertainty of pressure, temperature, and mineral compositions for all measurement points. With its Bayesian approach and flexible formalism, the MRF method has the potential to deal with several geological uncertainties that cause cumbersome systematic errors, and thus offers a potentially powerful tool for various inverse problems in petrology.
Entropy and long-range memory in random symbolic additive Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
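The conditional entropy of a finite symbolic sequence can be estimated directly from block statistics. This minimal estimator (an illustration of the quantity being estimated, not the authors' pair-correlation-based method) computes h_k = H_{k+1} - H_k from empirical block frequencies:

```python
from collections import Counter
from math import log2
import random

def conditional_entropy(seq, k=1):
    """Estimate the conditional entropy (in bits) of the next symbol given
    the previous k symbols, via the block-entropy difference
    h_k = H_{k+1} - H_k computed from empirical block frequencies."""
    def block_entropy(m):
        counts = Counter(tuple(seq[i:i + m]) for i in range(len(seq) - m + 1))
        n = sum(counts.values())
        return -sum(c / n * log2(c / n) for c in counts.values())
    return block_entropy(k + 1) - block_entropy(k)

# i.i.d. fair coin flips: the true conditional entropy is 1 bit per symbol
rng = random.Random(0)
seq = [rng.randrange(2) for _ in range(100_000)]
h = conditional_entropy(seq, k=2)
```

A fully predictable sequence such as 0101… gives h ≈ 0 for k ≥ 1, while memoryless fair coin flips give h ≈ 1 bit; long-range memory shows up as h_k continuing to decrease as k grows. For large k this plug-in estimator needs exponentially many samples, which is precisely the finite-length problem the paper's correlation-based approach addresses.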
Entropy and long-range memory in random symbolic additive Markov chains.
Melnik, S S; Usatenko, O V
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory. PMID:27415245
NASA Astrophysics Data System (ADS)
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
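The dynamic-programming backup underlying this work is easiest to see in the finite-state case. The sketch below is the classic discrete value iteration; the paper's actual contribution, piecewise constant and piecewise linear representations over dynamically partitioned continuous state spaces, is not reproduced here, and the toy problem is invented:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Classic value iteration for a finite MDP: iterate the Bellman backup
    V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ] to convergence.
    P[a] is the |S|x|S| transition matrix of action a; R[a] its reward vector."""
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# two-state, two-action toy problem (numbers invented for illustration)
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.5, 0.5], [0.0, 1.0]])]   # action 1
R = [np.array([1.0, 0.0]),                 # action 0 rewards per state
     np.array([0.0, 2.0])]                 # action 1 rewards per state
V, policy = value_iteration(P, R)
```

The continuous-state version performs this same backup, but over regions in which the value function is constant (or linear), so the cost scales with the number of regions rather than with a fixed discretization.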
Markov decision processes in natural resources management: Observability and uncertainty
Williams, B.K.
2009-01-01
The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
Markov decision processes in natural resources management: observability and uncertainty
Williams, Byron K.
2015-01-01
The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
Zhou, De; Lin, Zhulu; Liu, Liming
2012-11-15
Land salinization and desalinization are complex processes affected by both biophysical and human-induced driving factors. Conventional approaches of land salinization assessment and simulation are either too time consuming or focus only on biophysical factors. The cellular automaton (CA)-Markov model, when coupled with spatial pattern analysis, is well suited for regional assessments and simulations of salt-affected landscapes since both biophysical and socioeconomic data can be efficiently incorporated into a geographic information system framework. Our hypothesis set forth that the CA-Markov model can serve as an alternative tool for regional assessment and simulation of land salinization or desalinization. Our results suggest that the CA-Markov model, when incorporating biophysical and human-induced factors, performs better than the model which did not account for these factors when simulating the salt-affected landscape of the Yinchuan Plain (China) in 2009. In general, the CA-Markov model is best suited for short-term simulations and the performance of the CA-Markov model is largely determined by the availability of high-quality, high-resolution socioeconomic data. The coupling of the CA-Markov model with spatial pattern analysis provides an improved understanding of spatial and temporal variations of salt-affected landscape changes and an option to test different soil management scenarios for salinity management. PMID:23085467
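A CA-Markov update couples a per-cell Markov transition matrix with a neighbourhood rule. The following toy sketch shows the basic mechanics only; the class labels, transition matrix, and neighbourhood weighting are invented, not taken from the Yinchuan Plain study:

```python
import numpy as np

def ca_markov_step(grid, T, rng):
    """One CA-Markov update: each cell draws its next class from the Markov
    transition row of its current class, with probabilities biased toward
    classes present in its 3x3 neighbourhood (the cellular-automaton part)."""
    n_cls = T.shape[0]
    out = np.empty_like(grid)
    H, W = grid.shape
    for i in range(H):
        for j in range(W):
            nb = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            weight = np.bincount(nb.ravel(), minlength=n_cls) + 1.0
            p = T[grid[i, j]] * weight
            out[i, j] = rng.choice(n_cls, p=p / p.sum())
    return out

rng = np.random.default_rng(0)
T = np.array([[0.8, 0.2],      # class 0: non-saline land (hypothetical rates)
              [0.3, 0.7]])     # class 1: salt-affected land
grid = rng.integers(0, 2, size=(20, 20))
for _ in range(5):
    grid = ca_markov_step(grid, T, rng)
```

In a real application the transition matrix is estimated from observed land-cover maps at two dates, and the neighbourhood bias is replaced by suitability layers derived from the biophysical and socioeconomic drivers the abstract describes.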
Nahrwold, Sophie; Berger, Robert
2009-06-01
In this paper, a quasirelativistic two-component zeroth order regular approximation (ZORA) density functional theory (DFT) approach to the calculation of parity-violating (PV) resonance frequency differences between the nuclear magnetic resonance (NMR) spectra of enantiomers is presented, and the systematics of PV NMR shielding constants in C_2-symmetric dihydrogen dichalcogenides (H_2X_2 with X = ^17O, ^33S, ^77Se, ^125Te, ^209Po) are investigated. The typical sin(2α)-like dependence of the PV NMR frequency splittings on the dihedral angle α is observed for the entire series. As for the scaling behavior of the effect with the nuclear charge Z of X, the previously reported Z^(2.5±0.5) scaling in the nonrelativistic limit is reproduced, and a scaling of approximately Z^3 for the paramagnetic and Z^5 for the spin-orbit coupling contribution to the frequency splitting is observed in the relativistic framework. The paramagnetic and spin-orbit coupling contributions are typically of opposite sign for the molecular structures studied herein, and the maximum scaling of the total ZORA frequency splitting (i.e., the sum of the two contributions) is Z^3.9 for H_2Po_2. Thus, an earlier claim of a spin-orbit coupling contribution scaling with up to Z^7 for H_2Po_2, and the erratic dihedral-angle dependence obtained for this compound within a four-component Dirac-Hartree-Fock-Coulomb study, are not confirmed at the DFT level. The maximum NMR frequency splitting reported here is of the order of 10 mHz for certain clamped conformations of H_2Po_2 inside a static magnetic field with magnetic flux density of 11.7 T. Frequency splittings of this size have been estimated to be detectable with present-day NMR spectrometers. Thus, an NMR route toward molecular PV appears promising once suitable compounds have been identified. PMID:19508050
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model, formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Markov Boundary Discovery with Ridge Regularized Linear Models
Visweswaran, Shyam
2016-01-01
Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915
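The general recipe, fitting a ridge model and keeping variables whose coefficients test as significantly nonzero, can be sketched as follows. The bootstrap percentile test here is a simple stand-in for the paper's hypothesis tests (not its procedure), and the data are synthetic:

```python
import numpy as np

def ridge_select(X, y, alpha=1.0, n_boot=200, level=0.05, seed=0):
    """Select variables whose ridge coefficients are significantly nonzero,
    judged by bootstrap percentile intervals: keep variable j when its
    interval excludes zero."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    def fit(Xb, yb):
        # ridge solution: (X'X + alpha I)^{-1} X'y
        return np.linalg.solve(Xb.T @ Xb + alpha * np.eye(d), Xb.T @ yb)
    boots = np.empty((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # bootstrap resample of the rows
        boots[b] = fit(X[idx], y[idx])
    lo = np.percentile(boots, 100 * level / 2, axis=0)
    hi = np.percentile(boots, 100 * (1 - level / 2), axis=0)
    return [j for j in range(d) if lo[j] > 0 or hi[j] < 0]

# synthetic data: only variables 0 and 3 influence the response
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=500)
selected = ridge_select(X, y)
```

The paper's point is subtler than this sketch: it bounds how far such selected sets can be from a subset of the true Markov boundary, rather than assuming the selection is exact.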
Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment
Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.; Grama, Ananth
2015-11-01
Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function with a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms state-of-the-art alignment methods in terms of both biological and topological quality of the alignments.
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
ERIC Educational Resources Information Center
Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
2016-01-01
We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…
Nese, J.M.; Dutton, J.A.
1993-02-01
A dynamical systems approach is used to quantify the predictability of weather and climatic states of a low-order, moist general circulation model. The effects on predictability of incorporating a simple oceanic circulation are evaluated. The predictability and structure of the model attractors are compared using Lyapunov exponents, local divergence rates, and the correlation and Lyapunov dimensions. Lyapunov exponents quantify global predictability by measuring the mean rate of growth of small perturbations on an attractor, while local divergence rates quantify temporal variations of this error growth rate and thus measure local, or instantaneous, predictability. Activating an oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10% while decreasing the variance of the largest local divergence rate by 20%. The correlation dimension of the attractor decreases slightly when an oceanic circulation is activated, while the Lyapunov dimension decreases more significantly because it depends directly on the Lyapunov exponents. The average predictability of annually averaged states is improved by 25% when an oceanic circulation develops, and the variance of the largest local divergence rate also decreases by 25%. One-third of the yearly averaged states have local error doubling times larger than 2 years. The dimensions of the attractors of the yearly averaged states are not significantly different than the dimensions of the attractors of the original model. The most important contribution of this article is the demonstration that the local divergence rates provide a concise quantification of the variations of predictability on attractors and an efficient basis for comparing their local predictability characteristics. Local divergence rates might be computed to provide a real-time estimate of local predictability to accompany an operational forecast.
NASA Astrophysics Data System (ADS)
Yang, Y.; Min, Y.; Jun, Y.
2012-12-01
structural components (e.g., Al-O-Si and Si-O-Si linkages) may serve as better base units than minerals (i.e., the pH dependence of Al-O-Si breakdown may be more useful than the pH dependence of Si release rate from any specific mineral). Second, Al/Si ordering is expected to show effects on the structure of interfacial layers formed during water-rock interactions, because from the mass-balance perspective, the interfacial layer is inherently related to dissolution incongruency due to the elemental reactivity differences. The fact that the incongruency is quantifiable using crystallographic parameters may suggest that the formation of the interfacial layer should at least be partially attributable to intrinsic non-stoichiometric dissolution (instead of secondary phase formation). Our approach provides a new means to connect atomic scale structural properties of a mineral to its macroscale dissolution behaviors.
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
NASA Astrophysics Data System (ADS)
Grinfeld, Michael; Knight, Philip A.; Wade, Andrew R.
2012-01-01
We study a class of Markovian systems of N elements taking values in [0,1] that evolve in discrete time t via randomized replacement rules based on the ranks of the elements. These rank-driven processes are inspired by variants of the Bak-Sneppen model of evolution, in which the system represents an evolutionary `fitness landscape' and which is famous as a simple model displaying self-organized criticality. Our main results are concerned with long-time large-N asymptotics for the general model in which, at each time step, K randomly chosen elements are discarded and replaced by independent U[0,1] variables, where the ranks of the elements to be replaced are chosen, independently at each time step, according to a distribution κ_N on {1,2,…,N}^K. Our main results are that, under appropriate conditions on κ_N, the system exhibits threshold behavior at s*∈[0,1], where s* is a function of κ_N, and the marginal distribution of a randomly selected element converges to U[s*,1] as t→∞ and N→∞. Of this class of models, results in the literature have previously been given for special cases only, namely the `mean-field' or `random neighbor' Bak-Sneppen model. Our proofs avoid the heuristic arguments of some of the previous work and use Foster-Lyapunov ideas. Our results extend existing results and establish their natural, more general context. We derive some more specialized results for the particular case where K=2. One of our technical tools is a result on convergence of stationary distributions for families of uniformly ergodic Markov chains on increasing state-spaces, which may be of independent interest.
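The K=2 `random neighbor' special case mentioned above is easy to simulate and already displays the threshold behavior: replacing the minimum element and one uniformly chosen element each step is expected to drive the marginal distribution toward U[1/2,1]. A minimal sketch (the element count, step count, and seed are illustrative choices, not taken from the paper):

```python
import random

def simulate_mean_field_bs(n=500, steps=20000, seed=1):
    """Rank-driven process, K = 2 'random neighbor' Bak-Sneppen rule:
    each step the minimum element and one uniformly chosen element are
    discarded and replaced by fresh U[0,1] draws."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=x.__getitem__)  # rank-1 (smallest) element
        j = rng.randrange(n)                  # uniformly chosen rank
        x[i] = rng.random()
        x[j] = rng.random()
    return x
```

After a long run almost all elements should lie above the threshold s* = 1/2, so the empirical mean sits near 3/4 rather than the 1/2 of an untouched uniform sample.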
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and nonpower asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
Hidden Markov Modeling for Weigh-In-Motion Estimation
Abercrombie, Robert K; Ferragut, Erik M; Boone, Shane
2012-01-01
This paper describes a hidden Markov model for reducing the weight measurement error that arises from complex vehicle oscillations in a system of discrete masses. At present, oscillations are reduced by requiring a smooth, flat, level approach and a constant, slow speed in a straight line. The model instead uses this inherent variability to help determine the true total weight and individual axle weights of a vehicle. The weight distribution dynamics of a generic moving vehicle were simulated. The model estimation converged to within 1% of the true mass for simulated data. While the computational demands of this method are much greater than those of simple averages, it took only seconds to run on a desktop computer.
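The baseline against which such a model competes is a plain average of the oscillating load signal. The sketch below (all signal parameters are hypothetical, not taken from the paper) shows why averaging over whole oscillation periods already recovers the static weight on clean simulated data; the hidden Markov model targets the harder cases this baseline misses.

```python
import math
import random

def simulated_axle_signal(w_true=10000.0, amp=800.0, freq=3.0,
                          fs=100.0, duration=2.0, noise=50.0, seed=0):
    """Synthetic weigh-in-motion record: static weight plus a vehicle
    oscillation and Gaussian sensor noise (parameters are illustrative)."""
    rng = random.Random(seed)
    n = int(fs * duration)
    return [w_true + amp * math.sin(2 * math.pi * freq * t / fs)
            + rng.gauss(0.0, noise) for t in range(n)]

def estimate_weight(signal):
    """Plain average over the record; when the window spans whole
    oscillation periods the sinusoid cancels and the mean approaches
    the true static weight."""
    return sum(signal) / len(signal)
```

With 2 s of data at 3 Hz the window contains exactly six oscillation periods, so the average lands within 1% of the true weight; real signals with partial periods and transients are where a stochastic model earns its keep.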
AIRWAY LABELING USING A HIDDEN MARKOV TREE MODEL
Ross, James C.; Díaz, Alejandro A.; Okajima, Yuka; Wassermann, Demian; Washko, George R.; Dy, Jennifer; San José Estépar, Raúl
2014-01-01
We present a novel airway labeling algorithm based on a Hidden Markov Tree Model (HMTM). We obtain a collection of discrete points along the segmented airway tree using particles sampling [1] and establish topology using Kruskal’s minimum spanning tree algorithm. Following this, our HMTM algorithm probabilistically assigns labels to each point. While alternative methods label airway branches out to the segmental level, we describe a general method and demonstrate its performance out to the subsubsegmental level (two generations further than previously published approaches). We present results on a collection of 25 computed tomography (CT) datasets taken from a Chronic Obstructive Pulmonary Disease (COPD) study. PMID:25436039
Improved Hidden-Markov-Model Method Of Detecting Faults
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J.
1994-01-01
Method of automated, continuous monitoring to detect faults in complicated dynamic system based on hidden-Markov-model (HMM) approach. Simpler than another, recently proposed HMM method, but retains advantages of that method, including low susceptibility to false alarms, no need for mathematical model of dynamics of system under normal or faulty conditions, and ability to detect subtle changes in characteristics of monitored signals. Examples of systems monitored by use of this method include motors, turbines, and pumps critical in their applications; chemical-processing plants; powerplants; and biomedical systems.
On Construction of Quantum Markov Chains on Cayley trees
NASA Astrophysics Data System (ADS)
Accardi, Luigi; Mukhamedov, Farrukh; Souissi, Abdessatar
2016-03-01
The main aim of the present paper is to provide a new construction of quantum Markov chains (QMC) on a Cayley tree of arbitrary order. In this construction, a QMC is defined as a weak limit of finite volume states with boundary conditions, i.e. the QMC depends on the boundary conditions. Note that this construction is reminiscent of statistical mechanics models with competing interactions on trees. For a one-dimensional tree, the construction reduces to the well-known one studied by the first author. Our construction will allow us to investigate the phase transition problem in a quantum setting.
On multitarget pairwise-Markov models
NASA Astrophysics Data System (ADS)
Mahler, Ronald
2015-05-01
Single- and multi-target tracking are both typically based on strong independence assumptions regarding both the target states and sensor measurements. In particular, both are theoretically based on the hidden Markov chain (HMC) model. That is, the target process is a Markov chain that is observed by an independent observation process. Since HMC assumptions are invalid in many practical applications, the pairwise Markov chain (PMC) model has been proposed as a way to weaken those assumptions. In this paper it is shown that the PMC model can be directly generalized to multitarget problems. Since the resulting tracking filters are computationally intractable, the paper investigates generalizations of the cardinalized probability hypothesis density (CPHD) filter to applications with PMC models.
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple-version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple-version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
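The core estimation step described above, recovering transition probabilities of the error-state Markov chain from observed test runs, is the standard maximum-likelihood count ratio. A minimal sketch (the state names are illustrative, not from the paper):

```python
from collections import Counter

def estimate_transition_probs(state_sequence):
    """Maximum-likelihood estimate of Markov chain transition
    probabilities from one observed state sequence:
    P(j | i) = n_ij / n_i, with n_ij the count of i -> j transitions."""
    pair_counts = Counter(zip(state_sequence, state_sequence[1:]))
    from_counts = Counter(state_sequence[:-1])
    return {(i, j): c / from_counts[i] for (i, j), c in pair_counts.items()}
```

Confidence intervals for each estimated probability, as the experiment design requires, can then be obtained from the binomial counts n_ij out of n_i.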
Closed-form solution for loop transfer recovery via reduced-order observers
NASA Technical Reports Server (NTRS)
Bacon, Barton J.
1989-01-01
A well-known property of the reduced-order observer is exploited to obtain the controller solution of the loop transfer recovery problem. In that problem, the controller is sought that generates some desired loop shape at the plant's input or output channels. Past approaches to this problem have typically yielded controllers generating loop shapes that only converge pointwise to the desired loop shape. In the proposed approach, however, the solution (at the input) is obtained directly when the plant's first Markov parameter is full rank. In the more general case when the plant's first Markov parameter is not full rank, the solution is obtained in an analogous manner by appending a special set of input and output signals to the original set. A dual form of the reduced-order observer is shown to yield the LTR solution at the output channel.
Willems, Charlotte; Saija, Jefta D.; Akyürek, Elkan G.; Martens, Sander
2016-01-01
Background The reduced ability to identify a second target when it is presented in close temporal succession to a first target is called the attentional blink (AB). Studies have shown large individual differences in AB task performance, where lower task performance has been associated with more reversed order reports of both targets if these were presented in direct succession. To study the suggestion that reversed order reports reflect loss of temporal information, in the current study we investigated whether individuals with a larger AB have a higher tendency to temporally integrate both targets into one visual event, using an AB paradigm containing symbol target stimuli. Methodology/Principal Findings Indeed, we found a positive relation between the tendency to temporally integrate information and individual AB magnitude. In contrast to earlier work, we found no relation between order reversals and individual AB magnitude. The occurrence of temporal integration was negatively related to the number of order reversals, indicating that individuals either integrated or separated and reversed information. Conclusion We conclude that individuals with better AB task performance use a shorter time window to integrate information, and therefore have higher preservation of temporal information. Furthermore, order reversals observed in paradigms with alphanumeric targets indeed seem to at least partially reflect temporal integration of both targets. Given the negative relation between temporal integration and ‘true’ order reversals observed with the current symbolic target set, these two behavioral outcomes seem to be two sides of the same coin. PMID:27228118
Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach
Gin, Douglas L.; Fischer, Walter M.; Gray, David H.; Smith, Ryan C.
1998-01-01
A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material.
An Evolutionary Approach for Joint Blind Multichannel Estimation and Order Detection
NASA Astrophysics Data System (ADS)
Fangjiong, Chen; Kwong, Sam; Gang, Wei
2003-12-01
A joint blind order-detection and parameter-estimation algorithm for a single-input multiple-output (SIMO) channel is presented. Based on the subspace decomposition of the channel output, an objective function including channel order and channel parameters is proposed. The problem is resolved by using a specifically designed genetic algorithm (GA). In the proposed GA, we encode both the channel order and parameters into a single chromosome, so they can be estimated simultaneously. Novel GA operators and convergence criteria are used to guarantee correct and high convergence speed. Simulation results show that the proposed GA achieves satisfactory convergence speed and performance.
On Markov Earth Mover’s Distance
Wei, Jie
2015-01-01
In statistics, pattern recognition and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions or sequences. In statistics this is referred to as the goodness-of-fit problem. Two leading goodness-of-fit methods are the chi-square and Kolmogorov–Smirnov distances. The strictly localized nature of these two measures hinders their practical utility for patterns and signals where the sample size is usually small. In view of this problem, Rubner and colleagues developed the earth mover’s distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which has found a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD into a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, as in EMD-hat, the earth is only moved locally as dictated by the degree d of the neighborhood system. Nodes that cannot be matched locally are handled by dummy source and destination nodes. Using this localized network structure, a greedy algorithm that is linear in the degree d and the number of nodes is then developed to evaluate the MEMD. Empirical studies of MEMD on deterministic and statistical synthetic sequences and on SIFT-based image retrieval suggest encouraging performance. PMID:25983362
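The MEMD algorithm itself is not reproduced here, but the cross-bin idea that distinguishes the EMD family from bin-wise distances such as chi-square is easy to see in the classic one-dimensional special case, where the EMD with unit ground distance between adjacent bins reduces to the L1 distance between cumulative histograms:

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal total
    mass, with unit ground distance between adjacent bins: the sum of
    absolute differences of the running (cumulative) surplus."""
    assert len(p) == len(q)
    flow, total = 0.0, 0.0
    for pi, qi in zip(p, q):
        flow += pi - qi   # surplus earth carried to the next bin
        total += abs(flow)
    return total
```

A bin-wise distance assigns the same cost to [0,1,0] vs [0,0,1] as to [1,0,0] vs [0,0,1]; the EMD charges the second pair twice as much because the earth must travel two bins instead of one.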
A novel approach toward fuzzy generalized bi-ideals in ordered semigroups.
Khan, Faiz Muhammad; Sarmin, Nor Haniza; Khan, Hidayat Ullah
2014-01-01
In several advanced fields like control engineering, computer science, fuzzy automata, finite state machines, and error-correcting codes, the use of fuzzified algebraic structures, especially ordered semigroups, plays a central role. In this paper, we introduce a new and advanced generalization of fuzzy generalized bi-ideals of ordered semigroups. These new concepts are supported by suitable examples. These new notions are generalizations of ordinary fuzzy generalized bi-ideals of ordered semigroups. Several fundamental theorems of ordered semigroups are investigated via the properties of these newly defined fuzzy generalized bi-ideals. Further, using level sets, ordinary fuzzy generalized bi-ideals are linked with these newly defined ideals, which is the most significant part of this paper. PMID:24883375
A Quasi-Lie Schemes Approach to Second-Order Gambier Equations
NASA Astrophysics Data System (ADS)
Cariñena, José F.; Guha, Partha; de Lucas, Javier
2013-03-01
A quasi-Lie scheme is a geometric structure that provides t-dependent changes of variables transforming members of an associated family of systems of first-order differential equations into members of the same family. In this note we introduce two quasi-Lie schemes for studying second-order Gambier equations in a geometric way. This allows us to study the transformation of these equations into simpler canonical forms, which solves a gap in the previous literature, and other relevant differential equations, which leads to derive new constants of motion for families of second-order Gambier equations. Additionally, we describe general solutions of certain second-order Gambier equations in terms of particular solutions of Riccati equations, linear systems, and t-dependent frequency harmonic oscillators.
Entropy Computation in Partially Observed Markov Chains
NASA Astrophysics Data System (ADS)
Desbouvries, François
2006-11-01
Let X = {X_n}_{n∈N} be a hidden process and Y = {Y_n}_{n∈N} be an observed process. We assume that (X,Y) is a (pairwise) Markov Chain (PMC). PMC are more general than Hidden Markov Chains (HMC) and yet enable the development of efficient parameter estimation and Bayesian restoration algorithms. In this paper we propose a fast (i.e., O(N)) algorithm for computing the entropy of {X_n}_{n=0}^{N} given an observation sequence {y_n}_{n=0}^{N}.
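For the HMC special case, the O(N) forward recursion for H(X_{0:N} | y_{0:N}) can be sketched as follows: it tracks, for each state, the filtered posterior and the entropy of the past conditioned on the current state. This is a sketch of the standard HMC recursion, not the PMC generalization of the paper.

```python
import math

def hmm_posterior_entropy(A, B, pi, obs):
    """Entropy H(X_{0:N} | y_{0:N}) of a hidden Markov chain given the
    observations, in one O(N) forward pass.
    A[i][j] = P(x'=j | x=i), B[i][k] = P(y=k | x=i), pi[i] = P(x_0=i)."""
    S = len(pi)
    # alpha[i] = P(X_n=i | y_{0:n});  H[i] = H(X_{0:n-1} | X_n=i, y_{0:n})
    alpha = [pi[i] * B[i][obs[0]] for i in range(S)]
    z = sum(alpha)
    alpha = [a / z for a in alpha]
    H = [0.0] * S
    for y in obs[1:]:
        new_H, new_alpha = [0.0] * S, [0.0] * S
        for j in range(S):
            # w[i] proportional to P(X_{n-1}=i | X_n=j, past observations)
            w = [alpha[i] * A[i][j] for i in range(S)]
            wz = sum(w)
            new_alpha[j] = wz * B[j][y]
            if wz > 0:
                w = [wi / wz for wi in w]
                new_H[j] = sum(wi * (H[i] - math.log(wi))
                               for i, wi in enumerate(w) if wi > 0)
        z = sum(new_alpha)
        alpha = [a / z for a in new_alpha]
        H = new_H
    # chain rule: entropy of the past plus entropy of the final state
    return sum(a * (h - math.log(a)) for a, h in zip(alpha, H) if a > 0)
```

Two sanity checks: if the observations carry no information the posterior entropy equals the full chain entropy, and if they reveal the state exactly it is zero.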
MARKOV Model Application to Proliferation Risk Reduction of an Advanced Nuclear System
Bari,R.A.
2008-07-13
The Generation IV International Forum (GIF) emphasizes proliferation resistance and physical protection (PR&PP) as a main goal for future nuclear energy systems. The GIF PR&PP Working Group has developed a methodology for the evaluation of these systems. As an application of the methodology, a Markov model has been developed for the evaluation of proliferation resistance and is demonstrated for a hypothetical Example Sodium Fast Reactor (ESFR) system. This paper presents the case of diversion by the facility owner/operator to obtain material that could be used in a nuclear weapon. The Markov model is applied to evaluate material diversion strategies. The following features of the Markov model are presented here: (1) an effective detection rate has been introduced to account for the implementation of multiple safeguards approaches at a given strategic point; (2) technical failure to divert material is modeled as intrinsic barriers related to the design of the facility or the properties of the material in the facility; and (3) concealment to defeat or degrade the performance of safeguards is recognized in the Markov model. Three proliferation risk measures are calculated directly by the Markov model: the detection probability, the technical failure probability, and the proliferation time. The material type is indicated by an index that is based on the quality of the material diverted. Sensitivity cases were run to demonstrate the effects of different modeling features on the measures of proliferation resistance.
Estimation with Right-Censored Observations Under A Semi-Markov Model
Hu, X. Joan
2013-01-01
The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study. PMID:23874060
Markov analysis of stochastic resonance in a periodically driven integrate-and-fire neuron
NASA Astrophysics Data System (ADS)
Plesser, Hans E.; Geisel, Theo
1999-06-01
We model the dynamics of the leaky integrate-and-fire neuron under periodic stimulation as a Markov process with respect to the stimulus phase. This avoids the unrealistic assumption of a stimulus reset after each spike made in earlier papers and thus solves the long-standing reset problem. The neuron exhibits stochastic resonance, both with respect to input noise intensity and stimulus frequency. The latter resonance arises by matching the stimulus frequency to the refractory time of the neuron. The Markov approach can be generalized to other periodically driven stochastic processes containing a reset mechanism.
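A direct simulation makes the phase-based description concrete: the membrane potential is integrated under a periodic drive plus noise, and each threshold crossing is recorded by its stimulus phase; crucially, the stimulus is not reset at a spike. A sketch with illustrative parameter values (not taken from the paper):

```python
import math
import random

def lif_spike_phases(mu=0.9, amp=0.3, freq=1.0, noise=0.2,
                     dt=1e-3, t_max=200.0, seed=0):
    """Leaky integrate-and-fire neuron driven by a periodic stimulus:
    dV = (mu + amp*cos(2*pi*freq*t) - V) dt + noise dW, with threshold 1
    and reset to 0.  Returns the stimulus phase (in [0,1)) of each spike;
    the spike train is Markov with respect to this phase."""
    rng = random.Random(seed)
    v, t, phases = 0.0, 0.0, []
    sq_dt = math.sqrt(dt)
    while t < t_max:
        drive = mu + amp * math.cos(2 * math.pi * freq * t)
        v += (drive - v) * dt + noise * sq_dt * rng.gauss(0.0, 1.0)
        t += dt
        if v >= 1.0:
            phases.append((freq * t) % 1.0)  # stimulus phase at the spike
            v = 0.0                          # reset the neuron, not the stimulus
    return phases
```

A histogram of the returned phases exposes the phase locking that underlies the stochastic resonance with respect to stimulus frequency.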
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a
NASA Astrophysics Data System (ADS)
Pan, Weichun; Kolomeisky, Anatoly B.; Vekilov, Peter G.
2005-05-01
Nucleation of ordered solid phases of proteins triggers numerous phenomena in the laboratory, in industry, and in healthy and sick organisms. Recent simulations and experiments with protein crystals suggest that the formation of an ordered crystalline nucleus is preceded by a disordered high-density cluster, akin to a droplet of high-density liquid that has been observed with some proteins; this mechanism allowed a qualitative explanation of recorded complex nucleation kinetics curves. Here, we present a simple phenomenological theory that takes into account intermediate high-density metastable states in the nucleation process. Nucleation rate data at varying temperature and protein concentration are reproduced with high fidelity using literature values of the thermodynamic and kinetic parameters of the system. Our calculations show that the growth rate of the near-critical and supercritical ordered clusters within the dense intermediate is a major factor in the overall nucleation rate. This highlights the role of viscosity within the dense intermediate for the formation of the ordered nucleus. The model provides an understanding of the action of additives that delay or accelerate nucleation and presents a framework within which the nucleation of other ordered protein solid phases, e.g., the sickle cell hemoglobin polymers, can be analyzed.
Non-Markov dissipative dynamics of electron transfer in a photosynthetic reaction center
NASA Astrophysics Data System (ADS)
Poddubnyy, V. V.; Glebov, I. O.; Eremin, V. V.
2014-02-01
We consider the dissipative dynamics of electron transfer in the photosynthetic reaction center of purple bacteria and propose a model where the transition between electron states arises only due to the interaction between a chromophore system and the protein environment and is not accompanied by the motion of nuclei of the reaction subsystem. We establish applicability conditions for the Markov approximation in the framework of this model and show that these conditions are not necessarily satisfied in the protein medium. We represent the spectral function of the "system+heat bath" interaction in the form of one or several Gaussian functions to study specific characteristics of the non-Markov dynamics of the final state population, such as the presence of an induction period and oscillations. The consistency of the computational results obtained for non-Markov dynamics with experimental data confirms the correctness of the proposed approach.
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
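The MCMC machinery behind such estimation can be sketched generically; the standard-normal target below is a stand-in for the GGUM likelihood, and the step size is an illustrative tuning choice:

```python
import math
import random

random.seed(7)

# Random-walk Metropolis sketch: propose a perturbed parameter value and
# accept it with probability min(1, target_ratio).  The log-target here is
# a stand-in density, not the GGUM item-response likelihood.

def log_target(theta):
    return -0.5 * theta ** 2  # standard normal, up to a constant

def metropolis(n_samples, step=1.0):
    theta, samples = 0.0, []
    for _ in range(n_samples):
        prop = theta + random.uniform(-step, step)
        if math.log(random.random()) < log_target(prop) - log_target(theta):
            theta = prop
        samples.append(theta)
    return samples

samples = metropolis(50_000)
mean = sum(samples) / len(samples)
print(round(mean, 1))  # near 0.0 for this symmetric target
```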
A First and Second Order Moment Approach to Probabilistic Control Synthesis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology based on the estimation of the first two order moments of the random variables and processes that describe the controlled response. Synthesis is performed by solving a multi-objective optimization problem in which stability and performance requirements in the time and frequency domains are integrated. The use of the first two order moments allows for efficient estimation of the cost function and thus for a faster synthesis algorithm. While reliability requirements are taken into account by using bounds on failure probabilities, requirements related to undesirable variability are implemented by quantifying the concentration of the random outcome about a deterministic target. Hammersley Sequence Sampling and First- and Second-Moment Second-Order approximations are used to estimate the moments, whose accuracy and associated computational complexity are compared numerically. Examples using output feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
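The sampling-based side of such moment estimation can be illustrated with a minimal Hammersley point set; the response function g and the sample count are stand-ins for the controlled system, not the paper's setup:

```python
# Estimate the first two moments of a response g over two uniform inputs
# using a 2-D Hammersley point set (i/n paired with the base-2 radical
# inverse).  g is an illustrative stand-in response function.

def radical_inverse(i, base=2):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        inv += (i % base) / denom
        i //= base
    return inv

def hammersley_2d(n):
    return [(i / n, radical_inverse(i)) for i in range(n)]

def g(u, v):  # stand-in response function
    return u + v

pts = hammersley_2d(1024)
mean = sum(g(u, v) for u, v in pts) / len(pts)
var = sum((g(u, v) - mean) ** 2 for u, v in pts) / len(pts)
print(round(mean, 3), round(var, 3))
```

The low-discrepancy points cover the unit square far more evenly than pseudo-random draws, which is what makes the moment estimates converge quickly.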
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output actually enacted by an implemented prognostics-based control routine is used to define the action space of the formulated Markov process. The state space of the Markov process is defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics conveniently relates feasible system output performance modifications to predictions of future component health deterioration.
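The coupling between action (output derating) and state (remaining health) can be sketched as a toy Markov chain; the health levels, degradation probability, and linear action scaling are all illustrative assumptions, not the paper's model:

```python
import random

random.seed(0)

# Toy health Markov chain: component health takes levels 0..H, and a derating
# action u in [0, 1] scales the per-step degradation probability, so reduced
# output slows predicted health deterioration.

H = 10        # full health (assumed discretization)
P_DEG = 0.3   # degradation probability at full output (assumed)

def step(health, u):
    """One transition of the health chain under derating action u."""
    if health > 0 and random.random() < P_DEG * u:
        return health - 1
    return health

health = H
for _ in range(50):
    health = step(health, u=0.5)  # run at half output
print(health)
```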
Robust controller designs for second-order dynamic systems - A virtual passive approach
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1991-01-01
A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
Robust controller designs for second-order dynamic system: A virtual passive approach
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1990-01-01
A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
Obesity status transitions across the elementary years: Use of Markov chain modeling
Technology Transfer Automated Retrieval System (TEKTRAN)
Overweight and obesity status transition probabilities using first-order Markov transition models applied to elementary school children were assessed. Complete longitudinal data across eleven assessments were available from 1,494 elementary school children (from 7,599 students in 41 out of 45 school...
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is
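The nearest-neighbor chain described above can be sketched in a few lines; the uniform bias p = 0.7 is an illustrative special case of the per-pair biases p_xy:

```python
import random

random.seed(1)

# One step of the nearest-neighbor transposition chain: pick an adjacent
# pair, put it in order with probability p and out of order with 1 - p
# (a uniform-bias special case of the p_xy parameters).

def mnn_step(perm, p=0.7):
    i = random.randrange(len(perm) - 1)
    in_order = perm[i] < perm[i + 1]
    if random.random() < p:
        if not in_order:  # sort the pair with probability p
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
    else:
        if in_order:      # unsort it with probability 1 - p
            perm[i], perm[i + 1] = perm[i + 1], perm[i]

perm = list(range(8))
random.shuffle(perm)
for _ in range(10_000):
    mnn_step(perm)
print(perm)  # with positive bias the chain concentrates near sorted order
```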
Trajectory classification using switched dynamical hidden Markov models.
Nascimento, Jacinto C; Figueiredo, Mario; Marques, Jorge S
2010-05-01
This paper proposes an approach for recognizing human activities (more specifically, pedestrian trajectories) in video sequences, in a surveillance context. A system for automatic processing of video information for surveillance purposes should be capable of detecting, recognizing, and collecting statistics of human activity, reducing human intervention as much as possible. In the method described in this paper, human trajectories are modeled as a concatenation of segments produced by a set of low level dynamical models. These low level models are estimated in an unsupervised fashion, based on a finite mixture formulation, using the expectation-maximization (EM) algorithm; the number of models is automatically obtained using a minimum message length (MML) criterion. This leads to a parsimonious set of models tuned to the complexity of the scene. We describe the switching among the low-level dynamic models by a hidden Markov chain; thus, the complete model is termed a switched dynamical hidden Markov model (SD-HMM). The performance of the proposed method is illustrated with real data from two different scenarios: a shopping center and a university campus. A set of human activities in both scenarios is successfully recognized by the proposed system. These experiments show the ability of our approach to properly describe trajectories with sudden changes. PMID:20051342
Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution
Djordjevic, Ivan B.
2015-01-01
Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we have derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and it is not able to properly model the propagation of mutation errors in time, the process of aging, and evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the process of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) Markovian classical model, (ii) Markovian-like quantum model, and (iii) hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually
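The classical Markovian case (model (i)) can be illustrated with a toy nucleotide-substitution chain; the Jukes-Cantor-style transition matrix and mutation rate below are assumptions for illustration, not the paper's channel model:

```python
# Propagate a nucleotide distribution through generations as repeated
# matrix-vector products with a per-generation mutation matrix.

mu = 0.01  # assumed per-generation mutation probability
# Jukes-Cantor-style matrix: stay with prob 1 - mu, mutate uniformly otherwise.
P = [[1 - mu if i == j else mu / 3 for j in range(4)] for i in range(4)]

def propagate(dist, generations):
    for _ in range(generations):
        dist = [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]
    return dist

dist = [1.0, 0.0, 0.0, 0.0]        # start surely in base A
dist = propagate(dist, 200)
print([round(p, 3) for p in dist])  # drifts toward the uniform distribution
```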
Markov Jump Linear Systems-Based Position Estimation for Lower Limb Exoskeletons
Nogueira, Samuel L.; Siqueira, Adriano A. G.; Inoue, Roberto S.; Terra, Marco H.
2014-01-01
In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimate approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link position estimation (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties. PMID:24451469
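The per-link Kalman filtering baseline that the Markovian approach improves on can be sketched in one dimension; the process and measurement noise values are illustrative assumptions, not identified sensor parameters:

```python
# Minimal 1-D Kalman filter fusing a gyro-integrated angle (prediction) with
# a direct angle measurement (update), in the spirit of per-link IMU
# estimation.  q and r are assumed noise variances.

def kalman_step(x, P, rate, z, dt, q=1e-4, r=1e-2):
    # Predict: integrate the measured angular rate.
    x_pred = x + rate * dt
    P_pred = P + q
    # Update with the angle measurement z.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for _ in range(100):
    x, P = kalman_step(x, P, rate=0.0, z=1.0, dt=0.01)
print(round(x, 3))  # → 1.0
```

With a constant measurement the estimate converges to it and the covariance settles at a small steady-state value; the Markov jump formulation in the paper instead lets all links share inertial information.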
NASA Astrophysics Data System (ADS)
Hasunuma, Takumi; Kaneko, Tatsuya; Miyakoshi, Shohei; Ohta, Yukinori
2016-07-01
The variational cluster approximation is used to study the ground-state properties and single-particle spectra of the three-component fermionic Hubbard model defined on the two-dimensional square lattice at half filling. First, we show that either a paired Mott state or color-selective Mott state is realized in the paramagnetic system, depending on the anisotropy in the interaction strengths, except around the SU(3) symmetric point, where a paramagnetic metallic state is maintained. Then, by introducing Weiss fields to observe spontaneous symmetry breakings, we show that either a color-density-wave state or color-selective antiferromagnetic state is realized depending on the interaction anisotropy and that the first-order phase transition between these two states occurs at the SU(3) point. We moreover show that these staggered orders originate from the gain in potential energy (or Slater mechanism) near the SU(3) point but originate from the gain in kinetic energy (or Mott mechanism) when the interaction anisotropy is strong. The staggered orders near the SU(3) point disappear when the next-nearest-neighbor hopping parameters are introduced, indicating that these orders are fragile, protected only by the Fermi surface nesting.
Does Higher-Order Thinking Impinge on Learner-Centric Digital Approach?
ERIC Educational Resources Information Center
Mathew, Bincy; Raja, B. William Dharma
2015-01-01
Humans are social beings, and social cognition focuses on how one forms impressions of other people, how one interprets the meaning of other people's behaviour, and how one's behaviour is affected by attitudes. The school provides complex social situations, and in order to thrive, students must possess social cognition, the process of thinking about…
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools, and for smooth problems this is best accomplished with methods of very high order in space and time on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.
2008-01-01
A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
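For contrast with SURE's algebraic bounds, a naive Monte Carlo estimate of a death-state probability for a toy semi-Markov model looks like this; the rates and mission time are illustrative, and the rare-event outcome shows why simulation is impractical for ultra-reliable systems:

```python
import random

random.seed(3)

# Toy semi-Markov model: slow exponential fault arrivals (rate LAM) and fast
# exponential recovery (rate MU); a second fault arriving before recovery
# completes is treated as a death state (near-coincident fault).

T = 10.0    # mission time, illustrative
LAM = 1e-2  # slow fault arrival rate
MU = 1e3    # fast recovery rate

def one_mission():
    t, state = 0.0, "ok"
    while t < T:
        if state == "ok":
            t += random.expovariate(LAM)        # wait for a fault
            state = "recovering"
        else:
            dt_rec = random.expovariate(MU)     # fast recovery
            dt_second = random.expovariate(LAM) # competing second fault
            if dt_second < dt_rec:
                return True                     # death state reached
            t += dt_rec
            state = "ok"
    return False

n = 100_000
p_death = sum(one_mission() for _ in range(n)) / n
print(p_death)
```

The true probability here is on the order of 1e-6, so 100,000 missions typically record zero failures; SURE's upper and lower bounding theorems avoid this rare-event sampling problem entirely.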
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
Birth order effects on the separation process in young adults: an evolutionary and dynamic approach.
Ziv, Ido; Hermel, Orly
2011-01-01
The present study analyzes the differential contribution of a familial or social focus in imaginative ideation (the personal fable and imagined audience mental constructs) to the separation-individuation process of firstborn, middleborn, and lastborn children. A total of 160 young adults were divided into 3 groups by birth order. Participants' separation-individuation process was evaluated by the Psychological Separation Inventory, and results were cross-validated by the Pathology of Separation-Individuation Inventory. The Imaginative Ideation Inventory tested the relative dominance of the familial and social environments in participants' mental constructs. The findings showed that middleborn children had attained more advanced separation and were lower in family-focused ideation and higher in nonfamilial social ideation. However, the familial and not the social ideation explained the variance in the separation process in all the groups. The findings offer new insights into the effects of birth order on separation and individuation in adolescents and young adults. PMID:21977689
Universal order parameters and quantum phase transitions: a finite-size approach.
Shi, Qian-Qian; Zhou, Huan-Qiang; Batchelor, Murray T
2015-01-01
We propose a method to construct universal order parameters for quantum phase transitions in many-body lattice systems. The method exploits the H-orthogonality of a few near-degenerate lowest states of the Hamiltonian describing a given finite-size system, which makes it possible to perform finite-size scaling and take full advantage of currently available numerical algorithms. An explicit connection is established between the fidelity per site between two H-orthogonal states and the energy gap between the ground state and low-lying excited states in the finite-size system. The physical information encoded in this gap arising from finite-size fluctuations clarifies the origin of the universal order parameter. We demonstrate the procedure for the one-dimensional quantum formulation of the q-state Potts model, for q = 2, 3, 4 and 5, as prototypical examples, using finite-size data obtained from the density matrix renormalization group algorithm. PMID:25567585
Nexus Between Protein–Ligand Affinity Rank-Ordering, Biophysical Approaches, and Drug Discovery
2013-01-01
The confluence of computational and biophysical methods to accurately rank-order the binding affinities of small molecules and determine structures of macromolecular complexes is a potentially transformative advance in the work flow of drug discovery. This viewpoint explores the impact that advanced computational methods may have on the efficacy of small molecule drug discovery and optimization, particularly with respect to emerging fragment-based methods. PMID:24900579
A general approach for high order absorbing boundary conditions for the Helmholtz equation
NASA Astrophysics Data System (ADS)
Zarmi, Asaf; Turkel, Eli
2013-06-01
When solving a scattering problem in an unbounded space, one needs to implement the Sommerfeld condition as a boundary condition at infinity, to ensure no energy penetrates the system. In practice, solving a scattering problem involves truncating the region and implementing a boundary condition on an artificial outer boundary. Bayliss, Gunzburger and Turkel (BGT) suggested an Absorbing Boundary Condition (ABC) as a sequence of operators aimed at annihilating elements from the solution's series representation. Their method was practical only up to a second order condition. Later, Hagstrom and Hariharan (HH) suggested a method which used auxiliary functions and enabled implementation of higher order conditions. We compare various absorbing boundary conditions (ABCs) and introduce a new method to construct high order ABCs, generalizing the HH method. We then derive from this general method ABCs based on different series representations of the solution to the Helmholtz equation - in polar, elliptical and spherical coordinates. Some of these ABCs are generalizations of previously constructed ABCs and some are new. These new ABCs produce accurate solutions to the Helmholtz equation, which are much less dependent on the various parameters of the problem, such as the value of k, or the eccentricity of the ellipse. In addition to constructing new ABCs, our general method sheds light on the connection between various ABCs. Computations are presented to verify the high accuracy of these new ABCs.
Quevedo González, Fernando José; Nuño, Natalia
2016-06-01
The mechanical properties of well-ordered porous materials are related to their geometrical parameters at the mesoscale. Finite element (FE) analysis is a powerful tool to design well-ordered porous materials by analysing the mechanical behaviour. However, FE models are often computationally expensive. This article aims to develop a cost-effective FE model to simulate well-ordered porous metallic materials for orthopaedic applications. Solid and beam FE modelling approaches are compared, using finite size and infinite media models considering cubic unit cell geometry. The model is then applied to compare two unit cell geometries: cubic and diamond. Models having finite size provide results similar to those of the infinite media model approach for large sample sizes. In addition, these finite size models also capture the influence of the boundary conditions on the mechanical response for small sample sizes. The beam FE modelling approach required little computational cost and gave results similar to those of the solid FE modelling approach. Diamond unit cell geometry appeared to be more suitable for orthopaedic applications than the cubic unit cell geometry. PMID:26260268
A Kramers-Moyal Approach to the Analysis of Third-Order Noise with Applications in Option Valuation
Popescu, Dan M.; Lipan, Ovidiu
2015-01-01
We propose the use of the Kramers-Moyal expansion in the analysis of third-order noise. In particular, we show how the approach can be applied in the theoretical study of option valuation. Despite Pawula’s theorem, which states that a truncated model may exhibit poor statistical properties, we show that for a third-order Kramers-Moyal truncation model of an option’s and its underlier’s price, important properties emerge: (i) the option price can be written in a closed analytical form that involves the Airy function, (ii) the price is a positive function for positive skewness in the distribution, (iii) for negative skewness, the price becomes negative only for price values that are close to zero. Moreover, using third-order noise in option valuation reveals additional properties: (iv) the inconsistencies between two popular option pricing approaches (using a “delta-hedged” portfolio and using an option replicating portfolio) that are otherwise equivalent up to the second moment, (v) the ability to develop a measure R of how accurately an option can be replicated by a mixture of the underlying stocks and cash, (vi) further limitations of second-order models revealed by introducing third-order noise. PMID:25625856
A Kramers-Moyal approach to the analysis of third-order noise with applications in option valuation.
Popescu, Dan M; Lipan, Ovidiu
2015-01-01
We propose the use of the Kramers-Moyal expansion in the analysis of third-order noise. In particular, we show how the approach can be applied in the theoretical study of option valuation. Despite Pawula's theorem, which states that a truncated model may exhibit poor statistical properties, we show that for a third-order Kramers-Moyal truncation model of an option's and its underlier's price, important properties emerge: (i) the option price can be written in a closed analytical form that involves the Airy function, (ii) the price is a positive function for positive skewness in the distribution, (iii) for negative skewness, the price becomes negative only for price values that are close to zero. Moreover, using third-order noise in option valuation reveals additional properties: (iv) the inconsistencies between two popular option pricing approaches (using a "delta-hedged" portfolio and using an option replicating portfolio) that are otherwise equivalent up to the second moment, (v) the ability to develop a measure R of how accurately an option can be replicated by a mixture of the underlying stocks and cash, (vi) further limitations of second-order models revealed by introducing third-order noise. PMID:25625856
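For reference, the truncation in question keeps the first three terms of the Kramers-Moyal expansion (standard form; the coefficients D^{(n)} are generic here, not the paper's calibrated price dynamics):

```latex
\frac{\partial p(x,t)}{\partial t}
  = \sum_{n=1}^{3} \left(-\frac{\partial}{\partial x}\right)^{\!n}
    \left[ D^{(n)}(x)\, p(x,t) \right],
```

where the n = 3 coefficient carries the skewness of the noise. Pawula's theorem states that any truncation at finite order above two cannot yield a strictly non-negative density, which is exactly the limitation items (ii) and (iii) above quantify for the option price.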
Learning a Markov Logic network for supervised gene regulatory network inference
2013-01-01
Background Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN) that combine features of probabilistic graphical models with the expressivity of first-order logic rules. Results We propose to learn a Markov Logic network, i.e., a set of weighted rules that conclude on the predicate "regulates", starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on a balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black
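The asymmetric bagging scheme described above can be sketched independently of the underlying learner; `train` and `predict` below are toy stand-ins for the MLN classifier, and all data values are illustrative:

```python
import random

random.seed(42)

# Asymmetric bagging: keep every (rare) positive example in each bag,
# resample an equal number of negatives, train one model per bag, and
# average the ensemble's predictions.

def asymmetric_bagging(pos, neg, train, n_bags=5):
    models = []
    for _ in range(n_bags):
        bag_neg = [random.choice(neg) for _ in range(len(pos))]
        models.append(train(pos + bag_neg))
    return models

def ensemble_predict(models, predict, example):
    return sum(predict(m, example) for m in models) / len(models)

# Toy stand-in classifier: threshold on the mean feature value of the bag.
def train(examples):
    return sum(x for x, _ in examples) / len(examples)

def predict(model, example):
    return 1.0 if example[0] >= model else 0.0

pos = [(1.0, 1)] * 3   # few known regulations
neg = [(0.0, 0)] * 50  # many non-regulations
models = asymmetric_bagging(pos, neg, train)
print(ensemble_predict(models, predict, (1.0, None)))  # 1.0
```

Because every bag is balanced, no single model is swamped by the negative class, while averaging over bags still uses all the negative examples.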
NASA Technical Reports Server (NTRS)
Pasha, M. A.; Dazzo, J. J.; Silverthorn, J. T.
1982-01-01
An investigation of approach and landing longitudinal flying qualities, based on data generated using a variable stability NT-33 aircraft combined with significant control system dynamics is described. An optimum pilot lead time for pitch tracking, flight path angle tracking, and combined pitch and flight path angle tracking tasks is determined from a closed loop simulation using integral squared error (ISE) as a performance measure. Pilot gain and lead time were varied in the closed loop simulation of the pilot and aircraft to obtain the best performance for different control system configurations. The results lead to the selection of an optimum lead time using ISE as a performance criterion. Using this value of optimum lead time, a correlation is then found between pilot rating and performance with changes in the control system and in the aircraft dynamics. It is also shown that pilot rating is closely related to pilot workload which, in turn, is related to the amount of lead which the pilot must generate to obtain satisfactory response. The results also indicate that the pilot may use pitch angle tracking for the approach task and then add flight path angle tracking for the flare and touchdown.
First-order neutron-deuteron scattering in a three-dimensional approach
NASA Astrophysics Data System (ADS)
Topolnicki, K.; Golak, J.; Skibiński, R.; Witała, H.; Bertulani, C. A.
2015-10-01
The description of the neutron-deuteron scattering process has been possible using the partial wave approach since the 1980s (Few-Body Syst. 3, 123 (1988); Phys. Rep. 274, 107 (1996); Acta Phys. Pol. B 28, 1677 (1997)). In recent years the so-called "three-dimensional" formalism was developed, where the calculations are performed with operators acting directly on the three-dimensional degrees of freedom of the nucleons. This approach avoids a tedious step of the classical calculations, the partial wave decomposition of operators, and in this paper is applied to the neutron-deuteron scattering process. The calculations presented here are a first step toward a new calculation scheme that would make it possible to easily produce precise predictions for a wide range of nuclear force models. This paper is a continuation of the work presented in Eur. Phys. J. A 43, 339 (2010), where the breakup channel was considered in detail. The theoretical formulation used in this paper is very closely related to the formalism introduced in Eur. Phys. J. A 43, 339 (2010) and Phys. Rev. C 68, 054003 (2003); however, we work directly with the matrix representation of operators in the joined isospin-spin space of the three-nucleon system and use only the driving term of the three-nucleon Faddeev equations. This greatly simplifies the numerical realization of the calculation and also allows us to consider the elastic channel of the reaction.
Constraint-preserving boundary conditions in the 3+1 first-order approach
Bona, C.; Bona-Casas, C.
2010-09-15
A set of energy-momentum constraint-preserving boundary conditions is proposed for the first-order Z4 case. The stability of a simple numerical implementation is tested in the linear regime (robust stability test), both with the standard corner and vertex treatment and with a modified finite-differences stencil for boundary points which avoids corners and vertices even in Cartesian-like grids. Moreover, the proposed boundary conditions are tested in a strong-field scenario, the Gowdy waves metric, showing the expected rate of convergence. The accumulated amount of energy-momentum constraint violations is similar or even smaller than the one generated by either periodic or reflection conditions, which are exact in the Gowdy waves case. As a side theoretical result, a new symmetrizer is explicitly given, which extends the parametric domain of symmetric hyperbolicity for the Z4 formalism. The application of these results to first-order Baumgarte-Shapiro-Shibata-Nakamura-like formalisms is also considered.
A first-order time-domain Green's function approach to supersonic unsteady flow
NASA Technical Reports Server (NTRS)
Freedman, M. I.; Tseng, K.
1985-01-01
A time-domain Green's Function Method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range wherein the linear potential flow assumption is valid. The Green's function method is employed in order to convert the potential-flow differential equation into an integral one. This integral equation is then discretized, in space through standard finite-element technique, and in time through finite-difference, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. Consistent with the spatial linear (first-order) finite-element approximations, the potential and co-normalwash are assumed to vary linearly in time. The long range goal of our research is to develop a comprehensive theory for unsteady supersonic potential aerodynamics which is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.
Combinatorial approach to generalized Bell and Stirling numbers and boson normal ordering problem
Mendez, M.A.; Blasiak, P.; Penson, K.A.
2005-08-01
We consider the numbers arising in the problem of normal ordering of expressions in boson creation $a^\dagger$ and annihilation $a$ operators ($[a, a^\dagger] = 1$). We treat a general form of a boson string $(a^\dagger)^{r_n} a^{s_n} \cdots (a^\dagger)^{r_2} a^{s_2} (a^\dagger)^{r_1} a^{s_1}$ which is shown to be associated with generalizations of Stirling and Bell numbers. The recurrence relations and closed-form expressions (Dobinski-type formulas) are obtained for these quantities by both algebraic and combinatorial methods. By extensive use of methods of combinatorial analysis we prove the equivalence of the aforementioned problem to the enumeration of special families of graphs. This link provides a combinatorial interpretation of the numbers arising in this normal ordering problem.
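For the simplest string, $(a^\dagger a)^n$, the coefficients reduce to the classical Stirling numbers of the second kind, and their row sums give the Bell numbers. A minimal sketch of the standard recurrence (function names are my own, not the paper's notation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k): the coefficient in
    the normal ordering (a^dag a)^n = sum_k S(n, k) (a^dag)^k a^k."""
    if n == k:
        return 1              # covers S(0, 0) = 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number B(n) = sum_k S(n, k)."""
    return sum(stirling2(n, k) for k in range(n + 1))

print([stirling2(4, k) for k in range(5)])   # [0, 1, 7, 6, 1]
print(bell(5))                               # 52
```

The generalized numbers for arbitrary exponent sequences satisfy analogous (but multi-index) recurrences, which the paper derives in closed Dobinski-type form.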
Self-Organizing Hidden Markov Model Map (SOHMMM).
Ferles, Christos; Stafylopatis, Andreas
2013-12-01
A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. PMID:24001407
Cleveland, Sean B.; Davies, John; McClure, Marcella A.
2011-01-01
The goal of this bioinformatic study is to investigate sequence conservation in relation to evolutionary function/structure of the nucleoprotein of the order Mononegavirales. In the combined analysis of 63 representative nucleoprotein (N) sequences from four viral families (Bornaviridae, Filoviridae, Rhabdoviridae, and Paramyxoviridae) we predict the regions of protein disorder, intra-residue contact and co-evolving residues. Correlations between location and conservation of predicted regions illustrate a strong division between families while highlighting conservation within individual families. These results suggest that the conserved regions among the nucleoproteins, specifically within Rhabdoviridae and Paramyxoviridae, but also generally among all members of the order, reflect an evolutionary advantage in maintaining these sites for the viral nucleoprotein as part of the transcription/replication machinery. Results indicate conservation for disorder in the C-terminus region of the representative proteins that is important for interacting with the phosphoprotein and the large subunit polymerase during transcription and replication. Additionally, the C-terminus region of the protein preceding the disordered region is predicted to be important for interacting with the encapsidated genome. Portions of the N-terminus are responsible for N:N stability and interactions identified by the presence or lack of co-evolving intra-protein contact predictions. The validation of these prediction results by current structural information illustrates the benefits of the Disorder, Intra-residue contact and Compensatory mutation Correlator (DisICC) pipeline as a method for quickly characterizing proteins and providing the most likely residues and regions necessary to target for disruption in viruses that have little structural information available. PMID:21559282
Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks
NASA Astrophysics Data System (ADS)
Johnson, Joseph
2016-03-01
We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called ``metanumbers'') support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a ``supernet'' of all numerical information supporting new initiatives in AI.
Phase transition ordering-separation: A new approach to heat treatment of alloys
NASA Astrophysics Data System (ADS)
Ustinovshchikov, Yu. I.
2015-09-01
The consequences of heat treatment of alloys performed using the concept of an ordering-separation phase transition are considered. Fe50Cr50 and Ni88Al12 alloys and U13 steel are used as examples to show that this transition occurs at a temperature specific to each system, and that a change in the sign of the chemical interaction between alloy component atoms reverses the direction of diffusion fluxes in the alloys, which changes the type of microstructure. The detection of this phase transition radically changes the generally accepted concepts of heat treatment of alloys. This finding calls for transmission electron microscopy investigations to modify the phase diagrams where this phase transition was detected. It is concluded that quenching of alloys from a so-called solid-solution field, which is usually performed before tempering (aging), is an unnecessary and useless operation, since the final structure of an alloy forms upon tempering (aging) irrespective of the structure existing before this heat treatment.
First-order irreversible thermodynamic approach to a nonsteady RLC circuit as an energy converter
NASA Astrophysics Data System (ADS)
Valencia, G.; Arias, L. A.
2015-01-01
In this work we show an RLC circuit as an energy converter within the context of first-order irreversible thermodynamics (FOIT). For our analysis, we propose an isothermal model with transient and passive elements. With the help of the dynamic (Kirchhoff) equations, we find the generalized fluxes and forces of the circuit; the equation system shows symmetry of the cross terms, a property characteristic of steady-state linear systems, although in this case the phenomenological coefficients are functions of time. We can then use these relations, similar to the linear Onsager relations, to construct the characteristic functions of the RLC energy converter (power output, efficiency, dissipation, and ecological function) and study its energetic performance. The study of the converter's performance is based on two parameters, the coupling parameter and the "forces ratio" parameter, here as functions of time. We find that the behavior of the non-steady-state converter is similar to that of a steady-state energy converter. We explain the linear and symmetric behavior of the converter in frequency space rather than in time space. Finally, we establish optimal operation regimes of economic degree of coupling for this energy converter.
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms which are unable to reveal all OPSMs, since the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSMs will be missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method. PMID:26161131
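The defining property being mined here, that every selected row induces the same ordering of the selected columns, can be checked directly. A small illustrative sketch (the function name and toy data are my own, not the paper's ACS algorithm):

```python
import numpy as np

def is_opsm(matrix, rows, cols):
    """True if the selected submatrix is order-preserving: every chosen
    row induces the identical ordering of the chosen columns."""
    sub = np.asarray(matrix, dtype=float)[np.ix_(rows, cols)]
    orders = [tuple(np.argsort(row)) for row in sub]
    return all(o == orders[0] for o in orders)

data = [[1.0, 3.0, 2.0],
        [0.5, 9.0, 4.0],
        [2.0, 1.0, 5.0]]
print(is_opsm(data, [0, 1], [0, 1, 2]))   # True: both rows order cols 0 < 2 < 1
print(is_opsm(data, [0, 2], [0, 1, 2]))   # False: row 2 orders 1 < 0 < 2
```

The hard part, which the paper addresses, is searching over exponentially many row/column subsets without missing deep OPSMs.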
Gravitational-wave implications for structure formation: A second-order approach
NASA Astrophysics Data System (ADS)
Pazouli, Despoina; Tsagas, Christos G.
2016-03-01
Gravitational waves are propagating undulations in the spacetime fabric, which interact very weakly with their environment. In cosmology, gravitational-wave distortions are produced by most of the inflationary scenarios and their anticipated detection should open a new window to the early Universe. Motivated by the relative lack of studies on the potential implications of gravitational radiation for the large-scale structure of the Universe, we consider its coupling to density perturbations during the postrecombination era. We do so by assuming an Einstein-de Sitter background cosmology and by employing a second-order perturbation study. At this perturbative level and on superhorizon scales, we find that gravitational radiation adds a distinct and faster growing mode to the standard linear solution for the density contrast. Given the expected weakness of cosmological gravitational waves, however, the effect of the new mode is currently subdominant and it could start becoming noticeable only in the far future. Nevertheless, this still raises the intriguing possibility that the late-time evolution of large-scale density perturbations may be dictated by the long-range (the Weyl), rather than the local (the Ricci) component of the gravitational field.
Fractional-order elastic models of cartilage: A multi-scale approach
NASA Astrophysics Data System (ADS)
Magin, Richard L.; Royston, Thomas J.
2010-03-01
The objective of this research is to develop new quantitative methods to describe the elastic properties (e.g., shear modulus, viscosity) of biological tissues such as cartilage. Cartilage is a connective tissue that provides the lining for most of the joints in the body. Tissue histology of cartilage reveals a multi-scale architecture that spans a wide range from individual collagen and proteoglycan molecules to families of twisted macromolecular fibers and fibrils, and finally to a network of cells and extracellular matrix that form layers in the connective tissue. The principal cells in cartilage are chondrocytes that function at the microscopic scale by creating nano-scale networks of proteins whose biomechanical properties are ultimately expressed at the macroscopic scale in the tissue's viscoelasticity. The challenge for the bioengineer is to develop multi-scale modeling tools that predict the three-dimensional macro-scale mechanical performance of cartilage from micro-scale models. Magnetic resonance imaging (MRI) and MR elastography (MRE) provide a basis for developing such models based on the nondestructive biomechanical assessment of cartilage in vitro and in vivo. This approach, for example, uses MRI to visualize developing proto-cartilage structure, MRE to characterize the shear modulus of such structures, and fractional calculus to describe the dynamic behavior. Such models can be extended using hysteresis modeling to account for the non-linear nature of the tissue. These techniques extend the existing computational methods to predict stiffness and strength, to assess short versus long term load response, and to measure static versus dynamic response to mechanical loads over a wide range of frequencies (50-1500 Hz). In the future, such methods can perhaps be used to help identify early changes in regenerative connective tissue at the microscopic scale and to enable more effective diagnostic monitoring of the onset of disease.
Large deviations for Markov processes with resetting.
Meylahn, Janusz M; Sabhapandit, Sanjib; Touchette, Hugo
2015-12-01
Markov processes restarted or reset at random times to a fixed state or region in space have been actively studied recently in connection with random searches, foraging, and population dynamics. Here we study the large deviations of time-additive functions or observables of Markov processes with resetting. By deriving a renewal formula linking generating functions with and without resetting, we are able to obtain the rate function of such observables, characterizing the likelihood of their fluctuations in the long-time limit. We consider as an illustration the large deviations of the area of the Ornstein-Uhlenbeck process with resetting. Other applications involving diffusions, random walks, and jump processes with resetting or catastrophes are discussed. PMID:26764673
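As a toy version of the paper's illustration, one can simulate an Ornstein-Uhlenbeck process with Poissonian resetting and form a time-additive observable, here the time-averaged area. This sketch uses Euler-Maruyama discretization; all parameter choices are mine, not the authors':

```python
import math
import random

def ou_reset_area(T=100.0, dt=0.01, theta=1.0, sigma=1.0, r=0.5, seed=0):
    """Euler-Maruyama sketch of an Ornstein-Uhlenbeck process
    dx = -theta*x dt + sigma dW, reset to x = 0 at Poisson rate r.
    Returns the time-additive observable A = (1/T) * integral of x dt."""
    rng = random.Random(seed)
    x, area, t = 0.0, 0.0, 0.0
    while t < T:
        if rng.random() < r * dt:                # reset event
            x = 0.0
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        area += x * dt
        t += dt
    return area / T

# For long T the time average concentrates near 0 (the dynamics are
# symmetric about the reset point); the likelihood of its rare
# fluctuations is what the rate function quantifies.
print(ou_reset_area())
```

Repeating this over many trajectories and histogramming A gives a crude empirical counterpart to the rate function derived via the renewal formula.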
Equivalent Markov processes under gauge group
NASA Astrophysics Data System (ADS)
Caruso, M.; Jarne, C.
2015-11-01
We have studied Markov processes on a denumerable state space and in continuous time. We found that all these processes are connected via gauge transformations. We have used this result before as a method for solving equations, including a previous case in which the sample space is time-dependent [Phys. Rev. E 90, 022125 (2014), 10.1103/PhysRevE.90.022125]. We found a general solution through dilation of the state space, although the prior probability distribution of the states defined in this new space takes smaller values with respect to that in the initial problem. The gauge (local) group of dilations modifies the distribution on the dilated space to restore the original process. In this work, we show how Markov processes in general can be linked via gauge (local) transformations, and we present some illustrative examples of this result.
Markov-CA model using analytical hierarchy process and multiregression technique
NASA Astrophysics Data System (ADS)
Omar, N. Q.; Sanusi, S. A. M.; Hussin, W. M. W.; Samat, N.; Mohammed, K. S.
2014-06-01
The unprecedented increase in population and the rapid rate of urbanisation have led to extensive land-use changes. Cellular automata (CA) are increasingly used to simulate a variety of urban dynamics. This paper introduces a new CA based on an integration model combining multiple regression and multi-criteria evaluation to improve the representation of the CA transition rule. The multi-criteria evaluation is implemented by utilising data on the environmental and socioeconomic factors in the study area to produce suitability maps (SMs) using the analytical hierarchy process, a well-known method, before these are integrated to generate suitability maps for the periods from 1984 to 2010 under different decision makings, which condition the next step of CA generation. The suitability maps are compared in order to find the best maps based on the R2 values. This comparison can help the stakeholders make better decisions. The resulting suitability map then supplies a predefined transition rule for the final step of the CA model. The approach used in this study highlights a mechanism for monitoring and evaluating land-use and land-cover changes in Kirkuk city, Iraq, owing to changes in the structures of governments, wars, and an economic blockade over the past decades. The present study asserts the high applicability and flexibility of the Markov-CA model. The results show that the model and its interrelated concepts perform rather well.
Accurate Estimation of Protein Folding and Unfolding Times: Beyond Markov State Models.
Suárez, Ernesto; Adelman, Joshua L; Zuckerman, Daniel M
2016-08-01
Because standard molecular dynamics (MD) simulations are unable to access time scales of interest in complex biomolecular systems, it is common to "stitch together" information from multiple shorter trajectories using approximate Markov state model (MSM) analysis. However, MSMs may require significant tuning and can yield biased results. Here, by analyzing some of the longest protein MD data sets available (>100 μs per protein), we show that estimators constructed based on exact non-Markovian (NM) principles can yield significantly improved mean first-passage times (MFPTs) for protein folding and unfolding. In some cases, MSM bias of more than an order of magnitude can be corrected when identical trajectory data are reanalyzed by non-Markovian approaches. The NM analysis includes "history" information, higher order time correlations compared to MSMs, that is available in every MD trajectory. The NM strategy is insensitive to fine details of the states used and works well when a fine time-discretization (i.e., small "lag time") is used. PMID:27340835
A Bayesian method for construction of Markov models to describe dynamics on various time-scales
NASA Astrophysics Data System (ADS)
Rains, Emily K.; Andersen, Hans C.
2010-10-01
The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an NP×NP transition rate matrix for transitions between the mesostates in one mesoscopic time step, where NP is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most probable
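The counting step that underlies any such Bayesian construction can be sketched with a conjugate Dirichlet prior on each row of T. This is a simplified stand-in for the paper's method, which also infers the partition P; all names are illustrative:

```python
import numpy as np

def transition_counts(traj, n_states):
    """Count one-step transitions observed in a discrete trajectory."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(traj[:-1], traj[1:]):
        C[a, b] += 1.0
    return C

def posterior_mean_T(counts, alpha=1.0):
    """Posterior mean transition matrix under independent Dirichlet(alpha)
    priors on each row (conjugate to the per-row multinomial likelihood)."""
    post = counts + alpha
    return post / post.sum(axis=1, keepdims=True)

traj = [0, 0, 1, 1, 1, 0, 1, 0, 0]
T = posterior_mean_T(transition_counts(traj, 2))
print(T)   # each row is a probability vector (sums to 1)
```

Sampling rows from the full Dirichlet posterior, rather than taking the mean, gives the distribution over (P, T) that the paper's optimization searches.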
Antipersistent Markov behavior in foreign exchange markets
NASA Astrophysics Data System (ADS)
Baviera, Roberto; Pasquini, Michele; Serva, Maurizio; Vergni, Davide; Vulpiani, Angelo
2002-09-01
A quantitative check of efficiency in US dollar/Deutsche mark exchange rates is developed using high-frequency (tick by tick) data. The antipersistent Markov behavior of log-price fluctuations of given size implies, in principle, the possibility of a statistical forecast. We introduce and measure the available information of the quote sequence, and we show how it can be profitable following a particular trading rule.
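A crude version of the antipersistence check is the empirical probability that successive nonzero log-price moves reverse sign. This sketch illustrates the idea only; it is not the authors' information measure or trading rule:

```python
import math

def reversal_probability(prices):
    """Fraction of successive nonzero log-price moves with opposite
    signs; values above 0.5 indicate antipersistent behavior."""
    moves = [math.log(b / a) for a, b in zip(prices[:-1], prices[1:])]
    moves = [m for m in moves if m != 0.0]        # drop zero ticks
    pairs = list(zip(moves[:-1], moves[1:]))
    if not pairs:
        return 0.5                                # no evidence either way
    return sum(1 for u, v in pairs if u * v < 0) / len(pairs)

# A strictly alternating (maximally antipersistent) toy quote sequence:
print(reversal_probability([1.00, 1.01, 1.00, 1.01, 1.00]))
```

On real tick data the paper conditions on the size of the move, which is what makes the statistical forecast possible in principle.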
Programs Help Create And Evaluate Markov Models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Pade Approximation With Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) computer programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. They produce exact solutions for the probabilities of system failures and provide conservative estimates of the number of significant digits in the solutions. They are also offered as part of a bundled package with SURE and ASSIST, two other reliability analysis programs developed by the Systems Validation Methods group at Langley Research Center.
Numerical methods in Markov chain modeling
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef; Stewart, William J.
1989-01-01
Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
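The core linear-algebra problem described here, an eigenvector for the known eigenvalue 1, can be sketched directly. This dense NumPy version is for illustration; the paper's Krylov methods target large sparse matrices:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a row-stochastic matrix P: the left
    eigenvector for the known eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    i = int(np.argmin(np.abs(w - 1.0)))   # pick the eigenvalue nearest 1
    pi = np.real(v[:, i])
    return pi / pi.sum()                  # fixes sign and normalization

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary_distribution(P)
print(pi)   # approximately [0.833, 0.167]
```

Equivalently, one can solve the homogeneous singular system pi (P - I) = 0 subject to sum(pi) = 1, which is the formulation the Krylov methods exploit.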
The cutoff phenomenon in finite Markov chains.
Diaconis, P
1996-01-01
Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633
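The Ehrenfest urn cutoff can be observed numerically by evolving the exact distribution of the urn chain and tracking total-variation distance to the binomial stationary law. A small sketch; the laziness (holding probability 1/2) is my addition to avoid periodicity:

```python
import numpy as np
from math import comb

def ehrenfest_tv(n_balls, steps):
    """Total-variation distance to stationarity for a lazy Ehrenfest urn
    after `steps` moves.  State k = number of balls in the left urn;
    each step, with prob. 1/2 do nothing, else move a uniformly chosen
    ball to the other urn.  Stationary law is Binomial(N, 1/2)."""
    N = n_balls
    P = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        P[k, k] = 0.5                      # lazy step avoids periodicity
        if k > 0:
            P[k, k - 1] = 0.5 * k / N      # a left-urn ball moves right
        if k < N:
            P[k, k + 1] = 0.5 * (N - k) / N
    pi = np.array([comb(N, k) for k in range(N + 1)], dtype=float) / 2.0 ** N
    mu = np.zeros(N + 1)
    mu[0] = 1.0                            # start: all balls in the right urn
    for _ in range(steps):
        mu = mu @ P
    return 0.5 * np.abs(mu - pi).sum()

# Distance stays near 1 for a long time, then drops sharply
# (the cutoff) after order N log N steps:
for t in (1, 50, 200, 400):
    print(t, round(ehrenfest_tv(50, t), 3))
```

Plotting the distance against the step count for several N makes the sharpening of the transition, the cutoff phenomenon, visible.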
NASA Astrophysics Data System (ADS)
Li, Xuesong; Northrop, William F.
2016-04-01
This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation approach to solve for the multiple-scattering angular distribution (AD) that can calculate the AD accurately while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and deterministic reconstruction algorithm development with AD measurements.
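The idea of composing single-scattering transitions can be sketched on a discretized angular state space: n scattering events correspond to the n-th matrix power, and the slab AD is a mixture over the number of events. The forward-peaked kernel and Poisson event count below are my assumptions for illustration, not the paper's model:

```python
import math
import numpy as np

def multiple_scatter_ad(n_angles=90, g=0.5, mean_events=3.0, max_events=30):
    """Markov-chain sketch of the angular distribution (AD) after
    multiple scattering.  Directions are discretized on [0, 2*pi); one
    event is a row-stochastic transition matrix from a forward-peaked
    (Henyey-Greenstein-like) kernel with anisotropy g; the slab AD is a
    Poisson-weighted mixture of matrix powers."""
    theta = np.arange(n_angles) * (2.0 * math.pi / n_angles)
    diff = theta[None, :] - theta[:, None]
    kernel = (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * np.cos(diff)) ** 1.5
    P = kernel / kernel.sum(axis=1, keepdims=True)
    cur = np.zeros(n_angles)
    cur[0] = 1.0                         # incident beam along theta = 0
    ad = np.zeros(n_angles)
    for n in range(max_events + 1):
        w = math.exp(-mean_events) * mean_events**n / math.factorial(n)
        ad += w * cur                    # after n events: cur = mu P^n
        cur = cur @ P
    return ad / ad.sum()

ad = multiple_scatter_ad()
print(ad.argmax())   # the forward direction stays most probable
```

The computational saving over Monte Carlo comes from reusing the single-event matrix: every additional scattering order is one matrix-vector product rather than a fresh ensemble of photon paths.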
Moldenhauer, Jacob; Ishak, Mustapha E-mail: mishak@utdallas.edu
2009-12-01
We compare higher order gravity models to observational constraints from magnitude-redshift supernova data, the distance to the last scattering surface of the CMB, and Baryon Acoustic Oscillations. We follow a recently proposed systematic approach to higher order gravity models based on minimal sets of curvature invariants, and select models that pass some physical acceptability conditions (free of ghost instabilities, real and positive propagation speeds, and free of separatrices). Models that satisfy these physical and observational constraints are found in this analysis and provide fits to the data that are very close to those of the LCDM concordance model. However, we find that the limitation of the models considered here comes from the presence of superluminal mode propagations for the constrained parameter space of the models.
From empirical data to time-inhomogeneous continuous Markov processes
NASA Astrophysics Data System (ADS)
Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G.
2016-03-01
We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. The bridge between the rigorous mathematical results on the existence of generators and their computational implementation is also discussed. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly addressed.
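For the time-homogeneous special case, the generator test reduces to checking whether the principal matrix logarithm of the transition matrix is a valid rate matrix (nonnegative off-diagonals, zero row sums). A minimal sketch, restricted to diagonalizable matrices with positive real spectra; the paper's time-inhomogeneous criteria go well beyond this:

```python
import numpy as np

def has_homogeneous_generator(P, tol=1e-9):
    """Check whether a stochastic matrix P admits a time-homogeneous
    generator Q (so that P = expm(Q)).  A sketch restricted to
    diagonalizable P with positive real eigenvalues; the paper's
    time-inhomogeneous conditions are far more general."""
    w, V = np.linalg.eig(P)
    if np.any(np.abs(np.imag(w)) > tol) or np.any(np.real(w) <= tol):
        return False                      # no principal real logarithm here
    Q = (V @ np.diag(np.log(np.real(w))) @ np.linalg.inv(V)).real
    off = Q - np.diag(np.diag(Q))         # off-diagonal rates must be >= 0
    return bool(off.min() >= -tol and np.allclose(Q.sum(axis=1), 0.0, atol=1e-8))

P_good = np.array([[0.9, 0.1],
                   [0.2, 0.8]])           # 2x2 with trace > 1: embeddable
P_bad = np.array([[0.1, 0.9],
                  [0.9, 0.1]])            # negative eigenvalue: no real generator
print(has_homogeneous_generator(P_good), has_homogeneous_generator(P_bad))
```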
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is also typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.
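A bare-bones version of Metropolis MCMC over network weights can be sketched as follows. The one-hidden-unit network, the broad Gaussian prior standing in for the modified Jeffreys prior, and all tuning constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = tanh(2x) + noise.
X = np.linspace(-1, 1, 20)
y = np.tanh(2 * X) + 0.05 * rng.standard_normal(20)

def predict(w, x):
    return w[2] * np.tanh(w[0] * x + w[1])   # w = [w_in, bias, w_out]

def log_post(w):
    resid = y - predict(w, X)
    # Gaussian likelihood + broad Gaussian prior (a stand-in for the
    # modified Jeffreys prior used in the paper).
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2 - 0.5 * np.sum(w ** 2) / 10.0

w, lp = np.zeros(3), log_post(np.zeros(3))
samples = []
for _ in range(5000):
    w_new = w + 0.1 * rng.standard_normal(3)      # random-walk proposal
    lp_new = log_post(w_new)
    if np.log(rng.random()) < lp_new - lp:        # Metropolis accept/reject
        w, lp = w_new, lp_new
    samples.append(w.copy())

post = np.array(samples[2000:])                   # discard burn-in
print(post.mean(axis=0))                          # posterior mean weights
print(post.std(axis=0))                           # residual weight uncertainty
```

The spread of the retained samples is exactly the "full residual uncertainty" the abstract refers to: every prediction inherits a distribution rather than a point estimate.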
Active Inference for Binary Symmetric Hidden Markov Models
NASA Astrophysics Data System (ADS)
Allahverdyan, Armen E.; Galstyan, Aram
2015-10-01
We consider the active maximum a posteriori (MAP) inference problem for hidden Markov models (HMMs), where, given an initial MAP estimate of the hidden sequence, we select certain states in the sequence to label in order to improve the estimation accuracy of the remaining states. We focus on the binary symmetric HMM and employ its known mapping to the 1D Ising model in random fields. From the statistical physics viewpoint, the active MAP inference problem reduces to analyzing the ground state of the 1D Ising model under modified external fields. We develop an analytical approach and obtain a closed-form solution that relates the expected error reduction to the model parameters under the specified active inference scheme. We then use this solution to determine the optimal active inference scheme in terms of error reduction, and examine the relation of those schemes to the heuristic principles of uncertainty reduction and solution unicity.
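The initial MAP estimate in such a setup is the usual Viterbi decode of the binary symmetric HMM. A generic sketch (the flip and noise probabilities are illustrative, and this is the standard algorithm, not the paper's closed-form error analysis):

```python
import numpy as np

def viterbi_binary(obs, q, eps):
    """MAP decode of a binary symmetric HMM: hidden-state flip probability q,
    symmetric observation noise eps."""
    T = np.log(np.array([[1 - q, q], [q, 1 - q]]))          # transition log-probs
    E = np.log(np.array([[1 - eps, eps], [eps, 1 - eps]]))  # emission log-probs
    n = len(obs)
    delta = np.zeros((n, 2))
    psi = np.zeros((n, 2), dtype=int)
    delta[0] = np.log(0.5) + E[:, obs[0]]
    for t in range(1, n):
        scores = delta[t - 1][:, None] + T   # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + E[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):            # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# A sticky chain (q = 0.1) smooths out the isolated noisy observation at index 2.
print(viterbi_binary([0, 0, 1, 0, 0, 1, 1, 1], q=0.1, eps=0.2))
```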
Inferring phenomenological models of Markov processes from data
NASA Astrophysics Data System (ADS)
Rivera, Catalina; Nemenman, Ilya
Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian model selection to identify parsimonious models that fit the data. When applied to synthetic data from the kinetic proofreading (KPR) process, a common mechanism used by cells for increasing the specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description had been noted previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.
Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation
NASA Astrophysics Data System (ADS)
Andreão, Rodrigo Varejão; Boudy, Jérôme
2006-12-01
This work aims at providing new insights on the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform is combined with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared using the same framework. All experiments were carried out on the QT database, which is composed of a representative number of ambulatory recordings of several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular contraction (PVC) detection are comparable to other works reported in the literature, independently of the wavelet type. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performances.
Reduction Of Sizes Of Semi-Markov Reliability Models
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Dan L.
1995-01-01
Trimming technique reduces computational effort by order of magnitude while introducing negligible error. Error bound depends on only three parameters from semi-Markov model: maximum sum of rates for failure transitions leaving any state, maximum average holding time for recovery-mode state, and operating time for system. Error bound computed before any model generated, enabling modeler to decide immediately whether or not model can be trimmed. Trimming procedure specified by precise and easy description, making it easy to include trimming procedure in program generating mathematical models for use in assessing reliability. Typical application of technique in design of digital control systems required to be extremely reliable. In addition to aerospace applications, fault-tolerant design has growing importance in wide range of industrial applications.
CAUSAL MARKOV RANDOM FIELD FOR BRAIN MR IMAGE SEGMENTATION
Razlighi, Qolamreza R.; Orekhov, Aleksey; Laine, Andrew; Stern, Yaakov
2013-01-01
We propose a new Bayesian classifier, based on the recently introduced causal Markov random field (MRF) model, the quadrilateral MRF (QMRF). We use a second-order inhomogeneous anisotropic QMRF to model the prior and likelihood probabilities in the maximum a posteriori (MAP) classifier, named here MAP-QMRF. The joint distribution of the QMRF is given in terms of the product of two-dimensional clique distributions existing in its neighboring structure. Twenty manually labeled human brain MR images are used to train and assess the MAP-QMRF classifier using the jackknife validation method. Comparing the results of the proposed classifier and FreeSurfer on the Dice overlap measure shows an average gain of 1.8%. We have performed a power analysis to demonstrate that this increase in segmentation accuracy substantially reduces the number of samples required to detect a 5% change in the volume of a brain region. PMID:23366607
Comparison of glycosyltransferase families using the profile hidden Markov model.
Kikuchi, Norihiro; Kwon, Yeon-Dae; Gotoh, Masanori; Narimatsu, Hisashi
2003-10-17
In order to investigate the relationship between glycosyltransferase families and their motifs, we classified 47 glycosyltransferase families in the CAZy database into four superfamilies, GTS-A, -B, -C, and -D, using a profile hidden Markov model method. On the basis of this classification and the similarity between GTS-A and the nucleotidylyltransferase family catalyzing the synthesis of nucleotide-sugars, we proposed that ancient oligosaccharides might have been synthesized by the ancestor of GTS-B, whereas the ancestor of GTS-A might have been a gene encoding the synthesis of the nucleotide-sugar donor that subsequently evolved into glycosyltransferases catalyzing the synthesis of divergent carbohydrates. We also suggest that the divergent evolution of each superfamily in the corresponding subcellular component has increased the complexity of eukaryotic carbohydrate structures. PMID:14521949
A hidden Markov model for space-time precipitation
Zucchini, W.; Guttorp, P.
1991-08-01
Stochastic models for precipitation events in space and time over mesoscale spatial areas have important applications in hydrology, both as input to runoff models and as parts of general circulation models (GCMs) of global climate. A family of multivariate models for the occurrence/nonoccurrence of precipitation at N sites is constructed by assuming a different probability of events at the sites for each of a number of unobservable climate states. The climate process is assumed to follow a Markov chain. Simple formulae for first- and second-order parameter functions are derived, and used to find starting values for a numerical maximization of the likelihood. The method is illustrated by applying it to data for one site in Washington and to data for a network in the Great Plains.
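The generative side of such a model is easy to sketch: a hidden climate state follows a Markov chain, and rain occurrence at each site is drawn independently given that state. All transition and rain probabilities below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hidden climate states (0 = dry regime, 1 = wet regime) follow a Markov
# chain; rain occurrence at N = 3 sites is conditionally independent given
# the state.
A = np.array([[0.8, 0.2],     # dry -> (dry, wet)
              [0.3, 0.7]])    # wet -> (dry, wet)
p_rain = np.array([[0.1, 0.05, 0.15],   # per-site rain probability, dry regime
                   [0.7, 0.60, 0.80]])  # per-site rain probability, wet regime

T, state = 1000, 0
occurrence = np.zeros((T, 3), dtype=int)
for t in range(T):
    occurrence[t] = rng.random(3) < p_rain[state]
    state = rng.choice(2, p=A[state])

print(occurrence.mean(axis=0))   # marginal wet-day fraction per site
```

Because all sites share the same hidden state, the simulated occurrences are spatially correlated even though they are conditionally independent, which is the point of the construction.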
NASA Astrophysics Data System (ADS)
Hong, Liang; Liu, Cun; Yang, Kun; Deng, Ming
2013-07-01
High resolution satellite imagery (HRSI) has higher spatial resolution but fewer spectral bands, so phenomena of "the same object with different spectra" and "different objects with the same spectrum" arise. The objective of this paper is to utilize features of HRSI extracted by the wavelet transform (WT) for segmentation. The WT provides the spatial and spectral characteristics of a pixel along with its neighbors. An object-oriented Markov random field model in the wavelet domain is proposed in order to segment HRSI. The proposed method is made up of three blocks: (1) WT-based feature extraction, which aims to exploit the spatial and frequency information of the pixels in the original spectral bands; (2) over-segmentation object generation, where the mean-shift algorithm is employed to obtain over-segmentation objects; (3) classification based on the object-oriented Markov random field model. First, an object adjacency graph (OAG) is constructed on the over-segmentation objects. Second, an MRF model is defined on the OAG, in which the WT-based pixel features are modeled in the feature field model, while the neighbor system, potential cliques, and energy functions of the OAG are exploited in the labeling model. Experiments are conducted on one HRSI dataset of QuickBird images. We evaluate and compare the proposed approach with the well-known commercial software eCognition (an object-based analysis approach) and pixel-based maximum likelihood (ML) classification. Experimental results show that the proposed method clearly outperforms the other methods.
NASA Astrophysics Data System (ADS)
Rehmat, Abeera Parvaiz
As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirements of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning, embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995), promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed methods study was conducted with 98 fourth grade students. The study utilized STEM content assessments, a standardized critical thinking test, a STEM attitude survey, a PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups with regard to their content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. Across the overall data set, students described the PBL environment as highly interactive, prompting them to employ multiple approaches, including design and engineering, to solve the problem.
Threshold partitioning of sparse matrices and applications to Markov chains
Choi, Hwajeong; Szyld, D.B.
1996-12-31
It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) classical block methods, such as block Gauss-Seidel; (2) preconditioned GMRES, where a block diagonal preconditioner is used; (3) the iterative aggregation method (also called aggregation/disaggregation), where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, thus adding little computational effort to the overall solution.
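Classical block Gauss-Seidel, the first of the three methods, can be sketched generically as follows. The small diagonally dominant matrix and the two-block partition are illustrative, not a PABLO-derived ordering:

```python
import numpy as np

def block_gauss_seidel(A, b, blocks, iters=50):
    """Block Gauss-Seidel over a given index partition (a generic sketch;
    the paper's point is that PABLO-style orderings make the diagonal
    blocks dense, which speeds up exactly this kind of iteration)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for idx in blocks:
            idx = np.asarray(idx)
            # x_I <- A_II^{-1} (b_I - sum_{J != I} A_IJ x_J)
            r = b[idx] - A[idx] @ x + A[np.ix_(idx, idx)] @ x[idx]
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r)
    return x

A = np.array([[4.0, 1.0, 0.1, 0.0],
              [1.0, 3.0, 0.0, 0.1],
              [0.1, 0.0, 5.0, 1.0],
              [0.0, 0.1, 1.0, 4.0]])   # dense 2x2 diagonal blocks, weak coupling
b = np.ones(4)
x = block_gauss_seidel(A, b, blocks=[[0, 1], [2, 3]])
print(np.allclose(A @ x, b, atol=1e-8))
```

The weaker the off-diagonal blocks are relative to the diagonal blocks, the faster the sweep converges, which is why the orderings described in the abstract help.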
NASA Astrophysics Data System (ADS)
Zhang, Hua; Harter, Thomas; Sivakumar, Bellie
2006-06-01
examined, the third moment of the traveltime pdf varies from negatively skewed to strongly positively skewed. We also show that the Markov chain approach may give significantly different traveltime distributions when compared to the more commonly used Gaussian random field approach, even when the first- and second-order moments in the geostatistical distribution of the lnK field are identical. The choice of the appropriate geostatistical model is therefore critical in the assessment of nonpoint source transport, and uncertainty about that choice must be considered in evaluating the results.
NASA Astrophysics Data System (ADS)
Amir Rahmani, Mohammad; Zarghami, Mahdi
2013-03-01
The projections of climate change by General Climate Models (GCMs) are uncertain. Hence, combining the results of several GCMs is now an effective way to tackle this uncertainty. To evaluate the performance of GCMs, a new measure based on the similarity of the projections is defined. In defining this measure the Ordered Weighted Averaging (OWA) approach is used. The relative weights of the GCM projections in different stations, to be aggregated by the OWA operator, are obtained by regular increasing monotone fuzzy quantifiers, which model the risk preferences of the decision maker. To show the effectiveness of the approach, climate change in the northwestern provinces of Iran is studied using the data of 15 synoptic stations. The LARS-WG weather generator is used to downscale the GCMs under three emission scenarios (A2, A1B and B1) for the period 2011 to 2030. The combined results, using the similarity values, indicate a -0.1 °C to +4.5 °C change in temperature in the region. Precipitation is expected to increase in summer and fall. Changes in winter precipitation depend on the location, while spring precipitation would change only moderately. The results of this study show the usefulness of the OWA operator, which considers the risk attitudes of the decision maker. This approach could help water and environmental managers to tackle climate uncertainties.
Markov Chain Analysis of Musical Dice Games
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. In contrast to human languages, entropy dominates over redundancy in musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
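Two of the quantities mentioned, the entropy rate and the first passage times of a transition matrix, are short computations. The 3-state "note" matrix below is a toy, not one of the 804 encoded pieces:

```python
import numpy as np

# Toy 3-"note" transition matrix (illustrative only).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = V[:, np.argmin(np.abs(w - 1))].real
pi /= pi.sum()

# Entropy rate in bits per note.
H = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(3) for j in range(3) if P[i, j] > 0)

def mfpt_to(j):
    """Mean first passage times into note j: solve (I - Q) m = 1, where Q
    is P with row and column j deleted."""
    keep = [i for i in range(3) if i != j]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, m))

print(round(H, 3), mfpt_to(0))
```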
Hidden Markov models for stochastic thermodynamics
NASA Astrophysics Data System (ADS)
Bechhoefer, John
2015-07-01
The formalism of state estimation and hidden Markov models can simplify and clarify the discussion of stochastic thermodynamics in the presence of feedback and measurement errors. After reviewing the basic formalism, we use it to shed light on a recent discussion of phase transitions in the optimized response of an information engine, for which measurement noise serves as a control parameter. The HMM formalism also shows that the value of additional information displays a maximum at intermediate signal-to-noise ratios. Finally, we discuss how systems open to information flow can apparently violate causality; the HMM formalism can quantify the performance gains due to such violations.
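The basic state-estimation step of the HMM formalism is the forward (filtering) recursion: predict with the transition matrix, then correct with the noisy measurement likelihood. A generic sketch with illustrative parameters:

```python
import numpy as np

def forward_filter(obs, T, E, p0):
    """Recursive Bayesian estimate P(x_t | y_1..y_t) for an HMM: predict
    with the transition matrix T, correct with the emission likelihood E."""
    p = p0 * E[:, obs[0]]
    p /= p.sum()
    beliefs = [p.copy()]
    for yt in obs[1:]:
        p = (T.T @ p) * E[:, yt]   # predict, then weight by the noisy measurement
        p /= p.sum()
        beliefs.append(p.copy())
    return np.array(beliefs)

T = np.array([[0.9, 0.1],
              [0.1, 0.9]])         # sticky two-state system
E = np.array([[0.8, 0.2],
              [0.2, 0.8]])         # measurement error rate 0.2
beliefs = forward_filter([0, 0, 1, 1, 1], T, E, np.array([0.5, 0.5]))
print(beliefs[-1])                 # posterior over the hidden state at the end
```

The measurement error rate in E plays the role of the control parameter discussed in the abstract: as it grows, the filtered beliefs become less informative about the true state.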
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
Lamiable, A; Thevenet, P; Tufféry, P
2016-08-01
Hidden Markov model derived structural alphabets are a probabilistic framework in which the complete conformational space of a peptidic chain is described in terms of probability distributions that can be sampled to identify conformations of largest probabilities. Here, we assess how three strategies to sample sub-optimal conformations (Viterbi k-best, forward backtracking, and a taboo sampling approach) can lead to the efficient generation of peptide conformations. We show that diversity of sampling is essential to compensate for biases introduced in the estimates of the probabilities, and we find that only the forward backtracking and taboo sampling strategies can efficiently generate native or near-native models. Finally, we also find such approaches are as efficient as former protocols, while being one order of magnitude faster, opening the door to the large-scale de novo modeling of peptides and mini-proteins. © 2016 Wiley Periodicals, Inc. PMID:27317417
Robust extended dissipative control for sampled-data Markov jump systems
NASA Astrophysics Data System (ADS)
Shen, Hao; Park, Ju H.; Zhang, Lixian; Wu, Zheng-Guang
2014-08-01
This paper investigates the problem of sampled-data extended dissipative control for uncertain Markov jump systems. The systems considered are transformed into Markov jump systems with polytopic uncertainties and sawtooth delays by using an input delay approach. The focus is on the design of a mode-independent sampled-data controller such that the resulting closed-loop system is mean-square exponentially stable with a given decay rate and extended dissipative. A novel exponential stability criterion and an extended dissipativity condition are established by proposing a new integral inequality. The reduced conservatism of the criteria is demonstrated by two numerical examples. Furthermore, a sufficient condition for the existence of a desired mode-independent sampled-data controller is obtained by solving a convex optimisation problem. Finally, a resistance, inductance and capacitance (RLC) series circuit is employed to illustrate the effectiveness of the proposed approach.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
NASA Astrophysics Data System (ADS)
Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant
2015-07-01
Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust features (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to the SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.
A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.
Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing
2015-01-01
Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing. PMID:26710073
Prediction coefficient estimation in Markov random fields for iterative x-ray CT reconstruction
NASA Astrophysics Data System (ADS)
Wang, Jiao; Sauer, Ken; Thibault, Jean-Baptiste; Yu, Zhou; Bouman, Charles
2012-02-01
Bayesian estimation is a statistical approach for incorporating prior information through the choice of an a priori distribution for a random field. A priori image models in Bayesian image estimation are typically low-order Markov random fields (MRFs), effectively penalizing only differences among immediately neighboring voxels. This limits spectral description to a crude low-pass model. For applications where more flexibility in spectral response is desired, potential benefit exists in models which accord higher a priori probability to content in higher frequencies. Our research explores the potential of larger neighborhoods in MRFs to raise the number of degrees of freedom in spectral description. Similarly to classical filter design, the MRF coefficients may be chosen to yield a desired pass-band/stop-band characteristic shape in the a priori model of the images. In this paper, we present an alternative design method, where high-quality sample images are used to estimate the MRF coefficients by fitting them into the spatial correlation of the given ensemble. This method allows us to choose weights that increase the probability of occurrence of strong components at particular spatial frequencies. This allows direct adaptation of the MRFs for different tissue types based on sample images with different frequency content. In this paper, we consider particularly the preservation of detail in bone structure in X-ray CT. Our results show that MRF design can be used to obtain bone emphasis similar to that of conventional filtered back-projection (FBP) with a bone kernel.
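The correlation-fitting idea has a simple 1-D analogue: estimate prediction coefficients from the sample autocorrelation of training signals via the Yule-Walker normal equations (the 2-D MRF case fits clique coefficients the same way, over neighborhoods instead of lags). The AR(2) example below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training signal with known correlation structure: an AR(2) process as a
# 1-D stand-in for the 2-D sample images and MRF neighborhoods of the paper.
n = 20000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + rng.standard_normal()

def autocov(x, k):
    """Biased sample autocovariance at lag k."""
    xc = x - x.mean()
    return np.dot(xc[: len(x) - k], xc[k:]) / len(x)

# Fit the two prediction coefficients to the sample autocorrelation
# (Yule-Walker normal equations).
R = np.array([[autocov(x, 0), autocov(x, 1)],
              [autocov(x, 1), autocov(x, 0)]])
r = np.array([autocov(x, 1), autocov(x, 2)])
coeffs = np.linalg.solve(R, r)
print(coeffs)   # close to the generating coefficients [0.6, -0.2]
```

Fitting coefficients to samples with strong high-frequency content (e.g., bone detail) shapes the prior's spectral response toward those frequencies, which is the adaptation the abstract describes.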
Discovering short linear protein motif based on selective training of profile hidden Markov models.
Song, Tao; Gu, Hong
2015-07-21
Short linear motifs (SLiMs) in proteins are relatively conserved sequence patterns within disordered regions of proteins, typically 3-10 amino acids in length. They play an important role in mediating protein-protein interactions. Discovering SLiMs by computational methods has attracted increasing attention, most of it based on regular expressions and profiles. In this paper, a de novo motif discovery method is proposed based on profile hidden Markov models (HMMs), which can not only provide the emission probabilities of amino acids in the defined positions of SLiMs, but also model the undefined positions. We adopted ordered-region masking and relative local conservation (RLC) masking to improve the signal-to-noise ratio of the query sequences, while applying evolutionary weighting so that sequences important in the evolutionary process receive more attention during the selective training of profile HMMs. The experimental results show that our method and the profile-based method return different subsets within a SLiMs dataset, and that the performance of the two approaches is equivalent on a more realistic discovery dataset. Profile HMM-based motif discovery methods complement the existing methods and provide another way to analyze SLiMs. PMID:25791288
Generalised nonlinear l2-l∞ filtering of discrete-time Markov jump descriptor systems
NASA Astrophysics Data System (ADS)
Li, Lin; Zhong, Lei
2014-03-01
This paper is devoted to the l2-l∞ filter design problem for nonlinear discrete-time Markov jump descriptor systems subject to partially unknown transition probabilities. The partially unknown transition probabilities are modelled via polytopic uncertainties. The objective is to propose a generalised nonlinear full-order filter design method, such that the resulting filtering error system is regular, causal, and stochastically stable, and a prescribed l2-l∞ attenuation level is satisfied. For the autonomous discrete-time descriptor system subject to a Lipschitz nonlinear condition, a mode-dependent stability criterion is established by introducing some slack matrix variables. It not only ensures the regularity, causality, and stochastic stability of the system, but also guarantees that the considered system has a unique solution. Based on this criterion, a sufficient condition in terms of linear matrix inequalities (LMIs) is derived, such that the resulting filtering error system is regular, causal, and stochastically stable while satisfying a given l2-l∞ performance index. Further, the nonlinear mode-dependent l2-l∞ filter design method is proposed, and by solving a set of LMIs, the desired filter gain matrices are explicitly given. Finally, a numerical example is included to illustrate the effectiveness of the proposed approach.
Protein modeling with hybrid Hidden Markov Model/Neural network architectures
Baldi, P.; Chauvin, Y.
1995-12-31
Hidden Markov Models (HMMs) are useful in a number of tasks in computational molecular biology, and in particular to model and align protein families. We argue that HMMs are somewhat optimal within a certain modeling hierarchy. Single first order HMMs, however, have two potential limitations: a large number of unstructured parameters, and a built-in inability to deal with long-range dependencies. Hybrid HMM/Neural Network (NN) architectures attempt to overcome these limitations. In a hybrid HMM/NN, the HMM parameters are computed by an NN. This provides a reparametrization that allows for flexible control of model complexity, and incorporation of constraints. The approach is tested on the immunoglobulin family. A hybrid model is trained, and a multiple alignment derived, with less than a fourth of the number of parameters used with previous single HMMs. To capture dependencies, however, one must resort to a larger hybrid model class, where the data is modeled by multiple HMMs. The parameters of the HMMs, and their modulation as a function of input or context, are again calculated by an NN.
Extracting duration information in a picture category decoding task using hidden Markov Models
Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y; Schoenfeld, Mircea A; Knight, Robert T; Rose, Georg
2016-01-01
Objective Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain–computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications. PMID:26859831
Extracting duration information in a picture category decoding task using hidden Markov Models
NASA Astrophysics Data System (ADS)
Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg
2016-04-01
Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without additional training. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI applications.
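The duration decoding described above rests on the behaviour of Viterbi paths. A generic log-domain Viterbi decoder can be sketched as follows; the two-state toy parameters are invented for illustration, and the run lengths of the decoded path play the role of the duration readout.

```python
import numpy as np
from itertools import groupby

def viterbi(obs, A, B, pi):
    """Most likely hidden-state path, computed in the log domain."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA     # scores[i, j]: transition i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy two-state model (all numbers invented for illustration).
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B = np.array([[0.85, 0.15], [0.15, 0.85]])
pi = np.array([0.5, 0.5])

path = viterbi([0, 0, 0, 1, 1, 1], A, B, pi)
# Run lengths of the decoded path act as a crude "duration" readout.
durations = [len(list(g)) for _, g in groupby(path)]
```

Here the decoded path follows the observations, and the lengths of its constant-state runs carry timing information beyond the class label alone.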
Composition of Web Services Using Markov Decision Processes and Dynamic Programming
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
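A minimal value-iteration sketch on a toy MDP (states, actions, transitions, and rewards are all invented here, and far smaller than the WSC problems above) shows the Bellman backups that the dynamic-programming algorithms in the paper rely on.

```python
import numpy as np

gamma = 0.9
# P[a, s, t] = P(next state t | current state s, action a).
P = np.array([
    [[0.7, 0.3, 0.0], [0.0, 0.4, 0.6], [0.0, 0.0, 1.0]],   # action 0
    [[0.2, 0.8, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],   # action 1
])
# R[s, a] = expected immediate reward; state 2 is absorbing with reward 0.
R = np.array([[1.0, 0.5], [0.0, 2.0], [0.0, 0.0]])

V = np.zeros(3)
for _ in range(500):                  # Bellman optimality backups
    Q = R + gamma * np.einsum('ast,t->sa', P, V)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)             # greedy policy for the converged Q
```

With a discount of 0.9, 500 sweeps contract the error far below numerical tolerance; policy iteration would typically reach the same greedy policy in far fewer iterations, which is the paper's observed advantage.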
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of subsystems of the system. The findings of the paper are discussed with the plant personnel to adopt and practice suitable maintenance policies/strategies to enhance the performance of the urea synthesis system of the fertilizer plant.
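The solution procedure, first-order Chapman-Kolmogorov equations integrated with a fourth-order Runge-Kutta scheme, can be sketched for a single two-state (up/down) subsystem; the failure and repair rates below are illustrative, not the plant's, and the long-run availability can be checked against the analytic value μ/(λ+μ).

```python
import numpy as np

# Two-state (up/down) subsystem with illustrative rates (per hour).
lam, mu = 0.01, 0.1                  # failure and repair rates
Q = np.array([[-lam, lam],
              [  mu, -mu]])          # generator of the birth-death chain

def rk4_step(p, h):
    """One fourth-order Runge-Kutta step of dp/dt = p Q."""
    f = lambda v: v @ Q
    k1 = f(p)
    k2 = f(p + h / 2 * k1)
    k3 = f(p + h / 2 * k2)
    k4 = f(p + h * k3)
    return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.0])             # start in the working state
for _ in range(5000):                # integrate to t = 500 h
    p = rk4_step(p, 0.1)

steady_availability = mu / (lam + mu)  # analytic long-run availability
```

At t = 500 h the transient (time constant 1/(λ+μ) ≈ 9 h) has fully decayed, so the integrated up-state probability matches the analytic long-run availability.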
Markov Model of Accident Progression at Fukushima Daiichi
Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.
2012-11-11
On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of Units 1 and 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation, was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: It is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states. It also readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.
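A toy discrete-time analogue of such a model (states, labels, and probabilities are invented here, not the values of the BNL study) shows how assumptions about cooling-system repair change the probability of reaching a degraded absorbing state.

```python
import numpy as np

def transition_matrix(p_repair):
    """One-step transition matrix; 0 = cooled, 1 = damaged, 2 = melted."""
    p_fail, p_melt = 0.05, 0.10      # invented illustrative probabilities
    return np.array([
        [1 - p_fail, p_fail,                0.0],
        [p_repair,   1 - p_repair - p_melt, p_melt],
        [0.0,        0.0,                   1.0],   # melted is absorbing
    ])

def state_probs(p_repair, steps):
    """Propagate the state distribution for a given number of steps."""
    p = np.array([1.0, 0.0, 0.0])    # start with the core cooled
    P = transition_matrix(p_repair)
    for _ in range(steps):
        p = p @ P
    return p

low = state_probs(0.05, 100)   # unreliable cooling recovery
high = state_probs(0.50, 100)  # reliable cooling recovery
```

This is the kind of sensitivity question the Markov formulation makes cheap: rerunning the propagation under different repair rates directly quantifies how much a reliable cooling recovery suppresses the absorbing (melted) state.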
NASA Astrophysics Data System (ADS)
Casanova, David
2014-04-01
Second-order corrections to the restricted active space configuration interaction (RASCI) with the hole and particle truncation of the excitation operator are developed. Theoretically, the computational cost of the implemented perturbative approach, abbreviated as RASCI(2), grows like its single reference counterpart in MP2. Two different forms of RASCI(2) have been explored, that is the generalized Davidson-Kapuy and the Epstein-Nesbet partitions of the Hamiltonian. The preliminary results indicate that the use of energy level shift of a few tenths of a Hartree might systematically improve the accuracy of the RASCI(2) energies. The method has been tested in the computation of the ground state energy profiles along the dissociation of the hydrogen fluoride and N2 molecules, the computation of correlation energy in the G2/97 molecular test set, and in the computation of excitation energies to low-lying states in small organic molecules.
Casanova, David
2014-04-14
Second-order corrections to the restricted active space configuration interaction (RASCI) with the hole and particle truncation of the excitation operator are developed. Theoretically, the computational cost of the implemented perturbative approach, abbreviated as RASCI(2), grows like its single reference counterpart in MP2. Two different forms of RASCI(2) have been explored, that is the generalized Davidson-Kapuy and the Epstein-Nesbet partitions of the Hamiltonian. The preliminary results indicate that the use of energy level shift of a few tenths of a Hartree might systematically improve the accuracy of the RASCI(2) energies. The method has been tested in the computation of the ground state energy profiles along the dissociation of the hydrogen fluoride and N2 molecules, the computation of correlation energy in the G2/97 molecular test set, and in the computation of excitation energies to low-lying states in small organic molecules.
Unmixing hyperspectral images using Markov random fields
Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves
2011-03-14
This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled thanks to Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.
Sunspots and ENSO relationship using Markov method
NASA Astrophysics Data System (ADS)
Hassan, Danish; Iqbal, Asif; Ahmad Hassan, Syed; Abbas, Shaheen; Ansari, Muhammad Rashid Kamal
2016-01-01
Various techniques have been used to establish the existence of significant relations between the number of Sunspots and different terrestrial climate parameters such as rainfall, temperature, dewdrops, aerosol and ENSO. Improved understanding and modelling of Sunspot variations can help to explore the information about the related variables. This study uses a Markov chain method to find the relations between monthly Sunspot and ENSO data of two epochs (1996-2009 and 1950-2014). The corresponding transition matrices of both data sets appear similar, as qualitatively evaluated by the high values of 2-dimensional correlation found between the transition matrices of ENSO and Sunspots. The associated transition diagrams show that each state communicates with the others. The presence of stronger self-communication (between the same states) confirms periodic behaviour among the states. Moreover, the closeness found in the expected number of visits from one state to another shows the existence of a possible relation between Sunspot and ENSO data. Furthermore, validation of the dependency and stationarity tests endorses the applicability of the Markov chain analyses to the Sunspot and ENSO data. This shows that a significant relation between Sunspot and ENSO data exists. This study can be useful to explore the influence of ENSO-related local climatic variability.
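The basic computation, estimating a transition matrix from a categorical series and correlating two such matrices, can be sketched as follows; the 3-state sequence is a toy example, not the Sunspot or ENSO data.

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Row-normalized one-step transition counts of a categorical series."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0            # guard against states never visited
    return C / rows

def matrix_correlation(T1, T2):
    """2-dimensional correlation between two transition matrices."""
    return np.corrcoef(T1.ravel(), T2.ravel())[0, 1]

seq = [0, 0, 1, 2, 1, 0, 0, 1, 1, 2, 2, 0]   # toy 3-state series
T = transition_matrix(seq, 3)
```

Applying `matrix_correlation` to the transition matrices estimated from two different series (here one would use the Sunspot and ENSO state sequences) gives the 2-dimensional correlation used to judge their similarity.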
Li, Xiaohong; Ni, Siyu; Zhou, Xingping
2015-02-01
The aim of this study is to prepare highly ordered porous anodic alumina (PAA) with large pore sizes (> 200 nm) by an improved two-step anodization approach which combines a first hard anodization in an oxalic acid-water-ethanol system and a second mild anodization in a phosphoric acid-water-ethanol system. The surface morphology and elemental composition of PAA are characterized by field emission scanning electron microscopy (FESEM) and energy-dispersive X-ray spectrometry (EDS). The effect of the matching of the two anodizing voltages on the regularity of the pore arrangement is evaluated and discussed. Moreover, the pore formation mechanism is also discussed. The results show that the nanopore arrays on all the PAA samples are in a highly regular arrangement and the pore size is adjustable in the range of 200-300 nm. EDS analysis suggests that the main elements of the as-prepared PAA are oxygen, aluminum and a small amount of phosphorus. Furthermore, the voltage in the first anodization must match well with that in the second anodization, which has a significant influence on the PAA regularity. The addition of ethanol to the electrolytes effectively accelerates the diffusion of the heat that evolves from the sample, and decreases the steady current to keep the steady growth of the PAA film. The improved two-step anodization approach in this study breaks through the restriction of small pore size in oxalic acid and overcomes the drawbacks of irregular pore morphology in phosphoric acid, and is an efficient way to fabricate large-diameter ordered PAA. PMID:26353721
Benchmarking of a Markov multizone model of contaminant transport.
Jones, Rachael M; Nicas, Mark
2014-10-01
A Markov chain model previously applied to the simulation of advection and diffusion of gaseous contaminants is extended to three-dimensional transport of particulates in indoor environments. The model framework and assumptions are described. The performance of the Markov model is benchmarked against simple conventional models of contaminant transport. The Markov model is able to replicate elutriation predictions of particle deposition with distance from a point source, and the stirred settling of respirable particles. Comparisons with turbulent eddy diffusion models indicate that the Markov model exhibits numerical diffusion in the first seconds after release, but over time accurately predicts mean lateral dispersion. The Markov model exhibits some instability with respect to the grid length aspect ratio when turbulence is incorporated by way of the turbulent diffusion coefficient and advection is present. However, the magnitude of prediction error may be tolerable for some applications and can be avoided by incorporating turbulence by way of fluctuating velocity (e.g. turbulence intensity). PMID:25143517
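A one-dimensional sketch of the underlying idea, a Markov chain over grid cells whose step probabilities encode advection and diffusion, might look like this; the cell count and step probabilities are illustrative, not values from the benchmark.

```python
import numpy as np

n = 50                              # number of grid cells
p_right, p_left = 0.3, 0.1          # advection biases the walk rightward
p_stay = 1 - p_right - p_left

# Build the cell-to-cell transition matrix with reflecting boundaries.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = p_stay
    P[i, min(i + 1, n - 1)] += p_right
    P[i, max(i - 1, 0)] += p_left

c = np.zeros(n)
c[0] = 1.0                          # unit release at the left boundary
for _ in range(100):                # propagate the concentration field
    c = c @ P

mean_position = (np.arange(n) * c).sum()
```

Each step advances the concentration field by one matrix-vector product; mass is conserved exactly, and the mean position drifts at roughly the net advection rate (0.3 − 0.1 = 0.2 cells per step), slightly inflated by the reflecting left boundary.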
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
ERIC Educational Resources Information Center
Bartolucci, Francesco; Solis-Trapala, Ivonne L.
2010-01-01
We demonstrate the use of a multidimensional extension of the latent Markov model to analyse data from studies with repeated binary responses in developmental psychology. In particular, we consider an experiment based on a battery of tests which was administered to pre-school children, at three time periods, in order to measure their inhibitory…
Urbina, J A; Moreno, B; Arnold, W; Taron, C H; Orlean, P; Oldfield, E
1998-09-01
We report a simple new nuclear magnetic resonance (NMR) spectroscopic method to investigate order and dynamics in phospholipids in which inter-proton pair order parameters are derived by using high resolution 13C cross-polarization/magic angle spinning (CP/MAS) NMR combined with 1H dipolar echo preparation. The resulting two-dimensional NMR spectra permit determination of the motionally averaged interpair second moment for protons attached to each resolved 13C site, from which the corresponding interpair order parameters can be deduced. A spin-lock mixing pulse before cross-polarization enables the detection of spin diffusion amongst the different regions of the lipid molecules. The method was applied to a variety of model membrane systems, including 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC)/sterol and 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC)/sterol model membranes. The results agree well with previous studies using specifically deuterium labeled or predeuterated phospholipid molecules. It was also found that efficient spin diffusion takes place within the phospholipid acyl chains, and between the glycerol backbone and choline headgroup of these molecules. The experiment was also applied to biosynthetically 13C-labeled ergosterol incorporated into phosphatidylcholine bilayers. These results indicate highly restricted motions of both the sterol nucleus and the aliphatic side chain, and efficient spin exchange between these structurally dissimilar regions of the sterol molecule. Finally, studies were carried out in the lamellar liquid crystalline (L alpha) and inverted hexagonal (HII) phases of 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE). These results indicated that phosphatidylethanolamine lamellar phases are more ordered than the equivalent phases of phosphatidylcholines. In the HII (inverted hexagonal) phase, despite the increased translational freedom, there is highly constrained packing of the lipid molecules, particularly in
Non-Markov stochastic processes satisfying equations usually associated with a Markov process
NASA Astrophysics Data System (ADS)
McCauley, J. L.
2012-04-01
There are non-Markov Ito processes that satisfy the Fokker-Planck, backward time Kolmogorov, and Chapman-Kolmogorov equations. These processes are non-Markov in that they may remember an initial condition formed at the start of the ensemble. Some may even admit 1-point densities that satisfy a nonlinear 1-point diffusion equation. However, these processes are linear: the Fokker-Planck equation for the conditional density (the 2-point density) is linear. The memory may be in the drift coefficient (representing a flow), in the diffusion coefficient, or in both. We illustrate the phenomena via exactly solvable examples. In the last section we show how such memory may appear in cooperative phenomena.
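An illustrative simulation of such memory (the specific drift is an invented example, not one of the paper's exactly solvable cases) is an Ito process whose drift coefficient depends on the initial condition x0: conditioned on x0 the dynamics look like ordinary drift-diffusion, but the ensemble as a whole remembers its starting point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch of an Ito process whose drift remembers x0.
n_paths, n_steps, dt = 50_000, 100, 0.01
x0 = rng.choice([-1.0, 1.0], size=n_paths)   # random initial ensemble
x = x0.copy()
for _ in range(n_steps):
    drift = 0.5 * x0                          # memory of the start
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

# Conditional mean: E[x_T | x0] = x0 + 0.5 * x0 * T, with T = 1 here.
cond_mean_pos = x[x0 > 0].mean()
```

The conditional mean of paths started at x0 = +1 relaxes to 1.5 as predicted, while knowing only the current value x (without x0) is not enough to predict the future drift, which is exactly the non-Markov character described above.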
Markov and non-Markov processes in complex systems by the dynamical information entropy
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Gafarov, F. M.
1999-12-01
We consider the Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy alternation, correlation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), have been discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium, psychology (short-term numeral and pattern human memory and the effect of stress on the dynamical tapping test), random dynamics of RR-intervals in human ECG (the problem of diagnosis of various diseases of the human cardiovascular system), and chaotic dynamics of the parameters of financial markets and ecological systems.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
Bayesian comparison of Markov models of molecular dynamics with detailed balance constraint
NASA Astrophysics Data System (ADS)
Bacallado, Sergio; Chodera, John D.; Pande, Vijay
2009-07-01
Discrete-space Markov models are a convenient way of describing the kinetics of biomolecules. The most common strategies used to validate these models employ statistics from simulation data, such as the eigenvalue spectrum of the inferred rate matrix, which are often associated with large uncertainties. Here, we propose a Bayesian approach, which makes it possible to differentiate between models at a fixed lag time making use of short trajectories. The hierarchical definition of the models allows one to compare instances with any number of states. We apply a conjugate prior for reversible Markov chains, which was recently introduced in the statistics literature. The method is tested in two different systems, a Monte Carlo dynamics simulation of a two-dimensional model system and molecular dynamics simulations of the terminally blocked alanine dipeptide.
Hidden Markov Models for Zero-Inflated Poisson Counts with an Application to Substance Use
DeSantis, Stacia M.; Bandyopadhyay, Dipankar
2011-01-01
Paradigms for substance abuse cue-reactivity research involve short term pharmacological or stressful stimulation designed to elicit stress and craving responses in cocaine-dependent subjects. It is unclear as to whether stress induced from participation in such studies increases drug-seeking behavior. We propose a 2-state Hidden Markov model to model the number of cocaine abuses per week before and after participation in a stress- and cue-reactivity study. The hypothesized latent state corresponds to ‘high’ or ‘low’ use. To account for a preponderance of zeros, we assume a zero-inflated Poisson model for the count data. Transition probabilities depend on the prior week’s state, fixed demographic variables, and time-varying covariates. We adopt a Bayesian approach to model fitting, and use the conditional predictive ordinate statistic to demonstrate that the zero-inflated Poisson hidden Markov model outperforms other models for longitudinal count data. PMID:21538455
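The emission side of such a model can be sketched directly: a zero-inflated Poisson pmf and the per-state likelihoods it assigns to weekly counts. The two-state parameters below are invented for illustration, not fitted values from the study.

```python
import numpy as np
from math import exp, factorial

def zip_pmf(k, lam, p_zero):
    """Zero-inflated Poisson pmf: extra point mass p_zero at k = 0."""
    pois = exp(-lam) * lam ** k / factorial(k)
    return p_zero * (k == 0) + (1 - p_zero) * pois

# Hypothetical hidden states: 0 = 'low' use, 1 = 'high' use.
params = {0: (0.5, 0.6), 1: (5.0, 0.1)}   # state -> (lambda, p_zero)

def emission_matrix(counts):
    """Likelihood of each weekly count under each hidden state."""
    return np.array([[zip_pmf(k, *params[s]) for s in (0, 1)]
                     for k in counts])

# Weekly cocaine-use counts (toy data): two abstinent weeks, then use.
E = emission_matrix([0, 0, 7, 4])
```

These per-state likelihoods are exactly what the forward-backward or Viterbi machinery of an HMM consumes; zero weeks favour the 'low' state because of the inflated mass at zero, while large counts favour the 'high' state.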
Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas
2015-06-07
The extended second order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator in combination with core-valence separation (CVS) approximation is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third order approach CVS-ADC(3) are reported. Therefore, the CVS approximation has been applied to the ADC(3) working equations and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately from each other, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated with respect to a set of small and middle-sized organic molecules. Therefore, the results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete basis set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of cartesian basis functions and the treatment of orbital relaxation effects are analyzed in this work as well as computational timings. It turns out that at the CVS-ADC(3) level, the results are not further improved compared to CVS-ADC(2)-x and experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken. While CVS-ADC(3) overestimates the core excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an averaged underestimation of −0.22% ± 0.12%. Eventually, the best agreement with experiments can be achieved using the CVS-ADC(2)-x method in combination with a diffuse cartesian basis set at least at the triple-ζ level.
NASA Astrophysics Data System (ADS)
Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas
2015-06-01
The extended second order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator in combination with core-valence separation (CVS) approximation is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third order approach CVS-ADC(3) are reported. Therefore, the CVS approximation has been applied to the ADC(3) working equations and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately from each other, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated with respect to a set of small and middle-sized organic molecules. Therefore, the results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete basis set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of cartesian basis functions and the treatment of orbital relaxation effects are analyzed in this work as well as computational timings. It turns out that at the CVS-ADC(3) level, the results are not further improved compared to CVS-ADC(2)-x and experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken. While CVS-ADC(3) overestimates the core excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an averaged underestimation of -0.22% ± 0.12%. Eventually, the best agreement with experiments can be achieved using the CVS-ADC(2)-x method in combination with a diffuse cartesian basis set at least at the triple-ζ level.
Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas
2015-06-01
The extended second order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator in combination with core-valence separation (CVS) approximation is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third order approach CVS-ADC(3) are reported. Therefore, the CVS approximation has been applied to the ADC(3) working equations and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately from each other, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated with respect to a set of small and middle-sized organic molecules. Therefore, the results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete basis set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of cartesian basis functions and the treatment of orbital relaxation effects are analyzed in this work as well as computational timings. It turns out that at the CVS-ADC(3) level, the results are not further improved compared to CVS-ADC(2)-x and experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken. While CVS-ADC(3) overestimates the core excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an averaged underestimation of -0.22% ± 0.12%. Eventually, the best agreement with experiments can be achieved using the CVS-ADC(2)-x method in combination with a diffuse cartesian basis set at least at the triple-ζ level. PMID:26049476
Identification of observer/Kalman filter Markov parameters: Theory and experiments
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.
1991-01-01
An algorithm to compute the Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used to identify a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes. The relationship between these algorithms is developed. The new matrix formulation gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.
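A much simpler cousin of the identification problem above can be sketched in a few lines: estimating a system's Markov parameters (impulse-response samples) from input/output data by least squares under an FIR truncation. This is not the paper's observer/Kalman filter algorithm, only an illustrative toy; the system, truncation length, and variable names are assumptions.

```python
import numpy as np

def estimate_markov_parameters(u, y, p):
    """Least-squares estimate of the first p Markov parameters of a
    SISO system from input/output data, assuming the FIR truncation
    y[k] ~ sum_{i=0}^{p-1} h[i] * u[k-i]."""
    T = len(u)
    # Regression matrix of lagged inputs: row k holds u[k], u[k-1], ..., u[k-p+1].
    U = np.zeros((T - p + 1, p))
    for k in range(p - 1, T):
        U[k - p + 1, :] = u[k::-1][:p]
    h, *_ = np.linalg.lstsq(U, y[p - 1:], rcond=None)
    return h

# Toy system x[k+1] = 0.5 x[k] + u[k], y[k] = x[k]:
# its true Markov parameters are [0, 1, 0.5, 0.25, ...].
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
x, y = 0.0, []
for uk in u:
    y.append(x)          # y[k] = C x[k]  (C = 1, D = 0)
    x = 0.5 * x + uk     # x[k+1] = A x[k] + B u[k]
y = np.array(y)

h = estimate_markov_parameters(u, y, 6)
```

With white-noise input, the omitted tail of the impulse response is uncorrelated with the retained lags, so the leading estimates are nearly unbiased.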
Markov state models and molecular alchemy
NASA Astrophysics Data System (ADS)
Schütte, Christof; Nielsen, Adam; Weber, Marcus
2015-01-01
In recent years, Markov state models (MSMs) have attracted considerable attention for modelling conformation changes and the associated function of biomolecular systems. They have been used successfully, e.g. for peptides including time-resolved spectroscopic experiments, protein function and protein folding, DNA and RNA, and ligand-receptor interaction in drug design and more complicated multivalent scenarios. In this article, a novel reweighting scheme is introduced that allows one to construct an MSM for a molecular system out of an MSM for a similar system. This permits studying how molecular properties on long timescales differ between similar molecular systems without performing full molecular dynamics simulations for each system under consideration. The performance of the reweighting scheme is illustrated for simple test cases, including one where the main wells of the respective energy landscapes are located differently and an alchemical transformation of butane to pentane where the dimension of the state space is changed.
Estimation and uncertainty of reversible Markov models
NASA Astrophysics Data System (ADS)
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-01
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
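The reversible maximum-likelihood problem the abstract refers to has a well-known fixed-point iteration that is easy to sketch; the version below is a standard scheme for a given count matrix, not necessarily the specific algorithm of the paper, and the example count matrix is invented.

```python
import numpy as np

def reversible_mle(C, n_iter=500):
    """Fixed-point iteration for a maximum-likelihood *reversible*
    transition matrix given a count matrix C. X is kept symmetric, so
    detailed balance pi_i T_ij = pi_j T_ji holds by construction."""
    C = np.asarray(C, dtype=float)
    c = C.sum(axis=1)                # observed row counts
    X = C + C.T                      # symmetric initial guess
    for _ in range(n_iter):
        x = X.sum(axis=1)
        denom = c[:, None] / x[:, None] + c[None, :] / x[None, :]
        X = (C + C.T) / denom        # standard update; stays symmetric
    X /= X.sum()
    pi = X.sum(axis=1)               # stationary distribution
    T = X / pi[:, None]              # row-stochastic transition matrix
    return T, pi

# Hypothetical transition counts from a 3-state trajectory.
C = np.array([[90, 10,  0],
              [ 8, 80, 12],
              [ 0, 15, 85]])
T, pi = reversible_mle(C)
```

Because every iterate of X is symmetric, the returned pair (T, pi) satisfies detailed balance exactly, which is the defining property of the reversible estimator.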
Sharp entrywise perturbation bounds for Markov chains
Thiede, Erik; Van Koten, Brian; Weare, Jonathan
2015-01-01
For many Markov chains of practical interest, the invariant distribution is extremely sensitive to perturbations of some entries of the transition matrix, but insensitive to others; we give an example of such a chain, motivated by a problem in computational statistical physics. We have derived perturbation bounds on the relative error of the invariant distribution that reveal these variations in sensitivity. Our bounds are sharp, we do not impose any structural assumptions on the transition matrix or on the perturbation, and computing the bounds has the same complexity as computing the invariant distribution or computing other bounds in the literature. Moreover, our bounds have a simple interpretation in terms of hitting times, which can be used to draw intuitive but rigorous conclusions about the sensitivity of a chain to various types of perturbations. PMID:26491218
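The sensitivity variations described above can be probed numerically with a crude finite-difference experiment: perturb one entry of the transition matrix (taking the mass from the diagonal so rows stay stochastic) and watch the invariant distribution move. This is only an illustrative probe, not the paper's rigorous bounds, and the example chain is invented: two fast-mixing states weakly coupled to a third, so the rare "bridge" entries dominate the sensitivity.

```python
import numpy as np

def stationary(P):
    """Invariant distribution of a row-stochastic matrix P
    (left eigenvector for eigenvalue 1)."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1))])
    v = np.abs(v)
    return v / v.sum()

def entry_sensitivity(P, i, j, eps=1e-6):
    """Finite-difference estimate of the maximal relative change in the
    invariant distribution per unit perturbation of entry (i, j)."""
    Q = P.copy()
    Q[i, j] += eps
    Q[i, i] -= eps                   # keep the row stochastic
    pi0, pi1 = stationary(P), stationary(Q)
    return np.max(np.abs(pi1 - pi0) / pi0) / eps

# Nearly-decoupled chain: states 0 and 1 mix quickly; state 2 is
# reached only through rare transitions.
P = np.array([[0.89, 0.10, 0.01],
              [0.10, 0.89, 0.01],
              [0.01, 0.01, 0.98]])
sens_bridge = entry_sensitivity(P, 0, 2)   # rare transition
sens_fast = entry_sensitivity(P, 0, 1)     # frequent transition
```

Consistent with the hitting-time interpretation in the abstract, perturbing the rare bridge entry moves the invariant distribution far more than perturbing a frequent transition of the same size.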
Probabilistic Resilience in Hidden Markov Models
NASA Astrophysics Data System (ADS)
Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi
2016-05-01
Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotic systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two important new features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for evaluating the probabilities associated with generic properties, describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.
Forest Pest Occurrence Prediction with a CA-Markov Model
NASA Astrophysics Data System (ADS)
Xie, Fangyi; Zhang, Xiaoli; Chen, Xiaoyan
Since the spatial pattern of forest pest occurrence is determined by biological characteristics and habitat conditions, this paper introduces the construction of a cellular automaton model combined with a Markov model to predict forest pest occurrence. The rules of the model include cell state rules, neighborhood rules and transition rules, which are defined according to factors from stand conditions, stand structures and climate, and the influence of these factors on state conversion. Coding for the model is also part of its implementation. The participants were designed, with their attributes and operations expressed in a UML diagram. Finally, the scale issues in forest pest occurrence prediction, the core of which are the choice of cell size and time interval, are also discussed in this paper.
Defect Detection Using Hidden Markov Random Fields
NASA Astrophysics Data System (ADS)
Dogandžić, Aleksandar; Eua-anant, Nawanat; Zhang, Benhong
2005-04-01
We derive an approximate maximum a posteriori (MAP) method for detecting NDE defect signals using hidden Markov random fields (HMRFs). In the proposed HMRF framework, a set of spatially distributed NDE measurements is assumed to form a noisy realization of an underlying random field that has a simple structure with Markovian dependence. Here, the random field describes the defect signals to be estimated or detected. The HMRF models incorporate measurement locations into the statistical analysis, which is important in scenarios where the same defect affects measurements at multiple locations. We also discuss initialization of the proposed HMRF detector and apply it to simulated eddy-current data and experimental ultrasonic C-scan data from an inspection of a cylindrical Ti 6-4 billet.
Multivariate Markov chain modeling for stock markets
NASA Astrophysics Data System (ADS)
Maskawa, Jun-ichi
2003-06-01
We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to an arbitrary portfolio size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each spin configuration of a spin-glass model. Through the analysis of actual portfolios, it is shown that the synchronization of the direction of the price changes is well described by the model.
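The spin coding step described above is simple enough to sketch directly: map each price change to a ±1 spin by its sign, then measure how often two issues move in the same direction. This is only the data-preparation stage and a crude synchronization statistic, not the mean-field model itself; the correlated toy return series are invented.

```python
import numpy as np

def spin_code(returns):
    """Code a series of price changes into up/down spins (+1 / -1)
    according to their signs."""
    return np.where(np.asarray(returns) >= 0, 1, -1)

def same_direction_rate(r1, r2):
    """Empirical probability that two issues move in the same
    direction -- a crude proxy for the synchronization the model
    describes."""
    s1, s2 = spin_code(r1), spin_code(r2)
    return np.mean(s1 == s2)

# Two hypothetical issues driven by a common factor plus noise.
rng = np.random.default_rng(1)
common = rng.standard_normal(1000)
r1 = common + 0.5 * rng.standard_normal(1000)
r2 = common + 0.5 * rng.standard_normal(1000)
sync = same_direction_rate(r1, r2)   # well above the 0.5 of independent issues
```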
Transition-Independent Decentralized Markov Decision Processes
NASA Technical Reports Server (NTRS)
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)
2003-01-01
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on all of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
Differential evolution Markov chain with snooker updater and fewer chains
Vrugt, Jasper A; Ter Braak, Cajo J F
2008-01-01
Differential Evolution Markov Chain (DE-MC) is an adaptive MCMC algorithm in which multiple chains are run in parallel. Standard DE-MC requires at least N=2d chains to be run in parallel, where d is the dimensionality of the posterior. This paper extends DE-MC with a snooker updater and shows by simulation and real examples that DE-MC can work for d up to 50-100 with fewer parallel chains (e.g. N=3) by exploiting information from the chains' past, generating jumps from differences of pairs of past states. This approach extends the practical applicability of DE-MC and is shown to be about 5-26 times more efficient than the optimal normal random walk Metropolis sampler for the 97.5% point of a variable from a 25-50 dimensional Student t_3 distribution. In a nonlinear mixed-effects model example the approach outperformed a block updater geared to the specific features of the model.
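The core DE-MC move can be sketched compactly: each chain proposes a jump built from the scaled difference of two other chains' states, then accepts or rejects it with a Metropolis step. The sketch below implements only this basic move on a toy Gaussian target; the paper's contribution (the snooker updater and sampling from *past* states) is not implemented here, and the population size, iteration count, and jitter scale are arbitrary choices.

```python
import numpy as np

def demc(log_post, d, N=6, n_iter=3000, seed=0):
    """Minimal basic DE-MC: proposals are scaled differences of two
    other chains' current states plus small jitter, accepted with a
    Metropolis step. Returns post-burn-in draws from all chains."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC jump scale
    X = rng.standard_normal((N, d))          # initial population
    logp = np.array([log_post(x) for x in X])
    samples = []
    for _ in range(n_iter):
        for i in range(N):
            a, b = rng.choice([j for j in range(N) if j != i], 2, replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + 1e-4 * rng.standard_normal(d)
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:
                X[i], logp[i] = prop, lp
        samples.append(X.copy())
    return np.concatenate(samples[n_iter // 2:])   # discard burn-in

# Toy target: standard 2-D Gaussian.
log_post = lambda x: -0.5 * np.dot(x, x)
draws = demc(log_post, d=2)
```

The population-difference proposal is what lets DE-MC adapt its jump shape to the posterior without hand-tuning a covariance.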
Efficient inference of hidden Markov models from large observation sequences
NASA Astrophysics Data System (ADS)
Priest, Benjamin W.; Cybenko, George
2016-05-01
The hidden Markov model (HMM) is widely used to model time series data. However, the conventional Baum-Welch algorithm is known to perform poorly when applied to long observation sequences. The literature contains several alternatives that seek to improve the memory or time complexity of the algorithm. However, for an HMM with N states and an observation sequence of length T, these alternatives require at best O(N) space and O(N^2 T) time. Given the preponderance of applications that increasingly deal with massive amounts of data, an alternative whose time is O(T) + poly(N) is desired. Recent research presents an alternative to the Baum-Welch algorithm that relies on nonnegative matrix factorization. This document examines the space complexity of this alternative approach and proposes further optimizations using approaches adopted from the matrix sketching literature. The result is a streaming algorithm whose space complexity is constant and whose time complexity is linear with respect to the size of the observation sequence. The paper also presents a batch algorithm that allows for even further improved space complexity at the expense of an additional pass over the observation sequence.
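For context on the complexity figures above, the classical forward recursion already evaluates an HMM's log-likelihood in O(N) space and O(N^2 T) time, streaming over the observations; it is the *learning* problem that the paper pushes toward O(T) + poly(N). The sketch below shows the scaled forward pass (the toy HMM parameters are invented):

```python
import numpy as np

def forward_loglik(pi0, A, B, obs):
    """Scaled forward algorithm: log-likelihood of an observation
    sequence under an HMM with initial distribution pi0, transition
    matrix A, and emission matrix B. Uses O(N) space regardless of T."""
    alpha = pi0 * B[:, obs[0]]
    s = alpha.sum()
    ll = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # one streaming update per symbol
        s = alpha.sum()
        ll += np.log(s)                  # accumulate log-scale factors
        alpha /= s                       # rescale to avoid underflow
    return ll

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # state transitions
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities
pi0 = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
obs = rng.integers(0, 2, size=10_000)
ll = forward_loglik(pi0, A, B, obs)
```

The rescaling at each step is what keeps the recursion numerically stable over arbitrarily long sequences.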
A hidden Markov model derived structural alphabet for proteins.
Camproux, A C; Gautier, R; Tufféry, P
2004-06-01
Understanding and predicting protein structures depends on the complexity and the accuracy of the models used to represent them. We have set up a hidden Markov model that discretizes protein backbone conformation as series of overlapping fragments (states) of four residues in length. This approach learns simultaneously the geometry of the states and their connections. We obtain, using a statistical criterion, an optimal systematic decomposition of the conformational variability of the protein peptidic chain into 27 states with strong connection logic. This result is stable over different protein sets. Our model fits well the previous knowledge related to protein architecture organisation and seems able to capture some subtle details of protein organisation, such as helix sub-level organisation schemes. Taking into account the dependence between the states results in a description of local protein structure of low complexity. On average, the model makes use of only 8.3 states among 27 to describe each position of a protein structure. Although we use short fragments, the learning process on entire protein conformations captures the logic of the assembly on a larger scale. Using such a model, the structure of proteins can be reconstructed with an average accuracy close to 1.1 Å root-mean-square deviation and for a complexity of only 3. Finally, we also observe that sequence specificity increases with the number of states of the structural alphabet. Such models can constitute a very relevant approach to the analysis of protein architecture, in particular for protein structure prediction. PMID:15147844
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde; Lakshmanan, S
2014-09-01
We extend the notion of synchronization of memristor-based recurrent neural networks with two delay components based on a second-order reciprocally convex approach. Some sufficient conditions are obtained to guarantee the synchronization of the memristor-based recurrent neural networks via a delay-dependent output feedback controller in terms of linear matrix inequalities (LMIs). The activation functions are assumed to satisfy rather general descriptions, which generalize and recover many existing formulations. A Lyapunov-Krasovskii functional (LKF) with triple-integral terms is employed in this paper to reduce conservatism in the synchronization of systems with additive time-varying delays. Jensen's inequality is applied in partitioning the double integral terms in the derivation of the LMIs, which leads to a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters. This paper puts forward a well-organized method to manipulate such a combination by extending the lower bound lemma. The obtained conditions not only are less conservative but also involve fewer decision variables than existing results. Finally, numerical results and simulations are given to show the effectiveness of the proposed memristor-based synchronization control scheme. PMID:24953308
Cluster-based reduced-order modelling of shear flows
NASA Astrophysics Data System (ADS)
Kaiser, Eurika; Noack, Bernd R.; Cordier, Laurent; Spohn, Andreas; Segond, Marc; Abel, Markus; Daviller, Guillaume; Morzyński, Marek; Östh, Jan; Krajnović, Siniša; Niven, Robert K.
2014-12-01
Cluster-based reduced-order modelling (CROM) builds on the pioneering works of Gunzburger's group in cluster analysis [1] and Eckhardt's group in transition matrix models [2] and constitutes a potential alternative to reduced-order models based on a proper-orthogonal decomposition (POD). This strategy frames a time-resolved sequence of flow snapshots into a Markov model for the probabilities of cluster transitions. The information content of the Markov model is assessed with a Kullback-Leibler entropy. This entropy clearly discriminates between prediction times in which the initial conditions can be inferred by backward integration and the predictability horizon after which all information about the initial condition is lost. This approach is exemplified for a class of fluid dynamical benchmark problems like the periodic cylinder wake, the spatially evolving incompressible mixing layer, the bi-modal bluff body wake, and turbulent jet noise. For these examples, CROM is shown to distil nontrivial quasi-attractors and transition processes. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for comparison of flow evolution models, and for the identification of precursors to desirable and undesirable events.
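The core CROM bookkeeping, clustering the snapshots and counting cluster-to-cluster transitions in time order, can be sketched compactly. The example below is a generic sketch under invented data (a noisy limit cycle standing in for flow snapshots), with a tiny Lloyd's-algorithm k-means standing in for the cluster-analysis step; it is not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Tiny Lloyd's-algorithm k-means for snapshot clustering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cluster_transition_matrix(labels, k):
    """Markov model of cluster transitions from a time-ordered
    label sequence: count consecutive pairs, then row-normalize."""
    C = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        C[a, b] += 1
    C += 1e-12                       # guard against empty rows
    return C / C.sum(axis=1, keepdims=True)

# Toy "flow snapshots": a noisy limit cycle sampled in time.
t = np.linspace(0, 8 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + \
    0.05 * np.random.default_rng(1).standard_normal((400, 2))
labels = kmeans(X, 4)
P = cluster_transition_matrix(labels, 4)
```

For a periodic flow, the resulting transition matrix is strongly cyclic, which is the kind of structure the Kullback-Leibler analysis in the abstract then interrogates.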
Green, P. L.; Worden, K.
2015-01-01
In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916
Hierarchical Markov random-field modeling for texture classification in chest radiographs
NASA Astrophysics Data System (ADS)
Vargas-Voracek, Rene; Floyd, Carey E., Jr.; Nolte, Loren W.; McAdams, Page
1996-04-01
A hierarchical Markov random field (MRF) modeling approach is presented for the classification of textures in selected regions of interest (ROIs) of chest radiographs. The procedure integrates possible texture classes and their spatial definition with other components present in an image such as noise and background trend. Classification is performed as a maximum a posteriori (MAP) estimation of texture class and involves an iterative Gibbs-sampling technique. Two cases are studied: classification of lung parenchyma versus bone and classification of normal lung parenchyma versus miliary tuberculosis (MTB). Accurate classification was obtained for all examined cases, showing the potential of the proposed modeling approach for texture analysis of radiographic images.
Forecasting Tehran stock exchange volatility; Markov switching GARCH approach
NASA Astrophysics Data System (ADS)
Abounoori, Esmaiel; Elmi, Zahra (Mila); Nademi, Younes
2016-03-01
This paper evaluates several GARCH models regarding their ability to forecast volatility in the Tehran Stock Exchange (TSE). These include GARCH models with both Gaussian and fat-tailed residual conditional distributions, assessed on their ability to describe and forecast volatility from a 1-day to a 22-day horizon. Results indicate that the AR(2)-MRSGARCH-GED model outperforms other models at the one-day horizon. The AR(2)-MRSGARCH-GED and AR(2)-MRSGARCH-t models also outperform other models at the 5-day horizon. At the 10-day horizon, the three AR(2)-MRSGARCH models outperform the others. At the 22-day horizon, results indicate no difference between MRSGARCH models and standard GARCH models. Regarding risk management out-of-sample evaluation (95% VaR), a few models seem to provide reasonable and accurate VaR estimates at the 1-day horizon, with a coverage rate close to the nominal level. According to the risk management loss functions, there is not a uniformly most accurate model.
Ensemble hidden Markov models with application to landmine detection
NASA Astrophysics Data System (ADS)
Hamdi, Anis; Frigui, Hichem
2015-12-01
We introduce an ensemble learning method for temporal data that uses a mixture of hidden Markov models (HMM). We hypothesize that the data are generated by K models, each of which reflects a particular trend in the data. The proposed approach, called ensemble HMM (eHMM), is based on clustering within the log-likelihood space and has two main steps. First, one HMM is fit to each of the N individual training sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N-by-N log-likelihood distance matrix that will be partitioned into K groups using a relational clustering algorithm. In the second step, we learn the parameters of one HMM per cluster. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum classification error (MCE), and the variational Bayesian (VB) training approaches. Finally, to test a new sequence, its likelihood is computed in all the models and a final confidence value is assigned by combining the models' outputs using an artificial neural network. We propose both discrete and continuous versions of the eHMM. Our approach was evaluated on a real-world application for landmine detection using ground-penetrating radar (GPR). Results show that both the continuous and discrete eHMM can identify meaningful and coherent HMM mixture components that describe different properties of the data. Each HMM mixture component models a group of data that share common attributes. These attributes are reflected in the mixture model's parameters. The results indicate that the proposed method outperforms the baseline HMM that uses one model for each class in the data.
Time series segmentation with shifting-means hidden Markov models
NASA Astrophysics Data System (ADS)
Kehagias, Ath.; Fortin, V.
2006-08-01
We present a new family of hidden Markov models and apply these to the segmentation of hydrological and environmental time series. The proposed hidden Markov models have a discrete state space and their structure is inspired by the shifting means models introduced by Chernoff and Zacks and by Salas and Boes. An estimation method inspired by the EM algorithm is proposed, and we show that it can accurately identify multiple change-points in a time series. We also show that the solution obtained using this algorithm can serve as a starting point for a Markov chain Monte Carlo Bayesian estimation method, thus reducing the computing time needed for the Markov chain to converge to a stationary distribution.
Modeling pavement deterioration processes by Poisson hidden Markov models
NASA Astrophysics Data System (ADS)
Nam, Le Thanh; Kaito, Kiyoyuki; Kobayashi, Kiyoshi; Okizuka, Ryosuke
In pavement management, it is important to estimate lifecycle cost, which is composed of the expenses for repairing local damages, including potholes, and repairing and rehabilitating the surface and base layers of pavements, including overlays. In this study, a model is produced under the assumption that the deterioration process of pavement is a complex one that includes local damages, which occur frequently, and the deterioration of the surface and base layers of pavement, which progresses slowly. The variation in pavement soundness is expressed by the Markov deterioration model and the Poisson hidden Markov deterioration model, in which the frequency of local damage depends on the distribution of pavement soundness, is formulated. In addition, the authors suggest a model estimation method using the Markov Chain Monte Carlo (MCMC) method, and attempt to demonstrate the applicability of the proposed Poisson hidden Markov deterioration model by studying concrete application cases.
Universal recovery map for approximate Markov chains
Sutter, David; Fawzi, Omar; Renner, Renato
2016-01-01
A central question in quantum information theory is to determine how well lost information can be reconstructed. Crucially, the corresponding recovery operation should perform well without knowing the information to be reconstructed. In this work, we show that the quantum conditional mutual information measures the performance of such recovery operations. More precisely, we prove that the conditional mutual information I(A:C|B) of a tripartite quantum state ρ_ABC can be bounded from below by its distance to the closest recovered state R_{B→BC}(ρ_AB), where the C-part is reconstructed from the B-part only and the recovery map R_{B→BC} merely depends on ρ_BC. One particular application of this result implies the equivalence between two different approaches to defining topological order in quantum systems. PMID:27118889
Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions
Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard
2014-09-21
Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.
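The diffusion-map step described above is a generic construction that can be sketched in a few lines: build a Gaussian kernel over the data, row-normalize it into a Markov matrix, and use its leading nontrivial eigenvectors as low-dimensional coordinates. This is a textbook sketch on invented toy data (points on a noisy circle standing in for conformations), not the authors' pipeline; the kernel scale eps is an arbitrary choice.

```python
import numpy as np

def diffusion_map(X, eps, n_evecs=2):
    """Basic diffusion map: Gaussian kernel, row normalization to a
    Markov matrix, then its leading nontrivial eigenvectors as
    embedding coordinates."""
    D2 = ((X[:, None] - X[None]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / eps)                        # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)         # row-stochastic Markov matrix
    w, V = np.linalg.eig(P)
    order = np.argsort(-np.real(w))
    idx = order[1:1 + n_evecs]                   # skip the trivial eigenvector
    return np.real(w[idx]), np.real(V[:, idx])

# Toy data: points on a noisy circle; the first diffusion coordinates
# recover the cyclic parameterization.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.standard_normal((200, 2))
evals, coords = diffusion_map(X, eps=0.1)
```

Since P is similar to a symmetric positive-definite matrix, its spectrum is real and lies in (0, 1], with the constant eigenvector at eigenvalue 1 discarded.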
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
Kappel, David; Nessler, Bernhard; Maass, Wolfgang
2014-01-01
In order to cross a street without being run over, we need to be able to extract, very rapidly, the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787
Markov random field restoration of point correspondences for active shape modeling
NASA Astrophysics Data System (ADS)
Hilger, Klaus B.; Paulsen, Rasmus R.; Larsen, Rasmus
2004-05-01
In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well-defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped model mesh to the target shapes. When this is done by a nearest neighbour projection it can result in folds and inhomogeneities in the correspondence vector field. The novelty in this paper is the use and extension of a Markov random field regularisation of the correspondence field. The correspondence field is regarded as a collection of random variables, and using the Hammersley-Clifford theorem it is proved that it can be treated as a Markov random field. The problem of finding the optimal correspondence field is cast into a Bayesian framework for Markov random field restoration, where the prior distribution is a smoothness term and the observation model is the curvature of the shapes. The Markov random field is optimised using a combination of Gibbs sampling and the Metropolis-Hastings algorithm. The parameters of the model are found using a leave-one-out approach. The method leads to a generative model that produces highly homogeneous polygonised shapes with improved reconstruction capabilities of the training data. Furthermore, the method leads to an overall reduction in the total variance of the resulting point distribution model. The method is demonstrated on a set of human ear canals extracted from 3D laser scans.
Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G
2013-05-20
A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, are causing about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data. PMID
Non-Markov Ito Processes with 1-State Memory
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2010-08-01
A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also non-Markov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
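The Chapman-Kolmogorov property discussed above is easy to check numerically for the simplest case, the Gaussian (Brownian-motion) transition density: composing the density over two consecutive time intervals must reproduce the density over their sum. This is an illustrative sketch of the identity itself, not of the paper's 1-state-memory example.

```python
import math

def gauss_density(x, mean, var):
    """Gaussian density N(mean, var) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def transition(x_to, x_from, dt):
    """Brownian-motion transition density p(x_to, t+dt | x_from, t)."""
    return gauss_density(x_to, x_from, dt)

def ck_integral(x3, x1, t12, t23, grid):
    """Chapman-Kolmogorov composition: integrate over the intermediate state."""
    dx = grid[1] - grid[0]
    return sum(transition(x3, x2, t23) * transition(x2, x1, t12) for x2 in grid) * dx

grid = [-10 + 0.01 * i for i in range(2001)]
lhs = transition(0.5, 0.0, 0.7)               # direct density over t = 0.7
rhs = ck_integral(0.5, 0.0, 0.3, 0.4, grid)   # composed over 0.3 + 0.4
assert abs(lhs - rhs) < 1e-6
```

The identity holds here because the convolution of N(x1, 0.3) with an independent N(0, 0.4) increment is N(x1, 0.7); the paper's point is that such an identity does not by itself make the process Markovian.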
A Markov Random Field Groupwise Registration Framework for Face Recognition
Liao, Shu; Shen, Dinggang; Chung, Albert C.S.
2014-01-01
In this paper, we propose a new framework for tackling the face recognition problem. The face recognition problem is formulated as a groupwise deformable image registration and feature matching problem. The main contributions of the proposed method lie in the following aspects: (1) Each pixel in a facial image is represented by an anatomical signature obtained from its corresponding most salient scale local region determined by the survival exponential entropy (SEE) information theoretic measure. (2) Based on the anatomical signature calculated from each pixel, a novel Markov random field based groupwise registration framework is proposed to formulate the face recognition problem as a feature guided deformable image registration problem. The similarity between different facial images is measured on the nonlinear Riemannian manifold based on the deformable transformations. (3) The proposed method does not suffer from the generalizability problem which exists commonly in learning based algorithms. The proposed method has been extensively evaluated on four publicly available databases: FERET, CAS-PEAL-R1, FRGC ver 2.0, and the LFW. It is also compared with several state-of-the-art face recognition approaches, and experimental results demonstrate that the proposed method consistently achieves the highest recognition rates among all the methods under comparison. PMID:25506109
Optical character recognition of handwritten Arabic using hidden Markov models
NASA Astrophysics Data System (ADS)
Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.
2011-04-01
The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm, which computes the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, so that the recognition rate is no longer tied to the worst recognition rate for any single character but reflects the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
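The Viterbi decoding step used above is standard dynamic programming over an HMM. A minimal sketch follows; the two states, observations, and probabilities are a toy example, not the paper's Arabic sub-word models.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state sequence for an observation sequence."""
    # V[t][s] = (best probability of any path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    last = max(states, key=lambda s: V[-1][s][0])   # best final state
    path = [last]
    for t in range(len(obs) - 1, 0, -1):            # backtrack through predecessors
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-class example.
states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(viterbi(("x", "y", "y"), states, start, trans, emit))  # -> ['A', 'B', 'B']
```

A production system would work in log-probabilities to avoid underflow on long sequences; the multiplicative form above is kept for readability.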
Biomedical image analysis using Markov random fields & efficient linear programming.
Komodakis, Nikos; Besbes, Ahmed; Glocker, Ben; Paragios, Nikos
2009-01-01
Computer-aided diagnosis through biomedical image analysis is increasingly considered in health sciences. This is due to the progress made on the acquisition side, as well as on the processing one. In vivo visualization of human tissues where one can determine both anatomical and functional information is now possible. The use of these images with efficient intelligent mathematical and processing tools allows the interpretation of the tissue state and facilitates the task of physicians. Segmentation and registration are the two most fundamental tools in bioimaging. The first aims to provide automatic tools for organ delineation from images, while the second focuses on establishing correspondences between observations inter- and intra-subject and across modalities. In this paper, we present some recent results towards a common formulation addressing these problems, based on Markov Random Fields. Such an approach is modular with respect to the application context, can be easily extended to deal with various modalities, provides guarantees on the optimality properties of the obtained solution and is computationally efficient. PMID:19963682
A comparison of weighted ensemble and Markov state model methodologies
NASA Astrophysics Data System (ADS)
Feng, Haoyun; Costaouec, Ronan; Darve, Eric; Izaguirre, Jesús A.
2015-06-01
Computation of reaction rates and elucidation of reaction mechanisms are two of the main goals of molecular dynamics (MD) and related simulation methods. Since it is time consuming to study reaction mechanisms over long time scales using brute force MD simulations, two ensemble methods, Markov State Models (MSMs) and Weighted Ensemble (WE), have been proposed to accelerate the procedure. Both approaches require clustering of microscopic configurations into networks of "macro-states" for different purposes. MSMs model a discretization of the original dynamics on the macro-states. Accuracy of the model significantly relies on the boundaries of macro-states. On the other hand, WE uses macro-states to formulate a resampling procedure that kills and splits MD simulations for achieving better efficiency of sampling. Compared to MSMs, accuracy of WE rate predictions is less sensitive to the definition of macro-states. Rigorous numerical experiments using alanine dipeptide and penta-alanine support our analyses. It is shown that MSMs introduce significant biases in the computation of reaction rates, which depend on the boundaries of macro-states, whereas Accelerated Weighted Ensemble (AWE), a formulation of weighted ensemble that uses the notion of colors to compute fluxes, yields reliable flux estimates across varying definitions of macro-states. Our results suggest that whereas MSMs provide a good idea of the metastable sets and visualization of overall dynamics, AWE provides reliable rate estimations requiring less effort in defining macro-states in the high-dimensional conformational space.
Finding and Testing Network Communities by Lumped Markov Chains
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called “persistence probability” is associated to a cluster, which is then defined as an “α-community” if such a probability is not smaller than α. Consistently, a partition composed of α-communities is an “α-partition.” These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each single community. Given its ability in individually assessing each single cluster, this approach can also disclose single well-defined communities even in networks that overall do not possess a definite clusterized structure. PMID:22073245
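The persistence probability of a cluster takes only a few lines to compute: it is the probability that a stationary random walker, currently inside the cluster, is still inside it after one step. The 4-node matrix below is a toy example with two weakly coupled blocks, not taken from the paper.

```python
def stationary(P, iters=1000):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def persistence(P, cluster):
    """Probability that a stationary walker currently in `cluster` stays there."""
    pi = stationary(P)
    mass = sum(pi[i] for i in cluster)
    return sum(pi[i] * P[i][j] for i in cluster for j in cluster) / mass

# Two weakly coupled 2-node blocks: each block is a strong community.
P = [[0.45, 0.45, 0.05, 0.05],
     [0.45, 0.45, 0.05, 0.05],
     [0.05, 0.05, 0.45, 0.45],
     [0.05, 0.05, 0.45, 0.45]]
print(persistence(P, [0, 1]))  # high: {0, 1} is an alpha-community for alpha <= 0.9
```

With this matrix the stationary distribution is uniform and the persistence of each block is 0.9, so each block qualifies as an α-community for any α ≤ 0.9.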
Bayesian inference for Markov jump processes with informative observations.
Golightly, Andrew; Wilkinson, Darren J
2015-04-01
In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis. PMID:25720091
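The "straightforward forward simulation" the abstract builds on is typically Gillespie's direct method. A minimal sketch for a toy immigration-death model follows; the model, rates, and function names are illustrative, not the paper's Lotka-Volterra or motility systems.

```python
import random

def gillespie(x0, rates, stoich, hazards, t_max, rng):
    """Exact forward simulation (Gillespie direct method) of a Markov jump process."""
    t, x = 0.0, list(x0)
    path = [(t, tuple(x))]
    while t < t_max:
        h = [haz(x, rates) for haz in hazards]   # current reaction hazards
        h0 = sum(h)
        if h0 == 0.0:
            break                                # no reaction can fire
        t += rng.expovariate(h0)                 # exponential waiting time
        u, acc = rng.random() * h0, 0.0
        for k, hk in enumerate(h):               # pick reaction k w.p. h[k]/h0
            acc += hk
            if u <= acc:
                break
        for i, s in enumerate(stoich[k]):        # apply the stoichiometry
            x[i] += s
        path.append((t, tuple(x)))
    return path

# Immigration-death: 0 -> X at rate c1, X -> 0 at rate c2 * x.
rng = random.Random(1)
path = gillespie([0], (5.0, 0.5), [(1,), (-1,)],
                 [lambda x, c: c[0], lambda x, c: c[1] * x[0]], 50.0, rng)
# At stationarity the population fluctuates around c1/c2 = 10.
print(path[-1])
```

Conditioning such trajectories on informative end-point observations, the hard problem the paper addresses, is exactly what this naive forward simulator does badly, which motivates the conditioned-hazard and reweighting constructs above.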
Markov chain Monte Carlo methods: an introductory example
NASA Astrophysics Data System (ADS)
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
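As the abstract notes, Metropolis-Hastings really does fit in a few lines. The sketch below is a random-walk variant for a one-dimensional log-posterior; the standard-normal target, step size, and burn-in length are illustrative choices, not the paper's metrology example.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step, rng):
    """Random-walk Metropolis: Gaussian proposal, accept with prob min(1, ratio)."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject in log space
            x, lp = prop, lp_prop
        samples.append(x)                            # rejected moves repeat x
    return samples

# Toy posterior for a measurand: standard normal, so log p(x) = -x^2/2 + const.
rng = random.Random(0)
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, rng)
burned = draws[2000:]                                # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))
```

The symmetric Gaussian proposal makes the Hastings correction cancel, which is why only the posterior ratio appears; convergence diagnostics, the paper's main caveat, are deliberately omitted here.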
Learning With l1-Regularizer Based on Markov Resampling.
Gong, Tieliang; Zou, Bin; Xu, Zongben
2016-05-01
Learning with l1-regularizer has brought about a great deal of research in the learning theory community. Previous known results for learning with l1-regularizer are based on the assumption that samples are independent and identically distributed (i.i.d.), and the best obtained learning rate for l1-regularization type algorithms is O(1/√m), where m is the sample size. This paper goes beyond the classic i.i.d. framework and investigates the generalization performance of least squares regression with l1-regularizer (l1-LSR) based on uniformly ergodic Markov chain (u.e.M.c.) samples. On the theoretical side, we prove that the learning rate of l1-LSR for u.e.M.c. samples (l1-LSR(M)) is of the order O(1/m), which is faster than O(1/√m) for the i.i.d. counterpart. On the practical side, we propose an algorithm based on a resampling scheme to generate u.e.M.c. samples. We show that the proposed l1-LSR(M) improves on l1-LSR(i.i.d.) in generalization error at the low cost of u.e.M.c. resampling. PMID:26011874
Recursive recovery of Markov transition probabilities from boundary value data
Patch, S.K.
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue, Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 × 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 × 2 × 2 problem, is solved.
Dynamical symmetries of Markov processes with multiplicative white noise
NASA Astrophysics Data System (ADS)
Aron, Camille; Barci, Daniel G.; Cugliandolo, Leticia F.; González Arenas, Zochil; Lozano, Gustavo S.
2016-05-01
We analyse various properties of stochastic Markov processes with multiplicative white noise. We take a single-variable problem as a simple example, and we later extend the analysis to the Landau–Lifshitz–Gilbert equation for the stochastic dynamics of a magnetic moment. In particular, we focus on the non-equilibrium transfer of angular momentum to the magnetization from a spin-polarised current of electrons, a technique which is widely used in the context of spintronics to manipulate magnetic moments. We unveil two hidden dynamical symmetries of the generating functionals of these Markovian multiplicative white-noise processes. One symmetry only holds in equilibrium and we use it to prove generic relations such as the fluctuation-dissipation theorems. Out of equilibrium, we exploit the symmetry-breaking terms to prove fluctuation theorems. The other symmetry yields strong dynamical relations between correlation and response functions which can notably simplify the numerical analysis of these problems. Our construction allows us to clarify some misconceptions on multiplicative white-noise stochastic processes that can be found in the literature. In particular, we show that a first-order differential equation with multiplicative white noise can be transformed into an additive-noise equation, but that the latter keeps a non-trivial memory of the discretisation prescription used to define the former.
Manpower planning using Markov Chain model
NASA Astrophysics Data System (ADS)
Saad, Syafawati Ab; Adnan, Farah Adibah; Ibrahim, Haslinda; Rahim, Rahela
2014-07-01
Manpower planning models the flow of manpower in response to policy changes. For this purpose, researchers have made numerous attempts to develop models that track the movement of lecturers across universities. Because a university employs a large number of lecturers, their movement is difficult to track, and no quantitative method has previously been used to do so. This research aims to determine an appropriate manpower model for the flow of lecturers in a Malaysian university by determining the probability that lecturers remain in the same rank and the mean time they spend there. It also estimates the number of lecturers in each rank (lecturer, senior lecturer and associate professor). Of the several methods applied to manpower planning in previous studies, the Markov Chain model is the one adopted in this research. The model is validated by comparison with actual data; a smaller margin of error indicates a projection closer to the actual data. These results can help the university plan the hiring of lecturers and its future budget.
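The two quantities the study estimates, rank counts over time and mean time in a rank, fall straight out of a Markov chain: project the count vector through the transition matrix, and read the geometric holding time off the diagonal. The matrix and staff numbers below are hypothetical, not the study's data.

```python
def project(counts, P, years):
    """Push staff counts through a rank-transition matrix for `years` steps."""
    n = len(counts)
    for _ in range(years):
        counts = [sum(counts[i] * P[i][j] for i in range(n)) for j in range(n)]
    return counts

def expected_stay(P, i):
    """Mean number of years spent in rank i (geometric holding time)."""
    return 1.0 / (1.0 - P[i][i])

# Hypothetical ranks: lecturer, senior lecturer, associate professor.
P = [[0.80, 0.20, 0.00],   # lecturer: 20%/year promotion chance
     [0.00, 0.85, 0.15],   # senior lecturer: 15%/year promotion chance
     [0.00, 0.00, 1.00]]   # associate professor: absorbing in this sketch
staff = project([100.0, 50.0, 20.0], P, 5)
print([round(s, 1) for s in staff], round(expected_stay(P, 0), 1))
```

Because each row of P sums to one, total headcount is conserved by the projection; adding hiring and attrition would make the rows sub-stochastic with an external inflow term.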
Noiseless compression using non-Markov models
NASA Technical Reports Server (NTRS)
Blumer, Anselm
1989-01-01
Adaptive data compression techniques can be viewed as consisting of a model specified by a database common to the encoder and decoder, an encoding rule and a rule for updating the model to ensure that the encoder and decoder always agree on the interpretation of the next transmission. The techniques which fit this framework range from run-length coding, to adaptive Huffman and arithmetic coding, to the string-matching techniques of Lempel and Ziv. The compression obtained by arithmetic coding is dependent on the generality of the source model. For many sources, an independent-letter model is clearly insufficient. Unfortunately, a straightforward implementation of a Markov model requires an amount of space exponential in the number of letters remembered. The Directed Acyclic Word Graph (DAWG) can be constructed in time and space proportional to the text encoded, and can be used to estimate the probabilities required for arithmetic coding based on an amount of memory which varies naturally depending on the encoded text. The tail of that portion of the text which was encoded is the longest suffix that has occurred previously. The frequencies of letters following these previous occurrences can be used to estimate the probability distribution of the next letter. Experimental results indicate that compression is often far better than that obtained using independent-letter models, and sometimes also significantly better than other non-independent techniques.
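The suffix-based estimate described above can be sketched naively: find the longest suffix of the encoded text that occurred earlier, and tally the letters that followed those earlier occurrences. The linear scan below stands in for the DAWG, which exists precisely to avoid this quadratic cost; the function name and return shape are illustrative.

```python
def next_letter_counts(text):
    """Estimate the next-letter distribution from the longest suffix of `text`
    that has occurred earlier (naive stand-in for the DAWG-based model)."""
    for k in range(len(text), 0, -1):
        suffix = text[-k:]
        counts = {}
        start = text.find(suffix, 0)
        # count the letter following each earlier (non-final) occurrence
        while 0 <= start < len(text) - k:
            nxt = text[start + k]
            counts[nxt] = counts.get(nxt, 0) + 1
            start = text.find(suffix, start + 1)
        if counts:
            return suffix, counts
    return "", {}

# "abra" occurred before (followed by "c"), so "c" is the model's best guess.
print(next_letter_counts("abracadabra"))  # -> ('abra', {'c': 1})
```

An arithmetic coder would turn these counts into a probability distribution (with some mass reserved for unseen letters) and code the actual next symbol against it.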
Optimized Markov state models for metastable systems
NASA Astrophysics Data System (ADS)
Guarnera, Enrico; Vanden-Eijnden, Eric
2016-07-01
A method is proposed to identify target states that optimize a metastability index amongst a set of trial states and use these target states as milestones (or core sets) to build Markov State Models (MSMs). If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM, in the sense that the transitions between the target milestones are indeed approximately Markovian. The method is simple to implement and use, it does not require that the dynamics on the trial milestones be Markovian, and it also offers the possibility to partition the system's state-space by assigning every trial milestone to the target milestones it is most likely to visit next and to identify transition state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it is shown to correctly identify the expected metastable states in the dihedral angle space of the molecule without a priori information about these states. It is also applied to analyze the folding landscape of the Beta3s mini-protein, where it is shown to identify the folded basin as a connecting hub between a helix-rich region, which is entropically stabilized, and a beta-rich region, which is energetically stabilized and acts as a kinetic trap.
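Independently of the milestoning refinement proposed here, the basic MSM construction the abstract builds on is just transition counting at a lag time followed by row normalization. A minimal sketch on a toy discretized trajectory (states, lag, and trajectory are illustrative):

```python
def msm_transition_matrix(traj, n_states, lag):
    """Estimate a row-stochastic MSM transition matrix from transition counts
    at lag time `lag` in a state-labelled trajectory."""
    C = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj, traj[lag:]):   # pairs (state at t, state at t+lag)
        C[a][b] += 1
    T = []
    for row in C:
        tot = sum(row)
        T.append([c / tot if tot else 0.0 for c in row])
    return T

# Toy two-state metastable trajectory: long dwells, rare switches.
traj = [0] * 50 + [1] * 50 + [0] * 50
T = msm_transition_matrix(traj, 2, lag=1)
print(T)  # strongly diagonal: both states are metastable
```

The paper's point is that the quality of such a T hinges on where the state boundaries are drawn; the metastability index it optimizes quantifies exactly how diagonal-dominant the milestone-to-milestone dynamics is.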
The Acquisition of Neg-V and V-Neg Order in Embedded Clauses in Swedish: A Microparametric Approach
ERIC Educational Resources Information Center
Waldmann, Christian
2014-01-01
This article examines the acquisition of embedded verb placement in Swedish children, focusing on Neg-V and V-Neg order. It is proposed that a principle of economy of movement creates an overuse of V-Neg order in embedded clauses and that the low frequency of the target-consistent Neg-V order in child-directed speech obstructs children from…
NASA Astrophysics Data System (ADS)
Wentz, E.; Song, Y.
2011-12-01
Classifying urban area images is challenging because of the heterogeneous nature of the urban landscape. This means that each pixel represents a mixture of classes with potentially highly variable spectral values. Land cover classification approaches using ancillary data, such as knowledge based or expert systems, have been shown to improve the classification accuracy in urban areas, particularly with medium or low-resolution imagery. This is because information other than the spectral signatures is used to assign pixels to classes. Defining rules is challenging and acquiring appropriate ancillary data may not always be possible. The goal of this study is to compare the results of three approaches to classify urban land cover with medium resolution data with and without ancillary information. We compare discriminant analysis, Markov random fields, and an expert system. Furthermore, we explore whether including spatial weights improves classification accuracy of the discriminant model. Discriminant analysis is a statistical technique used to predict group membership for a pixel based on the linear combination of independent variables; adding spatial weights includes a weighted value for neighbouring pixels. Markov random fields represent spatial dependencies through conditional relationships defined using Markov principles. Strict per-pixel statistical analysis, by contrast, does not consider the spatial dependencies among neighbouring pixels. Our study showed that approaches using ancillary data continued to outperform strict spectral classifiers but that using a spatial weight improved the results. Furthermore, results demonstrate that when the discriminant analysis technique works well then the spatially weighted approach works better. However, when the discriminant analysis performs ineffectively, those poor results are magnified. This study suggests that spatial weights improve the
First- and second-order information in natural images: a filter-based approach to image statistics
NASA Astrophysics Data System (ADS)
Johnson, Aaron P.; Baker, Curtis L.
2004-06-01
Previous analyses of natural image statistics have dealt mainly with their Fourier power spectra. Here we explore image statistics by examining responses to biologically motivated filters that are spatially localized and respond to first-order (luminance-defined) and second-order (contrast- or texture-defined) characteristics. We compare the distribution of natural image responses across filter parameters for first- and second-order information. We find that second-order information in natural scenes shows the same self-similarity previously described for first-order information but has substantially less orientational anisotropy. The magnitudes of the two kinds of information, as well as their mutual unsigned correlation, are much stronger for particular combinations of filter parameters in natural images but not in unstructured fractal images having the same power spectra.
Continuous-Time Markov Chain–Based Flux Analysis in Metabolism
Ji, Ping
2014-01-01
Metabolic flux analysis (MFA), a key technology in bioinformatics, is an effective way of analyzing the entire metabolic system by measuring fluxes. Many existing MFA approaches are based on differential equations, which are mathematically complicated to solve, so simpler approaches are needed to investigate metabolism further. In this article, we applied the continuous-time Markov chain to MFA, called the MMFA approach, and transformed the MFA problem into a set of quadratic equations by analyzing the transition probability of each carbon atom in the entire metabolic system. Unlike the other methods, MMFA analyzes the metabolic model only through the transition probability. This approach is very generic and it could be applied to any metabolic system if all the reaction mechanisms in the system are known. The results of the MMFA approach were compared with several chemical reaction equilibrium constants from early experiments by taking the pentose phosphate pathway as an example. PMID:25089363
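A continuous-time Markov chain enters such analyses through its generator matrix Q: the transition-probability matrix over an interval t is P(t) = exp(Qt). The sketch below evaluates this via a truncated Taylor series on a toy two-state generator; it is a generic CTMC illustration, not the paper's carbon-atom model, and the series approach is only sensible for small, well-scaled matrices.

```python
def ctmc_transition(Q, t, terms=60):
    """P(t) = exp(Q t) via truncated Taylor series (small generators only)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        # term <- term * (Q t / k), accumulating the k-th Taylor term
        term = [[sum(term[i][m] * Q[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Two-state generator: leave state 0 at rate 2, leave state 1 at rate 1.
Q = [[-2.0, 2.0], [1.0, -1.0]]
P = ctmc_transition(Q, 2.0)
# Rows of P(t) approach the stationary distribution (1/3, 2/3) as t grows.
print([round(p, 4) for p in P[0]])
```

For this Q the eigenvalues are 0 and -3, so P(t)[0][0] = 1/3 + (2/3)e^{-3t} exactly, which the series reproduces; production code would use a scaling-and-squaring matrix exponential instead.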
Accelerating Monte Carlo Markov chains with proxy and error models
NASA Astrophysics Data System (ADS)
Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan
2015-12-01
In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
Cool walking: a new Markov chain Monte Carlo sampling method.
Brown, Scott; Head-Gordon, Teresa
2003-01-15
Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable to real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures. PMID:12483676
Markov Logic Networks in the Analysis of Genetic Data
Sakhanenko, Nikita A.
2010-01-01
Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249
Costa, O. L. V.; Dufour, F.
2011-06-15
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
NASA Astrophysics Data System (ADS)
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
Summary: Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and its assessment is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected for their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed-exponential amount model is selected as the best option among the unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is therefore used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear that DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
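The selected model structure is simple enough to sketch in code. Below is a hedged Python illustration of a first-order Markov occurrence model paired with a mixed-exponential amount model; the transition probabilities and mean depths used in the example call are invented placeholders, not calibrated West African values.

```python
import random

def simulate_rainfall(n_days, p_wd, p_ww, alpha, beta1, beta2, seed=0):
    """First-order Markov occurrence model with a mixed-exponential
    amount model.  p_wd = P(wet | previous day dry), p_ww = P(wet |
    previous day wet); wet-day amounts are drawn from the mixture
    alpha * Exp(mean=beta1) + (1 - alpha) * Exp(mean=beta2), in mm."""
    rng = random.Random(seed)
    wet = False
    series = []
    for _ in range(n_days):
        p_wet = p_ww if wet else p_wd
        wet = rng.random() < p_wet
        if wet:
            mean = beta1 if rng.random() < alpha else beta2
            series.append(rng.expovariate(1.0 / mean))
        else:
            series.append(0.0)
    return series

# illustrative (made-up) parameters: light showers mixed with storms
daily_mm = simulate_rainfall(365, p_wd=0.3, p_ww=0.7,
                             alpha=0.6, beta1=2.0, beta2=15.0)
```

Calibration would fit (p_wd, p_ww) from wet/dry day counts and the mixture parameters from observed wet-day depths, per station and season.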
Meissner, Anna M; Christiansen, Fredrik; Martinez, Emmanuelle; Pawley, Matthew D M; Orams, Mark B; Stockin, Karen A
2015-01-01
Common dolphins, Delphinus sp., are one of the marine mammal species tourism operations in New Zealand focus on. While effects of cetacean-watching activities have previously been examined in coastal regions in New Zealand, this study is the first to investigate effects of commercial tourism and recreational vessels on common dolphins in an open oceanic habitat. Observations from both an independent research vessel and aboard commercial tour vessels operating off the central and east coast Bay of Plenty, North Island, New Zealand were used to assess dolphin behaviour and record the level of compliance by permitted commercial tour operators and private recreational vessels with New Zealand regulations. Dolphin behaviour was assessed using two different approaches to Markov chain analysis in order to examine variation in the responses of dolphins to vessels. Results showed that, regardless of the variance in Markov methods, dolphin foraging behaviour was significantly altered by boat interactions. Dolphins spent less time foraging during interactions and took significantly longer to return to foraging once disrupted by vessel presence. This research raises concerns about the potential disruption to feeding, a biologically critical behaviour. This may be particularly important in an open oceanic habitat, where prey resources are typically widely dispersed and unpredictable in abundance. Furthermore, because tourism in this region focuses on common dolphins transiting between adjacent coastal locations, the potential for cumulative effects could exacerbate the local effects demonstrated in this study. While the overall level of compliance by commercial operators was relatively high, non-compliance was observed, with the time restriction and the limits on the number and speed of vessels interacting with dolphins not always being respected. Additionally, prohibited swimming with calves did occur. The effects shown in this study should be carefully considered within conservation management
NASA Astrophysics Data System (ADS)
Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang
2015-01-01
This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance-rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
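The acceptance-rejection step driven by a majorant kernel can be illustrated in miniature. The sketch below assumes equal-weight particles and a precomputed majorant rate, so it omits the paper's differentially-weighted coagulation-rule matrix and GPU parallelism; it only shows how an upper-bounding rate lets pairs be accepted without a double loop over all particle pairs.

```python
import random

def coagulation_step(volumes, kernel, majorant_rate, rng):
    """One acceptance-rejection coagulation event.  `majorant_rate`
    must upper-bound kernel(v_i, v_j) over all pairs; the paper
    obtains such a bound with a single loop using a weighted
    majorant kernel, while here it is simply passed in."""
    n = len(volumes)
    while True:
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        # accept the candidate pair with probability K(v_i, v_j) / K_hat
        if rng.random() < kernel(volumes[i], volumes[j]) / majorant_rate:
            break
    # merge j into i (a constant-number scheme would resample; this
    # simple sketch just removes particle j)
    volumes[i] += volumes[j]
    del volumes[j]
    return volumes
```

With a tight majorant the rejection rate stays low, which is what keeps the per-event cost roughly constant.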
Analysis of nonstationary signals and fields with the use of enclosed semi-Markov processes
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Lutsenko, V. I.; Masalov, S. A.; Pustovoit, V. I.
2013-11-01
This study investigates the possibility of describing, with enclosed semi-Markov processes, both the signals scattered by various physical objects, such as underlying surfaces of land and sea and segments of "clear sky," and processes of various physical natures, such as fluctuations of the refractive index of the troposphere and electromagnetic and acoustic radiation of lithospheric origin. This approach makes it possible to construct statistical models for a broad class of signals and processes. In some cases, statistics based on atomic functions and WA systems of the Kravchenko-Rvachev functions show the best results.
D. L. Kelly
2007-06-01
Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.
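A minimal stand-in for the WinBUGS analysis can be written directly. The sketch below uses a plain Metropolis sampler for a Poisson failure rate with a diffuse lognormal prior; the data (5 failures in 1000 hours), the prior, and the proposal scale are invented for illustration and are not from the examples in the paper.

```python
import math
import random

def posterior_samples(n_fail, t_hours, n_iter=20000, seed=2):
    """Metropolis sampler for a Poisson failure rate lambda with a
    diffuse lognormal prior (WinBUGS would use its own samplers;
    this is a from-scratch illustration)."""
    rng = random.Random(seed)

    def log_post(log_lam):
        lam = math.exp(log_lam)
        # Poisson likelihood (up to a constant) + lognormal(0, 5) prior
        return n_fail * log_lam - lam * t_hours - 0.5 * (log_lam / 5.0) ** 2

    x = math.log(max(n_fail, 1) / t_hours)   # start near the MLE
    out = []
    for i in range(n_iter):
        y = x + rng.gauss(0.0, 0.5)
        if math.log(rng.random() + 1e-300) < log_post(y) - log_post(x):
            x = y
        if i >= n_iter // 2:                 # discard burn-in
            out.append(math.exp(x))
    return out

samples = posterior_samples(n_fail=5, t_hours=1000.0)
post_mean = sum(samples) / len(samples)
```

Posterior predictive checks, as the abstract emphasizes, would then compare replicated failure counts drawn from these samples against the observed data.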
Multi-state Markov model for disability: A case of Malaysia Social Security (SOCSO)
NASA Astrophysics Data System (ADS)
Samsuddin, Shamshimah; Ismail, Noriszura
2016-06-01
Studies of SOCSO contributor outcomes such as disability are usually restricted to a single outcome. This study instead takes a multi-state Markov model approach to estimating the yearly transition probabilities of SOCSO contributors in Malaysia between four states: work, temporary disability, permanent disability and death, as functions of age, gender, year and disability category. Duration and past disability experience are ignored; that is, the model does not consider how or when someone arrived in a given state. These states represent the different health statuses of the workers.
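The yearly transition probabilities between the four states can be estimated by simple row-normalized counts. The sketch below is a hedged illustration with invented transition counts, and it ignores the age/gender/year covariates the study models.

```python
def transition_matrix(transitions, states):
    """Maximum-likelihood estimate of a discrete-time multi-state
    transition matrix from observed yearly (from, to) pairs; rows
    with no observations default to the identity (absorbing)."""
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    counts = [[0] * n for _ in range(n)]
    for frm, to in transitions:
        counts[idx[frm]][idx[to]] += 1
    P = []
    for r, row in enumerate(counts):
        total = sum(row)
        if total == 0:
            P.append([1.0 if c == r else 0.0 for c in range(n)])
        else:
            P.append([c / total for c in row])
    return P

states = ["work", "temporary", "permanent", "death"]
# invented counts for illustration only
obs = ([("work", "work")] * 90 + [("work", "temporary")] * 7 +
       [("work", "permanent")] * 2 + [("work", "death")] * 1 +
       [("temporary", "work")] * 6 + [("temporary", "temporary")] * 4)
P = transition_matrix(obs, states)
```

In practice one matrix would be estimated per covariate stratum (e.g. age band and gender), as in the study.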
Shen, Mouquan; Park, Ju H
2016-07-01
This paper addresses the H∞ filtering of continuous Markov jump linear systems with general transition probabilities and output quantization. S-procedure is employed to handle the adverse influence of the quantization and a new approach is developed to conquer the nonlinearity induced by uncertain and unknown transition probabilities. Then, sufficient conditions are presented to ensure the filtering error system to be stochastically stable with the prescribed performance requirement. Without specified structure imposed on introduced slack variables, a flexible filter design method is established in terms of linear matrix inequalities. The effectiveness of the proposed method is validated by a numerical example. PMID:27129765
A multi-level solution algorithm for steady-state Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham; Leutenegger, Scott T.
1993-01-01
A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
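For reference, the baseline such a multi-level scheme accelerates can be sketched in a few lines: plain power iteration for the stationary vector of a row-stochastic matrix. The 3-state chain here is a made-up example, not one of the paper's test problems, and the recursive coarsening itself is not shown.

```python
def stationary(P, tol=1e-12, max_iter=100000):
    """Power iteration for the stationary vector of a row-stochastic
    matrix P, i.e. the pi solving pi = pi P with sum(pi) = 1."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# invented 3-state chain for illustration
P = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]
pi = stationary(P)
```

Methods like Gauss-Seidel, SOR, and the multi-level algorithm all target the same fixed point; they differ in how fast the iteration contracts.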
A Markov chain technique for determining the acquisition behavior of a digital tracking loop
NASA Technical Reports Server (NTRS)
Chadwick, H. D.
1972-01-01
An iterative procedure is presented for determining the acquisition behavior of discrete or digital implementations of a tracking loop. The technique is based on the theory of Markov chains and provides the cumulative probability of acquisition in the loop as a function of time in the presence of noise and a given set of initial condition probabilities. A digital second-order tracking loop to be used in the Viking command receiver for continuous tracking of the command subcarrier phase was analyzed using this technique, and the results agree closely with experimental data.
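The iterative procedure reduces to repeated application of the transition matrix, with the locked state absorbing, so the cumulative acquisition probability is just the probability mass in that state after each step. The 3-state loop model below is invented for illustration; it is not the Viking receiver's actual transition structure.

```python
def acquisition_probability(P, p0, lock_state, n_steps):
    """Cumulative probability of acquisition (absorption in
    `lock_state`) after each step, starting from distribution p0."""
    n = len(P)
    p = list(p0)
    history = []
    for _ in range(n_steps):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
        history.append(p[lock_state])
    return history

# toy 3-state loop: 0 = far from lock, 1 = near lock, 2 = locked (absorbing)
P = [[0.6, 0.4, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]
curve = acquisition_probability(P, [1.0, 0.0, 0.0], lock_state=2, n_steps=50)
```

Noise and initial-condition uncertainty enter through the transition probabilities and p0 respectively, exactly as the abstract describes.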
Unsupervised Segmentation of Hidden Semi-Markov Non Stationary Chains
NASA Astrophysics Data System (ADS)
Lapuyade-Lahorgue, Jérôme; Pieczynski, Wojciech
2006-11-01
In the classical hidden Markov chain (HMC) model we have a hidden chain X, which is Markovian, and an observed chain Y. HMCs are widely used; however, in some situations they have to be replaced by the more general "hidden semi-Markov chains" (HSMCs), which are particular "triplet Markov chains" (TMCs) T = (X, U, Y), where the auxiliary chain U models the semi-Markovianity of X. Moreover, non-stationary classical HMCs can also be modeled by a stationary triplet Markov chain, with, as a consequence, the possibility of parameter estimation. The aim of this paper is to use both properties simultaneously. We consider a non-stationary HSMC and model it as a TMC T = (X, U1, U2, Y), where U1 models the semi-Markovianity and U2 models the non-stationarity. The TMC T being itself stationary, all parameters can be estimated by the general "Iterative Conditional Estimation" (ICE) method, which leads to unsupervised segmentation. We present some experiments showing the interest of the new model and related processing in the area of image segmentation.
An approach to segment lung pleura from CT data with high precision
NASA Astrophysics Data System (ADS)
Angelats, E.; Chaisaowong, K.; Knepper, A.; Kraus, T.; Aach, T.
2008-03-01
A new approach to segmenting pleurae from CT data with high precision is introduced. This approach is developed within the segmentation framework of an image analysis system that automatically detects pleural thickenings. The new technique for 3D segmentation of the lung pleura is based on supervised range-constrained thresholding and a Gibbs-Markov random field model. An initial segmentation is obtained from the 3D histogram by supervised range-constrained thresholding. 3D connected component labelling is then applied to find the thorax. In order to detect and remove the trachea and bronchi therein, the 3D histogram of connected pulmonary organs is modelled as a finite mixture of Gaussian distributions. Parameters are estimated using the Expectation-Maximization algorithm, which leads to the classification of that pulmonary region; as a consequence, the left and right lungs are separated. Finally, we apply a Gibbs-Markov random field model to the initial segmentation in order to achieve a highly accurate segmentation of the lung pleura. The Gibbs-Markov random field is combined with maximum a posteriori estimation to estimate optimal pleural contours. With these procedures, a new segmentation strategy is developed that improves the reliability and accuracy of the detection of pleural contours and achieves a better assessment performance for pleural thickenings.
NASA Astrophysics Data System (ADS)
Foreman, Richard J.; Emeis, Stefan; Canadillas, Beatriz
2015-02-01
A turbulence parametrization for wind speed in the stable boundary layer consisting of a single empirical parameter is proposed without the use of the eddy viscosity concept or turbulent kinetic energy equation. Instead, a drag-coefficient-type formulation as a function of the bulk Richardson number has been found to be able to reproduce observed stable boundary-layer wind speeds as effectively as a model based on the eddy viscosity approach. The advantage of this simpler approach is that the model can, in theory, be modified more easily for certain applications, such as the effects of large-scale wind parks on mesoscale meteorology.
Pimentel, Marco A F; Santos, Mauro D; Springer, David B; Clifford, Gari D
2015-08-01
Accurate heart beat detection in signals acquired from intensive care unit (ICU) patients is necessary for establishing both normality and detecting abnormal events. Detection is normally performed by analysing the electrocardiogram (ECG) signal, and alarms are triggered when parameters derived from this signal exceed preset or variable thresholds. However, due to noisy and missing data, these alarms are frequently deemed to be false positives, and therefore ignored by clinical staff. The fusion of features derived from other signals, such as the arterial blood pressure (ABP) or the photoplethysmogram (PPG), has the potential to reduce such false alarms. In order to leverage the highly correlated temporal nature of the physiological signals, a hidden semi-Markov model (HSMM) approach, which uses the intra- and inter-beat depolarization interval, was designed to detect heart beats in such data. Features based on the wavelet transform, signal gradient and signal quality indices were extracted from the ECG and ABP waveforms for use in the HSMM framework. The presented method achieved an overall score of 89.13% on the hidden/test data set provided by the Physionet/Computing in Cardiology Challenge 2014: Robust Detection of Heart Beats in Multimodal Data. PMID:26218536
NASA Astrophysics Data System (ADS)
Luk, B. L.; Liu, K. P.; Tong, F.; Man, K. F.
2010-05-01
The impact-acoustics method utilizes the information contained in the acoustic signals generated by tapping a structure with a small metal object. It offers a convenient and cost-efficient way to inspect tile-wall bonding integrity. However, surface irregularities cause abnormal multiple bounces in practical inspections, and the spectral characteristics of those bounces can easily be confused with the signals obtained from different bonding qualities. As a result, classic frequency-domain feature-based classification methods deteriorate. Another crucial difficulty posed by the implementation is the additive noise present in practical environments, which may also cause feature mismatch and false judgments. To solve these problems, the work described in this paper develops a robust inspection method that applies a model-based strategy utilizing wavelet-domain features with hidden Markov modeling. It derives a bonding integrity recognition approach with enhanced immunity to surface roughness as well as to environmental noise. With the help of specially designed artificial sample slabs, experiments have been carried out with impact-acoustic signals contaminated by real environmental noises acquired under practical inspection conditions. The results are compared with those of the classic method to demonstrate the effectiveness of the proposed approach.
ERIC Educational Resources Information Center
Goldingay, S.; Dieppe, P.; Mangan, M.; Marsden, D.
2014-01-01
This critical reflection is based on the belief that creative practitioners should be using their own well-established approaches to trouble dominant paradigms in health and care provision to both form and inform the future of healing provision and well-being creation. It describes work by a transdisciplinary team (drama and medicine) that is…
Multilayer Markov Random Field models for change detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane
2015-09-01
In this paper, we give a comparative study of three multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called the Multicue MRF, the Conditional Mixed Markov model, and the Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of the focused model family and set it against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class-comparison vs. direct approaches, the usage of training data, various targeted application fields and different ways of generating Ground Truth, meanwhile informing the reader of the roles in which multilayer MRFs can be efficiently applied. On the other hand, we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions when used as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
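The value-improvement technique the methodology relies on can be sketched directly. The toy 2-state, 2-action "rest/harvest" example below is invented for illustration, loosely echoing the mallard-management application, and is not the model from the paper.

```python
def value_iteration(P, R, gamma=0.95, tol=1e-10):
    """Value improvement for a finite, discounted MDP.
    P[a][s][t] = transition probability, R[a][s] = expected reward."""
    n = len(P[0])
    V = [0.0] * n
    while True:
        Q = [[R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(n))
              for s in range(n)] for a in range(len(P))]
        newV = [max(Q[a][s] for a in range(len(P))) for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, newV)) < tol:
            policy = [max(range(len(P)), key=lambda a: Q[a][s])
                      for s in range(n)]
            return newV, policy
        V = newV

# invented example: states 0 = low population, 1 = high population;
# actions 0 = rest, 1 = harvest
P = [
    [[0.5, 0.5], [0.1, 0.9]],   # rest: population tends to grow
    [[0.9, 0.1], [0.6, 0.4]],   # harvest: population tends to shrink
]
R = [[0.0, 0.0], [0.5, 2.0]]    # harvesting pays, more so when high
V, policy = value_iteration(P, R)
```

Here the optimal policy rests when the population is low and harvests when it is high, the kind of threshold structure such resource models typically produce.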
Davis, Marauo; Ramirez, Donald A; Hope-Weeks, Louisa J
2013-08-28
Three-dimensionally ordered hierarchically porous alumina, iron(III) oxide, yttria, and nickel oxide have been prepared through the hybridization of colloidal crystal-templating and a modified sol-gel method. Simply, highly ordered arrays of poly(methyl methacrylate) (PMMA) were infiltrated with a precursor solution of metal salt and epoxide. Calcination after solidification of the material removed the polymer template while forming the inverse replicas, simultaneously. These hierarchical structures possessing macropore windows and mesopore walls were characterized by powder X-ray diffraction (PXRD), thermogravimetric analysis (TGA), scanning electron microscopy (SEM), and N2 adsorption/desorption techniques to probe the structural integrity. It was revealed by PXRD that the prepared 3D frameworks were single-phase polycrystalline structures with grain sizes between 5 and 27 nm. The thermal stability as studied by TGA illustrates expected weight losses and full decomposition of the PMMA template. SEM reveals the bimodal, hierarchical macroporous frameworks with well-defined macropore windows and mesoporous walls. Gas sorption measurements of the ordered materials display surface areas as high as 93 m(2) g(-1), and average mesopore diameter up to 33 nm. Due to the versatility of this method, we expect these materials will be ideal candidates for applications in catalysis, adsorption, and separations. Furthermore, the implementation of this technology for production of three-dimensionally ordered macroporous materials can improve the cost and efficiency of metal oxide frameworks (MOFs) due to its high versatility and amenability to numerous systems. PMID:23926949
On mixing of Markov measures associated with b-bistochastic QSOs
NASA Astrophysics Data System (ADS)
Mukhamedov, Farrukh; Embong, Ahmad Fadillah
2016-06-01
The new majorization has an advantage over the classical one, since it can be defined as a partial order on sequences; we call it the b-order. This order is then used to establish the bistochasticity of nonlinear operators, restricted in this study to the simplest case, namely quadratic operators. The discussion in this paper is based on the bistochasticity of Quadratic Stochastic Operators (QSOs) with respect to the b-order; in short, such operators are called b-bistochastic QSOs. The main objectives of this paper are to construct non-homogeneous Markov measures associated with QSOs and to show that the measures associated with the classes of b-bistochastic QSOs satisfy the mixing property.
Unsupervised SAR images change detection with hidden Markov chains on a sliding window
NASA Astrophysics Data System (ADS)
Bouyahia, Zied; Benyoussef, Lamia; Derrode, Stéphane
2007-10-01
This work deals with unsupervised change detection in bi-date Synthetic Aperture Radar (SAR) images. Whatever the indicator of change used, e.g. log-ratio or Kullback-Leibler divergence, we have observed poor-quality change maps for some events when using the Hidden Markov Chain (HMC) model we focus on in this work. The main reason comes from the stationarity assumption involved in this model (and in most Markovian models, such as hidden Markov random fields), which cannot be justified in most observed scenes: changed areas are not necessarily stationary in the image. Beyond the few non-stationary Markov models proposed in the literature, the aim of this paper is to describe a pragmatic solution that tackles non-stationarity with a sliding-window strategy. In this algorithm, the criterion image is scanned pixel by pixel, and a classical HMC model is applied only to the neighboring pixels. By moving the window through the image, the process is able to produce a change map which can better exhibit non-stationary changes than a classical HMC applied directly to the whole criterion image. Special care is devoted to the estimation of the number of classes in each window, which can vary from one (no change) to three (positive change, negative change and no change), by using the corrected Akaike Information Criterion (AICc) suited to small samples. The quality of the proposed approach is assessed with speckle-simulated images into which simulated changes are introduced. The windowed strategy is also evaluated with a pair of RADARSAT images bracketing the Nyiragongo volcano eruption of January 2002. The available ground truth confirms the effectiveness of the proposed approach compared to a classical HMC-based strategy.
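The sliding-window idea can be sketched if one replaces the per-window HMC with something much cruder. The Python below classifies each pixel by the mean absolute log-ratio in its window; this keeps the local, non-stationary flavor of the strategy but drops the Markovian modeling and the AICc-based class-number selection, so it is only an illustration of the windowing, not of the paper's method.

```python
import math

def change_map(img1, img2, win=3, thresh=1.0):
    """Sliding-window change detection on the log-ratio criterion.
    Each pixel is labeled changed (1) when the mean absolute
    log-ratio over its win x win neighborhood exceeds `thresh`."""
    h, w = len(img1), len(img1[0])
    eps = 1e-6  # avoid log of zero on dark pixels
    crit = [[abs(math.log((img2[r][c] + eps) / (img1[r][c] + eps)))
             for c in range(w)] for r in range(h)]
    half = win // 2
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [crit[rr][cc]
                    for rr in range(max(0, r - half), min(h, r + half + 1))
                    for cc in range(max(0, c - half), min(w, c + half + 1))]
            out[r][c] = 1 if sum(vals) / len(vals) > thresh else 0
    return out
```

In the paper's version, each window instead gets its own small HMC segmentation, which regularizes the decision spatially.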
Hidden Markov Models for Fault Detection in Dynamic Systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
Continuous monitoring of complex dynamic systems is an increasingly important issue in diverse areas such as nuclear plant safety, production line reliability, and medical health monitoring systems. Recent advances in both sensor technology and computational capabilities have made on-line permanent monitoring much more feasible than it was in the past. In this paper it is shown that a pattern recognition system combined with a finite-state hidden Markov model provides a particularly useful method for modelling temporal context in continuous monitoring. The parameters of the Markov model are derived from gross failure statistics such as the mean time between failures. The model is validated on a real-world fault diagnosis problem and it is shown that Markov modelling in this context offers significant practical benefits.
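The idea of deriving Markov-model parameters from gross failure statistics can be sketched with a minimal two-state chain. This is an illustrative assumption, not the paper's actual model: the per-step failure probability is taken from a mean time between failures (MTBF) and the repair probability from an assumed mean time to repair (MTTR), with all numbers made up.

```python
import numpy as np

# Two-state sketch: state 0 = healthy, state 1 = faulty.
# Transition probabilities derived from gross failure statistics
# (MTBF, MTTR); the numbers are illustrative assumptions.
dt = 1.0        # monitoring interval, hours
mtbf = 1000.0   # mean time between failures, hours
mttr = 10.0     # mean time to repair, hours

p_fail = dt / mtbf      # P(healthy -> faulty) per step
p_repair = dt / mttr    # P(faulty -> healthy) per step

T = np.array([[1 - p_fail, p_fail],
              [p_repair, 1 - p_repair]])

# Stationary distribution = left eigenvector of T for eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
print(pi)   # long-run fraction of time spent healthy vs. faulty
```

For this chain the stationary distribution is simply (MTTR, MTBF-step)-weighted: pi = [p_repair, p_fail] / (p_fail + p_repair), so the eigenvector computation can be checked against the closed form.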
Latent Variable Model for Learning in Pairwise Markov Networks
Amizadeh, Saeed; Hauskrecht, Milos
2011-01-01
Pairwise Markov Networks (PMN) are an important class of Markov networks which, due to their simplicity, are widely used in many applications such as image analysis, bioinformatics, sensor networks, etc. However, learning Markov networks from data is a challenging task: there are many possible structures to consider, and each structure comes with its own parameters, making it easy to overfit the model with limited data. To deal with this problem, recent learning methods build on L1 regularization to express a bias towards sparse network structures. In this paper, we propose a new and more flexible framework that lets us bias the structure; it can, for example, encode a preference for networks with certain local substructures which, as a whole, exhibit some special global structure. We experiment with and show the benefit of our framework on two types of problems: learning modular networks and learning traffic network models. PMID:22228193
Markov sequential pattern recognition : dependency and the unknown class.
Malone, Kevin Thomas; Haschke, Greg Benjamin; Koch, Mark William
2004-10-01
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
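The classical SPRT underlying the abstract above can be sketched for independent Bernoulli observations; the hypotheses H0: p = p0 versus H1: p = p1, the error rates, and the data are all illustrative assumptions (the paper's Markov SPRT additionally corrects for dependencies, which this sketch omits).

```python
import math
import random

# Wald's SPRT for two simple Bernoulli hypotheses (illustrative parameters).
# Accumulates the log-likelihood ratio until it crosses a decision threshold.
def sprt(observations, p0=0.2, p1=0.6, alpha=0.05, beta=0.05):
    a = math.log(beta / (1 - alpha))    # lower threshold: accept H0
    b = math.log((1 - beta) / alpha)    # upper threshold: accept H1
    llr = 0.0
    for n, x in enumerate(observations, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "undecided", len(observations)

random.seed(0)
data = [1 if random.random() < 0.6 else 0 for _ in range(200)]
print(sprt(data))   # decision and number of observations used
```

Note how the expected sample size is small: a run of clearly H1-like observations terminates the test after only a handful of samples.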
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
Gama, S.; de Campos, A.; Coelho, A. A.; Alves, C. S.; Ren, Y.; Garcia, F.; Brown, D. E.; da Silva, L. M.; Magnus, A.; Carvalho, G.; Gandra, G. C.; dos Santos, A. O.; Cardoso, L. P.; von Ranke, P. J.; X-Ray Science Division; Univ. Federal de Sao Paulo; Univ. Estadual de Campinas; Univ. Estadual de Maringa; Lab. Nacional de Luz Sincrotron; Northern Univ.; Univ. do Estado do Rio de Janeiro
2009-01-01
First order phase transitions for materials with exotic properties are usually believed to happen at fixed values of the intensive parameters (such as pressure, temperature, etc.) characterizing their properties. It is also considered that the extensive properties of the phases (such as entropy, volume, etc.) have discontinuities at the transition point, but that for each phase the intensive parameters remain constant during the transition. These features are a hallmark for systems described by two thermodynamic degrees of freedom. In this work it is shown that first order phase transitions must be understood in the broader framework of thermodynamic systems described by three or more degrees of freedom. This means that the transitions occur along intervals of the intensive parameters, that the properties of the phases coexisting during the transition may show peculiar behaviors characteristic of each system, and that a generalized Clausius-Clapeyron equation must be obeyed. These features for the magnetic case are confirmed, and it is shown that experimental calorimetric data agree well with the magnetic Clausius-Clapeyron equation for MnAs. An estimate for the point in the temperature-field plane where the first order magnetic transition turns into a second order one (the critical parameters) is obtained for the MnAs and Gd5Ge2Si2 compounds. Anomalous behavior of the volumes of the coexisting phases during the magnetic first order transition is measured, and it is shown that the anomalies for the individual phases are hidden in the behavior of global properties such as the volume.
Markov Chain Monte-Carlo Orbit Computation for Binary Asteroids
NASA Astrophysics Data System (ADS)
Oszkiewicz, D.; Hestroffer, D.; Pedro, David C.
2013-11-01
We present a novel method of orbit computation for resolved binary asteroids. The method combines the Thiele, Innes, van den Bos method with a Markov chain Monte Carlo technique (MCMC). The classical Thiele-van den Bos method has been commonly used in multiple applications before, including orbits of binary stars and asteroids; the novel method can likewise be used for the analysis of binary stars and of other gravitationally bound binaries. The method requires a minimum of three observations (observing times and relative positions - Cartesian or polar) made at the same tangent plane - or close enough for enabling a first approximation. Further, the use of the MCMC technique for statistical inversion yields the whole bundle of possible orbits, including the one that is most probable. In this new method, we make use of the Metropolis-Hastings algorithm to sample the parameters of the Thiele-van den Bos method, that is, the orbital period (or equivalently the double areal constant) together with three randomly selected observations from the same tangent plane. The observations are sampled within their observational errors (with an assumed distribution) and the orbital period is the only parameter that has to be tuned during the sampling procedure. We run multiple chains to ensure that the parameter phase space is well sampled and that the solutions have converged. After the sampling is completed we perform convergence diagnostics. The main advantage of the novel approach is that the orbital period does not need to be known in advance and the entire region of possible orbital solutions is sampled, resulting in a maximum likelihood solution and the confidence regions. We have tested the new method on several known binary asteroids and conclude a good agreement with the results obtained with other methods. The new method has been implemented into the Gaia DPAC data reduction pipeline and can be used to confirm the binary nature of a suspected system, and for deriving
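The Metropolis-Hastings sampling described above can be sketched generically for a single tunable parameter. Everything here is an illustrative assumption: the Gaussian "posterior" N(100, 5²) stands in for the true orbital-period likelihood, and the start value and step size are made up.

```python
import math
import random

# Generic Metropolis-Hastings sketch: sample one parameter (a stand-in for
# the orbital period) from an unnormalized posterior. The toy posterior
# N(100, 5^2), start value and proposal step are illustrative assumptions.
def log_post(period):
    return -0.5 * ((period - 100.0) / 5.0) ** 2

def metropolis(n_steps, start=90.0, step=2.0, seed=1):
    rng = random.Random(seed)
    chain, p, lp = [], start, log_post(start)
    for _ in range(n_steps):
        prop = p + rng.gauss(0.0, step)            # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            p, lp = prop, lp_prop
        chain.append(p)
    return chain

chain = metropolis(20000)
burn = chain[5000:]                 # discard burn-in before summarizing
print(sum(burn) / len(burn))        # posterior-mean estimate, near 100
```

Running several chains from different starts, as the abstract describes, is the usual way to check convergence before trusting the sampled bundle of solutions.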
Modeling sediment transport as a spatio-temporal Markov process.
NASA Astrophysics Data System (ADS)
Heyman, Joris; Ancey, Christophe
2014-05-01
Despite a century of research on sediment transport by bedload occurring in rivers, its constitutive laws remain largely unknown. The proof is that our ability to predict mid- to long-term transported volumes within a reasonable confidence interval is almost nil. The intrinsically fluctuating nature of bedload transport may be one of the most important reasons why classical approaches fail. A microscopic probabilistic framework has the advantage of taking these fluctuations into account at the particle scale, in order to understand their effect on macroscopic variables such as the sediment flux. In this framework, bedload transport is seen as the random motion of particles (sand, gravel, pebbles...) over a two-dimensional surface (the river bed). The number of particles in motion, as well as their velocities, are random variables. In this talk, we show how a simple birth-death Markov model governing particle motion on a regular lattice accurately reproduces the spatio-temporal correlations observed at the macroscopic level. Entrainment, deposition and transport of particles by the turbulent fluid (air or water) are supposed to be independent and memoryless processes that modify the number of particles in motion. By means of the Poisson representation, we obtain a Fokker-Planck equation that is exactly equivalent to the master equation and thus valid for all cell sizes. The analysis shows that the number of moving particles evolves locally far from thermodynamic equilibrium. Several analytical results are presented and compared to experimental data. The index of dispersion (or variance-over-mean ratio) is proved to grow from unity at small scales to larger values at larger scales, confirming the non-Poissonian behavior of bedload transport. We also study the one- and two-dimensional K-function, which gives the average number of moving particles located in a ball centered at a particle centroid, as a function of the ball's radius.
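A birth-death Markov chain of the kind described above can be simulated exactly with the Gillespie algorithm. This is a minimal sketch under stated assumptions: entrainment at a constant rate lam, deposition at rate mu per moving particle (a linear birth-death, i.e. M/M/∞-type, chain), with illustrative rate values rather than fitted ones.

```python
import random

# Gillespie-style simulation of a linear birth-death chain as a toy model of
# the number of moving particles in one lattice cell: entrainment (birth) at
# constant rate lam, deposition (death) at rate mu per moving particle.
# The rates are illustrative assumptions.
def time_average_n(lam=2.0, mu=0.5, t_end=2000.0, seed=3):
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        total_rate = lam + mu * n
        dt = rng.expovariate(total_rate)   # exponential waiting time
        area += n * dt                     # time-weighted occupancy
        t += dt
        if rng.random() < lam / total_rate:
            n += 1                         # entrainment event
        else:
            n -= 1                         # deposition event
    return area / t

mean_n = time_average_n()
print(mean_n)   # stationary mean of this chain is lam/mu = 4
```

For this linear chain the stationary distribution is Poisson with mean lam/mu, so the index of dispersion at the single-cell scale is unity, consistent with the small-scale limit quoted in the abstract.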
Group association test using a hidden Markov model.
Cheng, Yichen; Dai, James Y; Kooperberg, Charles
2016-04-01
In the genomic era, group association tests are of great interest. Due to the overwhelming number of individual genomic features, the power of testing for association of a single genomic feature at a time is often very small, as are the effect sizes for most features. Many methods have been proposed to test association of a trait with a group of features within a functional unit as a whole, e.g. all SNPs in a gene, yet few of these methods account for the fact that generally a substantial proportion of the features are not associated with the trait. In this paper, we propose to model the association for each feature in the group as a mixture of features with no association and features with non-zero associations, to explicitly account for the possibility that a fraction of features may not be associated with the trait while other features in the group are. The feature-level associations are first estimated by generalized linear models; the sequence of these estimated associations is then modeled by a hidden Markov chain. To test for global association, we develop a modified likelihood ratio test based on a log-likelihood function that ignores higher order dependency, plus a penalty term. We derive the asymptotic distribution of the likelihood ratio test under the null hypothesis. Furthermore, we obtain the posterior probability of association for each feature, which provides evidence of feature-level association and is useful for potential follow-up studies. In simulations and a data application, we show that our proposed method performs well when compared with existing group association tests, especially when there are only a few features associated with the outcome. PMID:26420797
Markov chain analysis of succession in a rocky subtidal community.
Hill, M Forrest; Witman, Jon D; Caswell, Hal
2004-08-01
We present a Markov chain model of succession in a rocky subtidal community based on a long-term (1986-1994) study of subtidal invertebrates (14 species) at Ammen Rock Pinnacle in the Gulf of Maine. The model describes successional processes (disturbance, colonization, species persistence, and replacement), the equilibrium (stationary) community, and the rate of convergence. We described successional dynamics by species turnover rates, recurrence times, and the entropy of the transition matrix. We used perturbation analysis to quantify the response of diversity to successional rates and species removals. The equilibrium community was dominated by an encrusting sponge (Hymedesmia) and a bryozoan (Crisia eburnea). The equilibrium structure explained 98% of the variance in observed species frequencies. Dominant species have low probabilities of disturbance and high rates of colonization and persistence. On average, species turn over every 3.4 years. Recurrence times varied among species (7-268 years); rare species had the longest recurrence times. The community converged to equilibrium quickly (9.5 years), as measured by Dobrushin's coefficient of ergodicity. The largest changes in evenness would result from removal of the dominant sponge Hymedesmia. Subdominant species appear to increase evenness by slowing the dominance of Hymedesmia. Comparison of the subtidal community with intertidal and coral reef communities revealed that disturbance rates are an order of magnitude higher in coral reef than in rocky intertidal and subtidal communities. Colonization rates and turnover times, however, are lowest and longest in coral reefs, highest and shortest in intertidal communities, and intermediate in subtidal communities. PMID:15278851
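The core computations in such a successional model, the stationary (equilibrium) community and the convergence rate measured by Dobrushin's coefficient of ergodicity, can be sketched on a small transition matrix. The 3-state matrix below is an illustrative stand-in, not the 14-species Gulf of Maine data.

```python
import numpy as np

# Illustrative column-stochastic successional matrix: entry [i, j] is the
# probability that a point in state j is in state i one step later.
# Values are made up for the sketch, not the paper's estimates.
A = np.array([[0.80, 0.10, 0.20],
              [0.15, 0.85, 0.10],
              [0.05, 0.05, 0.70]])

# Stationary community: the eigenvector of A for eigenvalue 1, normalized.
evals, evecs = np.linalg.eig(A)
w = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
w = w / w.sum()

# Dobrushin's coefficient of ergodicity: half the maximum L1 distance
# between columns; values below 1 guarantee geometric convergence.
alpha = 0.5 * max(np.abs(A[:, i] - A[:, j]).sum()
                  for i in range(3) for j in range(3))
print(w, alpha)
```

A perturbation analysis like the paper's would then differentiate quantities such as w or a diversity index with respect to individual transition probabilities.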
A Markov model for NASA's Ground Communications Facility
NASA Technical Reports Server (NTRS)
Adeyemi, O.
1974-01-01
A 'natural' way of constructing finite-state Markov chains (FSMC) is presented for those noise burst channels that can be modeled by them. In particular, a five-state Markov chain is given as a model of errors occurring at the Ground Communications Facility (GCF). A maximum likelihood procedure applicable to any FSMC is developed for estimating all the model parameters starting from the data of error runs. A few of the statistics important for estimating the performance of error control strategies on the channel are provided.
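The maximum likelihood estimation of FSMC parameters from observed data reduces, in the simplest fully observed case, to counting transitions and normalizing rows. The sketch below shows this standard estimator on a toy state sequence; the paper's procedure, which starts from error-run statistics, is more elaborate.

```python
from collections import Counter

# Maximum-likelihood estimate of finite-state Markov chain transition
# probabilities from a fully observed state sequence: count transitions,
# then normalize each row. The sequence is a toy example.
def estimate_transitions(seq, n_states):
    counts = Counter(zip(seq, seq[1:]))     # (from, to) transition counts
    P = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        row_total = sum(counts[(i, j)] for j in range(n_states))
        if row_total:
            for j in range(n_states):
                P[i][j] = counts[(i, j)] / row_total
    return P

seq = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]   # e.g. 0 = error-free, 1 = error burst
P = estimate_transitions(seq, 2)
print(P)
```

Each row of the estimate is the empirical conditional distribution of the next state given the current one, which is exactly the ML estimator for a first-order chain.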
On a Markov chain roulette-type game
NASA Astrophysics Data System (ADS)
El-Shehawey, M. A.; El-Shreef, Gh A.
2009-05-01
A Markov chain on non-negative integers which arises in a roulette-type game is discussed. The transition probabilities are p_{0,1} = ρ, p_{N,j} = δ_{N,j}, p_{i,i+W} = q and p_{i,i-1} = p = 1 - q, with 1 ≤ W < N, 0 ≤ ρ ≤ 1, N - W < j ≤ N and i = 1, 2, ..., N - W. Using formulae for the determinant of a partitioned matrix, a closed-form expression for the solution of the Markov chain roulette-type game is deduced. The present analysis is supported by two mathematical models, from tumor growth and from war with bargaining.
Markov bases and toric ideals for some contingency tables
NASA Astrophysics Data System (ADS)
Mohammed, N. F.; Rakhimov, I. S.; Shitan, M.
2016-06-01
The main objective of this work is to study Markov bases and toric ideals for v × v × p/v contingency tables with fixed two-dimensional marginals, where p is a multiple of v and greater than or equal to 2v. Moreover, a connected bipartite graph is constructed using the elements of the Markov basis. This work extends results obtained by Hadi and Salman in 2014.
NASA Astrophysics Data System (ADS)
Terashima, Hiroshi; Kawai, Soshi; Koshi, Mitsuo
2011-11-01
We present a formulation for high-order simulations of compressible multicomponent flows using a sixth-order compact differencing scheme and a localized artificial diffusivity. The formulation is designed to satisfy both pressure and temperature equilibrium at fluid interfaces by introducing two additional equations to the Euler equations. In order to deal with sharp initial conditions in density, a localized artificial diffusivity term is introduced into the mass conservation equation. Several one-dimensional problems, such as advection of contact and material interfaces and a shock tube problem, demonstrate that the present method maintains the pressure and temperature equilibrium and also satisfies the mass conservation property. The localized artificial diffusivity for the mass conservation equation makes it possible to start computations even with a severe one-point jump condition, effectively reducing numerical wiggles at the fluid interfaces. Comparisons with a conventional fully conservative formulation demonstrate the superiority of the present method in preventing spurious pressure/velocity/temperature oscillations at the fluid interfaces. Two-dimensional problems such as the Richtmyer-Meshkov instability demonstrate its multidimensional applicability.
Analysis and design of a second-order digital phase-locked loop
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1979-01-01
A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
NASA Astrophysics Data System (ADS)
Zimmerling, Jörn; Wei, Lei; Urbach, Paul; Remis, Rob
2016-03-01
We present a Krylov model-order reduction approach to efficiently compute the spontaneous decay (SD) rate of arbitrarily shaped 3D nanosized resonators. We exploit the symmetry of Maxwell's equations to efficiently construct so-called reduced-order models that approximate the SD rate of a quantum emitter embedded in a resonating nanostructure. The models allow for frequency sweeps, meaning that a single model provides SD rate approximations over an entire spectral interval of interest. Field approximations and dominant quasinormal modes can be determined at low cost as well.
2014-01-01
Background The possibility of applying a novel chemometric approach which could allow the differentiation of marble samples, all from different quarries located in the Mediterranean basin and frequently used in ancient times for artistic purposes, was investigated. By suggesting tentative attributions or allowing unlikely ones to be ruled out, this kind of differentiation could, indeed, be of valuable support to restorers and other professionals in the field of cultural heritage. Experimental data were obtained only using thermal analytical techniques: Thermogravimetry (TG), Derivative Thermogravimetry (DTG) and Differential Thermal Analysis (DTA). Results The extraction of kinetic parameters from the curves obtained using these thermal analytical techniques allowed Activation Energy values to be evaluated together with the logarithm of the Arrhenius pre-exponential factor of the main TG-DTG process. The main data thus obtained, after subsequent chemometric evaluation (using Principal Components Analysis), have already proved useful in the identification of the original quarry of a small number of archaeological marble finds. Conclusion One of the most evident advantages of the thermoanalytical-chemometric approach adopted seems to be that it allows the certain identification of an unknown find composed of a marble known to be present among the reference samples considered, that is, contained in the reference file. On the other hand, with equal certainty, it prevents the occurrence of erroneous or highly uncertain identifications if the find being tested does not belong to the reference file considered. PMID:24982691
Rampino, Sergio
2016-07-14
Potential energy surfaces (PESs) for use in dynamics calculations of few-atom reactive systems are commonly modeled as functional forms fitting or interpolating a set of ab initio energies computed at many nuclear configurations. An automated procedure is here proposed for optimal configuration-space sampling in generating this set of energies as part of the grid-empowered molecular simulator GEMS (Laganà et al., J. Grid Comput. 2010, 8, 571-586). The scheme is based on a space-reduced formulation of the so-called bond-order variables allowing for a balanced representation of the attractive and repulsive regions of a diatom configuration space. Uniform grids based on space-reduced bond-order variables are proven to outperform those defined on the more conventional bond-length variables in converging the fitted/interpolated PES to the computed ab initio one with increasing number of grid points. Benchmarks are performed on the one- and three-dimensional prototype systems H2 and H3 using both a local-interpolation (modified Shepard) and a global-fitting (Aguado-Paniagua) scheme. PMID:26674105
NASA Astrophysics Data System (ADS)
Mastrano, A.; Suvorov, A. G.; Melatos, A.
2015-03-01
A recipe is presented to construct an analytic, self-consistent model of a non-barotropic neutron star with a poloidal-toroidal field of arbitrary multipole order, whose toroidal component is confined in a torus around the neutral curve inside the star, as in numerical simulations of twisted tori. The recipe takes advantage of magnetic field aligned coordinates to ensure continuity of the mass density at the surface of the torus. The density perturbation and ellipticity of such a star are calculated in general and for the special case of a mixed dipole-quadrupole field as a worked example. The calculation generalizes previous work restricted to dipolar, poloidal-toroidal and multipolar, poloidal-only configurations. The results are applied, as an example, to magnetars whose observations (e.g. spectral features and pulse modulation) indicate that the internal magnetic fields may be at least one order of magnitude stronger than the external fields, as inferred from their spin-downs, and are not purely dipolar.
NASA Astrophysics Data System (ADS)
Liu, Yanfang; Shan, Jinjun; Gabbert, Ulrich; Qi, Naiming
2013-11-01
A physics-based fractional-order Maxwell resistive capacitor (FOMRC) model is proposed to characterize nonlinear hysteresis and creep behaviors of a piezoelectric actuator (PEA). The Maxwell resistive capacitor (MRC) model is interpreted physically in the electric domain for PEAs. Based on this interpretation, the MRC model is modified to directly describe the relationship between the input voltage and the output displacement of a PEA. Then a procedure is developed to identify the parameters of the MRC model. This procedure can be carried out using only the measured input and output of a PEA. A fractional-order dynamics is integrated into the MRC model to describe the effect of creep, as well as the detachment of hysteresis loops caused by creep. Moreover, the inverse FOMRC model is constructed to compensate for hysteresis and creep in an open-loop positioning application of PEAs. Simulation and experiments are carried out to validate the proposed model. The PEA compensated by the inverse FOMRC model shows an excellent linear behavior.
ERIC Educational Resources Information Center
Lichtenberg, James W.; Hummel, Thomas J.
This investigation tested the hypothesis that the probabilistic structure underlying psychotherapy interviews is Markovian. The "goodness of fit" of a first-order Markov chain model to actual therapy interviews was assessed using a chi-squared test of homogeneity, and by generating by Monte Carlo methods empirical sampling distributions of selected…
Face Association for Videos Using Conditional Random Fields and Max-Margin Markov Networks.
Du, Ming; Chellappa, Rama
2016-09-01
We address the video-based face association problem, in which one attempts to extract the face tracks of multiple subjects while maintaining label consistency. Traditional tracking algorithms have difficulty in handling this task, especially when challenging nuisance factors like motion blur, low resolution or significant camera motions are present. We demonstrate that contextual features, in addition to face appearance itself, play an important role in this case. We propose principled methods to combine multiple features using Conditional Random Fields and Max-Margin Markov networks to infer labels for the detected faces. Different from many existing approaches, our algorithms work in online mode and hence have a wider range of applications. We address issues such as parameter learning, inference and handling false positives/negatives that arise in the proposed approach. Finally, we evaluate our approach on several public databases. PMID:26552075
Zhong, Xiangnan; He, Haibo; Zhang, Huaguang; Wang, Zhanshan
2014-12-01
In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also develop detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the proposed performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies, one linear case, one nonlinear case, and one single link robot arm case, are used to validate the performance of the proposed optimal control method. PMID:25420238
HMM-DM: identifying differentially methylated regions using a hidden Markov model.
Yu, Xiaoqing; Sun, Shuying
2016-03-01
DNA methylation is an epigenetic modification involved in organism development and cellular differentiation. Identifying differential methylations can help to study genomic regions associated with diseases. Differential methylation studies on single-CG resolution have become possible with the bisulfite sequencing (BS) technology. However, there is still a lack of efficient statistical methods for identifying differentially methylated (DM) regions in BS data. We have developed a new approach named HMM-DM to detect DM regions between two biological conditions using BS data. This new approach first uses a hidden Markov model (HMM) to identify DM CG sites accounting for spatial correlation across CG sites and variation across samples, and then summarizes identified sites into regions. We demonstrate through a simulation study that our approach has a superior performance compared to BSmooth. We also illustrate the application of HMM-DM using a real breast cancer dataset. PMID:26887041
Zhang, Yu-Chen; Zhang, Shao-Wu; Liu, Lian; Liu, Hui; Zhang, Lin; Cui, Xiaodong; Huang, Yufei; Meng, Jia
2015-01-01
With the development of new sequencing technology, the entire N6-methyl-adenosine (m6A) RNA methylome can now be unbiasedly profiled with the methylated RNA immunoprecipitation sequencing technique (MeRIP-Seq), making it possible to detect differential methylation states of RNA between two conditions, for example, between normal and cancerous tissue. However, as an affinity-based method, MeRIP-Seq has not yet provided base-pair resolution; that is, a single methylation site determined from MeRIP-Seq data can in practice contain multiple RNA methylation residues, some of which can be regulated by different enzymes and thus differentially methylated between two conditions. Since existing peak-based methods could not effectively differentiate multiple methylation residues located within a single methylation site, we propose a hidden Markov model (HMM) based approach to address this issue. Specifically, the detected RNA methylation site is further divided into multiple adjacent small bins and then scanned with higher resolution using a hidden Markov model to model the dependency between spatially adjacent bins for improved accuracy. We tested the proposed algorithm on both simulated data and real data. Results suggest that the proposed algorithm clearly outperforms the existing peak-based approach on simulated systems and detects differential methylation regions with higher statistical significance on the real dataset. PMID:26301253
Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model
NASA Astrophysics Data System (ADS)
Novkaniza, F.; Ivana, Aldila, D.
2016-04-01
A mathematical model of respiratory disease spread with a Discrete Time Markov Chain (DTMC) and a deterministic approach, for a constant total population size, are analyzed and compared in this article. Intervention by medical treatment and the use of medical masks are included in the model as constant parameters for controlling influenza spread. Equilibrium points and the basic reproductive ratio as endemic criteria, and their level sets as functions of several variables, are given analytically and numerically as results of the deterministic model analysis. Assuming the total human population is constant, as in the deterministic model, the number of infected people is also analyzed with the DTMC model. As Δt → 0, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the probability of an outbreak via the gambler's ruin problem is presented. We find that no matter the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infections always tends to 0 as t → ∞. Some numerical simulations comparing the deterministic and DTMC approaches are given to provide a better interpretation and understanding of the models' results.
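The gambler's-ruin approximation to the outbreak probability mentioned above can be sketched with the standard branching-process result: starting from i infected individuals, the probability that the infection dies out is min(1, (1/ℛ0)^i). This is a textbook formula given here for illustration, not necessarily the exact expression derived in the paper.

```python
# Gambler's-ruin / branching-process approximation for epidemic extinction:
# starting from i infected individuals, P(extinction) = min(1, (1/R0)**i).
# A standard DTMC result, sketched for illustration.
def extinction_probability(R0, i):
    if R0 <= 1.0:
        return 1.0   # subcritical or critical: extinction is certain
    return (1.0 / R0) ** i

# Outbreak probability is the complement of extinction probability.
for R0 in (0.8, 1.5, 3.0):
    probs = [round(extinction_probability(R0, i), 3) for i in (1, 2, 5)]
    print(R0, probs)
```

The formula makes the comparison in the abstract concrete: for ℛ0 ≤ 1 extinction is certain, while for ℛ0 > 1 a major outbreak still fails to occur with probability (1/ℛ0)^i, shrinking quickly as the initial number of infected people grows.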
Detection of new genes in a bacterial genome using Markov models for three gene classes.
Borodovsky, M; McIninch, J D; Koonin, E V; Rudd, K E; Médigue, C; Danchin, A
1995-09-11
We further investigated the statistical features of the three classes of Escherichia coli genes that have been previously delineated by factorial correspondence analysis and dynamic clustering methods. A phased Markov model for a nucleotide sequence of each gene class was developed and employed for gene prediction using the GeneMark program. The protein-coding region prediction accuracy was determined for class-specific Markov models of different orders when the programs implementing these models were applied to gene sequences from the same or other classes. It is shown that at least two training sets and two program versions derived for different classes of E. coli genes are necessary in order to achieve a high accuracy of coding region prediction for uncharacterized sequences. Some annotated E. coli genes from Class I and Class III are shown to be spurious, whereas many open reading frames (ORFs) that have not been annotated in GenBank as genes are predicted to encode proteins. The amino acid sequences of the putative products of these ORFs initially did not show similarity to already known proteins. However, conserved regions have been identified in several of them by screening the latest entries in protein sequence databases and applying methods for motif search, while some others of these new genes have been identified in independent experiments. PMID:7567469
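The class-specific Markov scoring idea can be sketched as follows: train one Markov model per gene class from example sequences, then classify a new sequence by comparing log-likelihoods. This is a simplified first-order sketch in the spirit of GeneMark, not its actual (phased, higher-order) implementation; the training sequences are toy data.

```python
import math
from collections import Counter

# Train a first-order Markov model over the DNA alphabet from example
# sequences and return a log-likelihood scorer (add-one smoothing over
# the 4 nucleotides). A simplified sketch of class-specific modelling.
def train(seqs, k=1, alpha=1.0):
    trans, ctx = Counter(), Counter()
    for s in seqs:
        for i in range(len(s) - k):
            trans[(s[i:i + k], s[i + k])] += 1
            ctx[s[i:i + k]] += 1

    def log_prob(seq):
        lp = 0.0
        for i in range(len(seq) - k):
            c, nxt = seq[i:i + k], seq[i + k]
            lp += math.log((trans[(c, nxt)] + alpha) / (ctx[c] + 4 * alpha))
        return lp

    return log_prob

# Toy "coding" (GC-rich) and "non-coding" (AT-rich) training sets.
coding = train(["ATGGCGGCGATGGCG", "ATGGCGCTGGCGGCG"])
noncod = train(["ATATATTTATATAAT", "TTATATATATTATAA"])

test_seq = "ATGGCGGCGCTG"
print(coding(test_seq) - noncod(test_seq))   # positive favors the coding model
```

A real system would train one such model per gene class (and per codon phase), then assign an uncharacterized ORF to the class whose model gives the highest likelihood.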
NASA Astrophysics Data System (ADS)
Zhong, Fuli; Li, Hui; Zhong, Shouming; Zhong, Qishui; Yin, Chun
2015-07-01
A state of charge (SOC) estimation approach based on an adaptive sliding mode observer (SMO) and a fractional order equivalent circuit model (FOECM) for lithium-ion batteries is proposed in this paper. To design the adaptive SMO for SOC estimation, the state equations based on a FOECM of the battery are derived. A new self-adjusting strategy for the observer gains is presented that tunes the observer during estimation, which helps to reduce chattering and convergence time. Furthermore, a continuous and smooth hyperbolic tangent function is applied to balance chattering suppression against disturbance rejection. Finally, a battery simulation model is established to test the SOC estimation performance of the designed SMOs, and the results show the proposed approach is feasible and effective.
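The tanh-smoothed sliding mode idea can be illustrated on a toy first-order plant (a stand-in for the fractional-order battery model, which is substantially more involved; all gains and dynamics here are assumptions for the sketch):

```python
import math

def simulate_smo(a=1.0, u=0.5, x0=2.0, xhat0=0.0,
                 k=2.0, eps=0.1, dt=0.01, steps=2000):
    """Sliding mode observer for the plant x' = -a*x + u, with the
    discontinuous sign() switching term replaced by a smooth tanh()
    to suppress chattering in the estimate xhat."""
    x, xhat = x0, xhat0
    for _ in range(steps):
        x += dt * (-a * x + u)                                  # true state
        xhat += dt * (-a * xhat + u + k * math.tanh((x - xhat) / eps))
    return x, xhat
```

The observer error obeys e' = -a*e - k*tanh(e/eps), so the estimate converges smoothly to the true state without the high-frequency chattering a hard sign() term would produce.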
NASA Astrophysics Data System (ADS)
Pesquera, L.; Blanco, R.
1987-04-01
The anharmonic oscillator driven by Gaussian noise is studied in the limit of weak damping using the direct perturbation (DPM) and Markov approximation (MAM) methods. Mean values are obtained to first order in the anharmonic coupling constant g. From a careful treatment of the high-frequency behavior it is concluded that to first order in g the DPM takes high-frequency contributions into account whereas the MAM does not, while both agree if high-frequency contributions are not important. It is also shown that both methods give the same results to second order in g for the quartic anharmonic oscillator. The spectral density of the noise used in stochastic electrodynamics is considered as a particular example.
NASA Astrophysics Data System (ADS)
Zarei, Moslem
2016-06-01
In conventional model-independent approaches, the power spectrum of primordial perturbations is characterized by such free parameters as the spectral index, its running, the running of running, and the tensor-to-scalar ratio. In this work we show that, at least for simple inflationary potentials, one can find the primordial scalar and tensor power spectra exactly by resumming over all the running terms. In this model-dependent method, we expand the power spectra about the pivot scale to find the series terms as functions of the e-folding number for some single field models of inflation. Interestingly, for the viable models studied here, one can sum over all the terms and evaluate the exact form of the power spectra. This in turn gives more accurate parametrization of the specific models studied in this work. We finally compare our results with recent cosmic microwave background data to find that our new power spectra are in good agreement with the data.
Hamdi, Naser; Oweis, Rami; Abu Zraiq, Hamzeh; Abu Sammour, Denis
2012-04-01
The effective maintenance management of medical technology influences the quality of care delivered and the profitability of healthcare facilities. Medical equipment maintenance in Jordan lacks an objective prioritization system; consequently, the system is not sensitive to the impact of equipment downtime on patient morbidity and mortality. The current work presents a novel software system (EQUIMEDCOMP) that is designed to achieve valuable improvements in the maintenance management of medical technology. This work-order prioritization model sorts medical maintenance requests by calculating a priority index for each request. Model performance was assessed by utilizing maintenance requests from several Jordanian hospitals. The system proved highly efficient in minimizing equipment downtime based on healthcare delivery capacity, and, consequently, patient outcome. Additionally, a preventive maintenance optimization module and an equipment quality control system are incorporated. The system is, therefore, expected to improve the reliability of medical equipment and significantly improve safety and cost-efficiency. PMID:20703695
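A work-order prioritization of this kind can be sketched as a weighted-sum index (the factor names and weights below are illustrative assumptions, not EQUIMEDCOMP's actual scoring scheme):

```python
def priority_index(request, weights=None):
    """Weighted-sum priority index for a maintenance work order;
    higher values mean the request should be served sooner."""
    weights = weights or {"criticality": 0.5, "utilization": 0.3,
                          "downtime_days": 0.2}
    return sum(w * request.get(f, 0.0) for f, w in weights.items())

def sort_requests(requests):
    """Serve the highest-priority work orders first."""
    return sorted(requests, key=priority_index, reverse=True)
```

A high-criticality device (e.g. a ventilator) then outranks low-impact equipment regardless of request order, which is what makes the queue sensitive to patient outcome.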
NASA Astrophysics Data System (ADS)
Dong, Ming; He, David
2007-07-01
Diagnostics and prognostics are two important aspects of a condition-based maintenance (CBM) program. However, these two tasks are often performed separately. For example, data might be collected and analysed separately for diagnosis and prognosis. This practice increases the cost and reduces the efficiency of CBM and may affect the accuracy of the diagnostic and prognostic results. In this paper, a statistical modelling methodology for performing both diagnosis and prognosis in a unified framework is presented. The methodology is developed based on segmental hidden semi-Markov models (HSMMs). An HSMM is a hidden Markov model (HMM) with temporal structures. Unlike an HMM, an HSMM does not follow the unrealistic Markov chain assumption and therefore provides more powerful modelling and analysis capability for real problems. In addition, an HSMM allows modelling the time duration of the hidden states and therefore is capable of prognosis. To facilitate the computation in the proposed HSMM-based diagnostics and prognostics, new forward-backward variables are defined and a modified forward-backward algorithm is developed. The existing state duration estimation methods are inefficient because they require large amounts of storage and computation. Therefore, a new approach is proposed for training HSMMs in which state duration probabilities are estimated on the lattice (or trellis) of observations and states. The model parameters are estimated through the modified forward-backward training algorithm. The estimated state duration probability distributions combined with state-changing point detection can be used to predict the remaining useful life of a system. The evaluation of the proposed methodology was carried out through a real world application: health monitoring of hydraulic pumps. In the tests, the recognition rates for all states are greater than 96%. For each individual pump, the recognition rate is increased by 29.3% in comparison with HMMs. Because of the temporal…
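The prognostic use of state durations can be sketched in a heavily simplified form (the paper estimates full duration distributions via a modified forward-backward algorithm; here we assume only per-state mean durations along a known degradation path):

```python
def remaining_useful_life(mean_durations, state, elapsed):
    """Remaining useful life under an explicit-duration model sketch:
    expected time left in the current health state plus the mean
    durations of all later (more degraded) states.

    mean_durations -- mean sojourn time of each state, in order from
                      healthy to failed; state -- current state index;
                      elapsed -- time already spent in that state."""
    left_in_state = max(mean_durations[state] - elapsed, 0.0)
    return left_in_state + sum(mean_durations[state + 1:])
```

This is what makes an HSMM capable of prognosis at all: unlike a plain HMM, it carries an explicit notion of how long each hidden state lasts.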
Some Interesting Characteristics of Markov Chain Transition Matrices.
ERIC Educational Resources Information Center
Egelston, Richard L.
A Monte Carlo investigation of Markov chain matrices was conducted to create empirical distributions for two statistics computed from the transition matrices. Curve fitting techniques developed by Karl Pearson were used to determine whether theoretical equations could be fitted to the two sets of distributions. The set of distributions which describe the…
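A Monte Carlo study of this kind can be sketched as follows (the statistic below, the largest stationary probability, is an illustrative choice; the abstract does not specify which two statistics were studied):

```python
import random

def random_transition_matrix(n, rng):
    """Row-stochastic matrix: each row is drawn uniformly on the
    probability simplex (normalized exponentials are Dirichlet(1,..,1))."""
    rows = []
    for _ in range(n):
        row = [rng.expovariate(1.0) for _ in range(n)]
        s = sum(row)
        rows.append([x / s for x in row])
    return rows

def stationary(m, iters=500):
    """Stationary distribution by repeatedly applying the chain."""
    n = len(m)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * m[i][j] for i in range(n)) for j in range(n)]
    return pi

def empirical_statistic(n_chains=500, size=3, seed=0):
    """Empirical distribution of one matrix statistic over many
    randomly generated chains."""
    rng = random.Random(seed)
    return [max(stationary(random_transition_matrix(size, rng)))
            for _ in range(n_chains)]
```

The resulting sample of statistics is the empirical distribution to which Pearson-style curves would then be fitted.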
Weighted Markov Chains and Graphic State Nodes for Information Retrieval.
ERIC Educational Resources Information Center
Benoit, G.
2002-01-01
Discusses users' search behavior and decision making in data mining and information retrieval. Describes iterative information seeking as a Markov process during which users advance through state nodes; and explains how the information system records the decisions as weights, allowing the incorporation of users' decisions into the Markov…
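Recording user decisions as transition weights can be sketched as follows (the state names are illustrative, not the article's node vocabulary):

```python
from collections import defaultdict

def record_session(weights, path):
    """Increment transition weights along one user's path through
    search-state nodes (e.g. query -> results -> refine -> document)."""
    for a, b in zip(path, path[1:]):
        weights[a][b] += 1

def next_state_probs(weights, state):
    """Normalize accumulated weights out of `state` into transition
    probabilities for the weighted Markov chain."""
    out = weights[state]
    total = sum(out.values())
    return {s: w / total for s, w in out.items()}
```

Each recorded session nudges the chain's probabilities, which is how users' past decisions get folded back into the retrieval model.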
Inferring parental genomic ancestries using pooled semi-Markov processes
Zou, James Y.; Halperin, Eran; Burchard, Esteban; Sankararaman, Sriram
2015-01-01
Motivation: A basic problem of broad public and scientific interest is to use the DNA of an individual to infer the genomic ancestries of the parents. In particular, we are often interested in the fraction of each parent’s genome that comes from specific ancestries (e.g. European, African, Native American, etc). This has many applications ranging from understanding the inheritance of ancestry-related risks and traits to quantifying human assortative mating patterns. Results: We model the problem of parental genomic ancestry inference as a pooled semi-Markov process. We develop a general mathematical framework for pooled semi-Markov processes and construct efficient inference algorithms for these models. Applying our inference algorithm to genotype data from 231 Mexican trios and 258 Puerto Rican trios where we have the true genomic ancestry of each parent, we demonstrate that our method accurately infers parameters of the semi-Markov processes and parents’ genomic ancestries. We additionally validated the method on simulations. Our model of pooled semi-Markov process and inference algorithms may be of independent interest in other settings in genomics and machine learning. Contact: jazo@microsoft.com PMID:26072482
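The semi-Markov structure of ancestry segments can be sketched by sampling one parent's chromosome (the ancestry labels, exponential segment lengths, and all parameter values below are illustrative assumptions, not the paper's fitted model):

```python
import random

def sample_ancestry_segments(trans, mean_len, chrom_len, seed=0):
    """Sample ancestry tracts along a chromosome as a semi-Markov
    process: the sequence of ancestry states follows a Markov chain,
    while each segment's length has its own state-specific
    (here exponential) distribution."""
    rng = random.Random(seed)
    state = rng.choice(list(trans))
    pos, segments = 0.0, []
    while pos < chrom_len:
        length = min(rng.expovariate(1.0 / mean_len[state]),
                     chrom_len - pos)
        segments.append((state, length))
        pos += length
        # draw the next ancestry state from the embedded Markov chain
        r, cum = rng.random(), 0.0
        for nxt, p in trans[state].items():
            cum += p
            if r < cum:
                state = nxt
                break
    return segments
```

Summing segment lengths per ancestry gives the parent's genomic ancestry fractions, the quantity the paper's pooled inference recovers from the child's genotypes.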
Markov chain for estimating human mitochondrial DNA mutation pattern
NASA Astrophysics Data System (ADS)
Vantika, Sandy; Pasaribu, Udjianna S.
2015-12-01
A Markov chain was proposed to estimate the human mitochondrial DNA mutation pattern. One DNA sequence was taken randomly from 100 sequences in GenBank. The nucleotide transition matrix and the mutation transition matrix were estimated from this sequence. We then determined whether the states (mutation/normal) are recurrent or transient. The results showed that both are recurrent.
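Estimating a nucleotide transition matrix from a single sequence can be sketched as a maximum-likelihood count of adjacent base pairs:

```python
from collections import Counter

def transition_matrix(seq):
    """ML estimate of the nucleotide transition matrix from one
    sequence: P[a][b] = count(a followed by b) / count(a -> any)."""
    pairs = Counter(zip(seq, seq[1:]))
    bases = sorted(set(seq))
    mat = {}
    for a in bases:
        total = sum(pairs[(a, b)] for b in bases)
        mat[a] = {b: (pairs[(a, b)] / total if total else 0.0)
                  for b in bases}
    return mat
```

Recurrence of the states can then be checked from the estimated matrix, e.g. by verifying that every state communicates with every other in the finite chain.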
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
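The MCMC-with-People technique can be sketched as follows: the participant's two-alternative choice between the current and a proposed stimulus plays the role of the Metropolis acceptance step, so the visited stimuli form a Markov chain over perceptual space (the one-dimensional stimulus and the simulated participant below are illustrative assumptions):

```python
import random

def mcmc_with_people(choose, propose, x0, steps, rng):
    """MCMC with People: `choose(x, y, rng)` is the participant's pick
    between the current stimulus x and the proposal y; the sequence of
    picks is the Markov chain sample."""
    x, chain = x0, []
    for _ in range(steps):
        y = propose(x, rng)
        x = choose(x, y, rng)   # participant's 2AFC decision
        chain.append(x)
    return chain
```

If the participant chooses according to the Barker/Luce rule p(y) / (p(x) + p(y)), the chain's stationary distribution is the participant's subjective probability p, so the samples trace out their internal representation (e.g. of plausible mass ratios).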