Exact significance test for Markov order
NASA Astrophysics Data System (ADS)
Pethel, S. D.; Hahs, D. W.
2014-02-01
We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.
Test to determine the Markov order of a time series.
Racca, E; Laio, F; Poggi, D; Ridolfi, L
2007-01-01
The Markov order of a time series is an important measure of the "memory" of a process, and its knowledge is fundamental for the correct simulation of the characteristics of the process. For this reason, several techniques have been proposed in the past for its estimation. However, most of this methods are rather complex, and often can be applied only in the case of Markov chains. Here we propose a simple and robust test to evaluate the Markov order of a time series. Only the first-order moment of the conditional probability density function characterizing the process is used to evaluate the memory of the process itself. This measure is called the "expected value Markov (EVM) order." We show that there is good agreement between the EVM order and the known Markov order of some synthetic time series.
Building Higher-Order Markov Chain Models with EXCEL
ERIC Educational Resources Information Center
Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.
2004-01-01
Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
Markov chain order estimation with conditional mutual information
NASA Astrophysics Data System (ADS)
Papapetrou, M.; Kugiumtzis, D.
2013-04-01
We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of K symbols, we define CMI of order m, Ic(m), as the mutual information of two variables in the chain being m time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of Ic(m), where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing m and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We present the appropriateness of CMI-testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator) and a likelihood ratio test for increasing orders using ϕ-divergence. The order criterion of CMI-testing turns out to be superior for orders larger than one, but its effectiveness for large orders depends on data availability. In view of the results from the simulations, we interpret the estimated orders by the CMI-testing and the other criteria on genes and intergenic regions of DNA chains.
Decomposition of conditional probability for high-order symbolic Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
State space orderings for Gauss-Seidel in Markov chains revisited
Dayar, T.
1996-12-31
Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). In doing all this, the aim is to gain an insight for faster converging orderings. Results from a variety of applications improve the confidence in the algorithm.
Kinetics and thermodynamics of first-order Markov chain copolymerization
NASA Astrophysics Data System (ADS)
Gaspard, P.; Andrieux, D.
2014-07-01
We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.
First and second order semi-Markov chains for wind speed modeling
NASA Astrophysics Data System (ADS)
Prattico, F.; Petroni, F.; D'Amico, G.
2012-04-01
Markov chain with different number of states, and Weibull distribution. All this model use Markov chains to generate synthetic wind speed time series but the search for a better model is still open. Approaching this issue, we applied new models which are generalization of Markov models. More precisely we applied semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMP) are a wide class of stochastic processes which generalize at the same time both Markov chains and renewal processes. Their main advantage is that of using whatever type of waiting time distribution for modeling the time to have a transition from one state to another one. This major flexibility has a price to pay: availability of data to estimate the parameters of the model which are more numerous. Data availability is not an issue in wind speed studies, therefore, semi-Markov models can be used in a statistical efficient way. In this work we present three different semi-Markov chain models: the first one is a first-order SMP where the transition probabilities from two speed states (at time Tn and Tn-1) depend on the initial state (the state at Tn-1), final state (the state at Tn) and on the waiting time (given by t=Tn-Tn-1), the second model is a second order SMP where we consider the transition probabilities as depending also on the state the wind speed was before the initial state (which is the state at Tn-2) and the last one is still a second order SMP where the transition probabilities depends on the three states at Tn-2,Tn-1 and Tn and on the waiting times t_1=Tn-1-Tn-2 and t_2=Tn-Tn-1. The three models are used to generate synthetic time series for wind speed by means of Monte Carlo simulations and the time lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated though a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling
Post processing with first- and second-order hidden Markov models
NASA Astrophysics Data System (ADS)
Taghva, Kazem; Poudel, Srijana; Malreddy, Spandana
2013-01-01
In this paper, we present the implementation and evaluation of first order and second order Hidden Markov Models to identify and correct OCR errors in the post processing of books. Our experiments show that the first order model approximately corrects 10% of the errors with 100% precision, while the second order model corrects a higher percentage of errors with much lower precision.
Post processing of optically recognized text via second order hidden Markov model
NASA Astrophysics Data System (ADS)
Poudel, Srijana
In this thesis, we describe a postprocessing system on Optical Character Recognition(OCR) generated text. Second Order Hidden Markov Model (HMM) approach is used to detect and correct the OCR related errors. The reason for choosing the 2nd order HMM is to keep track of the bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and testing of 5,688 characters, the model was able to correct 43.38 % of the errors with a precision of 75.34 %. However, the precision value indicates that the model introduced some new errors, decreasing the correction percentage to 26.4%.
Deciding when to intervene: a Markov decision process approach.
Magni, P; Quaglini, S; Marchetti, M; Barosi, G
2000-12-01
The aim of this paper is to point out the difference between static and dynamic approaches to choosing the optimal time for intervention. The paper demonstrates that classical approaches, such as decision trees and influence diagrams, hardly cope with dynamic problems: they cannot simulate all the real-world strategies and consequently can only calculate suboptimal solutions. A dynamic formalism based on Markov decision processes (MPPs) is then proposed and applied to a medical problem: the prophylactic surgery in mild hereditary spherocytosis. The paper compares the proposed approach with a static approach on the same medical problem. The policy provided by the dynamic approach achieved significant gain over the static policy by delaying the intervention time in some categories of patients. The calculations are carried out with DT-Planner, a graphical decision aid specifically built for dealing with dynamic decision processes.
Heisenberg picture approach to the stability of quantum Markov systems
NASA Astrophysics Data System (ADS)
Pan, Yu; Amini, Hadis; Miao, Zibo; Gough, John; Ugrinovskii, Valery; James, Matthew R.
2014-06-01
Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
Heisenberg picture approach to the stability of quantum Markov systems
Pan, Yu E-mail: zibo.miao@anu.edu.au; Miao, Zibo E-mail: zibo.miao@anu.edu.au; Amini, Hadis; Gough, John; Ugrinovskii, Valery; James, Matthew R.
2014-06-15
Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-01-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. Images FIGURE 3 PMID:8913581
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work. PMID:25013937
Jump Markov models and transition state theory: the quasi-stationary distribution approach.
Di Gesù, Giacomo; Lelièvre, Tony; Le Peutrec, Dorian; Nectoux, Boris
2016-12-22
We are interested in the connection between a metastable continuous state space Markov process (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process in a discrete state space. More precisely, we use the notion of quasi-stationary distribution within a metastable state for the continuous state space Markov process to parametrize the exit event from the state. This approach is useful to analyze and justify methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques). Moreover, it is possible by this approach to quantify the error on the exit event when the parametrization of the jump Markov model is based on the Eyring-Kramers formula. This therefore provides a mathematical framework to justify the use of transition state theory and the Eyring-Kramers formula to build kinetic Monte Carlo or Markov state models.
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.
2010-07-01
Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the time, volume and number of transaction size distributions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.
Modeling anomalous radar propagation using first-order two-state Markov chains
NASA Astrophysics Data System (ADS)
Haddad, B.; Adane, A.; Mesnard, F.; Sauvageot, H.
In this paper, it is shown that radar echoes due to anomalous propagations (AP) can be modeled using Markov chains. For this purpose, images obtained in southwestern France by means of an S-band meteorological radar recorded every 5 min in 1996 were considered. The daily mean surfaces of AP appearing in these images are sorted into two states and their variations are then represented by a binary random variable. The Markov transition matrix, the 1-day-lag autocorrelation coefficient as well as the long-term probability of having each of both states are calculated on a monthly basis. The same kind of modeling was also applied to the rainfall observed in the radar dataset under study. The first-order two-state Markov chains are then found to fit the daily variations of either AP or rainfall areas very well. For each month of the year, the surfaces filled by both types of echo follow similar stochastic distributions, but their autocorrelation coefficient is different. Hence, it is suggested that this coefficient is a discriminant factor which could be used, among other criteria, to improve the identification of AP in radar images.
A Bayesian Hidden Markov Model-based approach for anomaly detection in electronic systems
NASA Astrophysics Data System (ADS)
Dorj, E.; Chen, C.; Pecht, M.
Early detection of anomalies in any system or component prevents impending failures and enhances performance and availability. The complex architecture of electronics, the interdependency of component functionalities, and the miniaturization of most electronic systems make it difficult to detect and analyze anomalous behaviors. A Hidden Markov Model-based classification technique determines unobservable hidden behaviors of complex and remotely inaccessible electronic systems using observable signals. This paper presents a data-driven approach for anomaly detection in electronic systems based on a Bayesian Hidden Markov Model classification technique. The posterior parameters of the Hidden Markov Models are estimated using the conjugate prior method. An application of the developed Bayesian Hidden Markov Model-based anomaly detection approach is presented for detecting anomalous behavior in Insulated Gate Bipolar Transistors using experimental data. The detection results illustrate that the developed anomaly detection approach can help detect anomalous behaviors in electronic systems, which can help prevent system downtime and catastrophic failures.
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
NASA Technical Reports Server (NTRS)
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
The College Completion Puzzle: A Hidden Markov Model Approach
ERIC Educational Resources Information Center
Witteveen, Dirk; Attewell, Paul
2017-01-01
Higher education in America is characterized by widespread access to college but low rates of completion, especially among undergraduates at less selective institutions. We analyze longitudinal transcript data to examine processes leading to graduation, using Hidden Markov modeling. We identify several latent states that are associated with…
2014-09-20
Eric S. Kim , Samuel Coogan, S. Shankar Sastry , Sanjit A. Seshia Abstract— We propose to synthesize a control policy for a Markov decision process (MDP...A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications Dorsa Sadigh Eric Kim Samuel...Coogan S. Shankar Sastry Sanjit A. Seshia Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS
A Hidden Markov Approach to Modeling Interevent Earthquake Times
NASA Astrophysics Data System (ADS)
Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.
2003-12-01
A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
A second-order Markov process for modeling diffusive motion through spatial discretization.
Sant, Marco; Papadopoulos, George K; Theodorou, Doros N
2008-01-14
A new "mesoscopic" stochastic model has been developed to describe the diffusive behavior of a system of particles at equilibrium. The model is based on discretizing space into slabs by drawing equispaced parallel planes along a coordinate direction. A central role is played by the probability that a particle exits a slab via the face opposite to the one through which it entered (transmission probability), as opposed to exiting via the same face through which it entered (reflection probability). A simple second-order Markov process invoking this probability is developed, leading to an expression for the self-diffusivity, applicable for large slab widths, consistent with a continuous formulation of diffusional motion. This model is validated via molecular dynamics simulations in a bulk system of soft spheres across a wide range of densities.
A Fuzzy Markov approach for assessing groundwater pollution potential for landfill siting.
Chen, Wei-Yea; Kao, Jehng-Jung
2002-04-01
This study presents a Fuzzy Markov groundwater pollution potential assessment approach to facilitate landfill siting analysis. Landfill siting is constrained by various regulations and is complicated by the uncertainty of groundwater related factors. The conventional static rating method cannot properly depict the potential impact of pollution on a groundwater table because the groundwater table level fluctuates. A Markov chain model is a dynamic model that can be viewed as a hybrid of probability and matrix models. The probability matrix of the Markov chain model is determined based on the groundwater table elevation time series. The probability reflects the likelihood of the groundwater table changing between levels. A fuzzy set method is applied to estimate the degree of pollution potential, and a case study demonstrates the applicability of the proposed approach. The short- and long-term pollution potential information provided by the proposed approach is expected to enhance landfill siting decisions.
Mikhailov, I.D.; Zhuravskii, L.V.
1987-11-01
A method is proposed for calculating the vibrational-state density averaged over all configurations for a polymer chain with Markov disorder. The method is based on using a group of centrally symmetric gauge transformations that reduce the dynamic matrix for along polymer chain to renormalized dynamic matrices for short fragments. The short-range order is incorporated exactly in the averaging procedure, while the long-range order is incorporated in the self-consistent field approximation. Results are given for a simple skeletal model for a polymer containing tacticity deviations of Markov type.
Neale, Michael C.; Clark, Shaunna L.; Dolan, Conor V.; Hunter, Michael D.
2015-01-01
A linear latent growth curve mixture model with regime switching is extended in 2 ways. Previously, the matrix of first-order Markov switching probabilities was specified to be time-invariant, regardless of the pair of occasions being considered. The first extension, time-varying transitions, specifies different Markov transition matrices between each pair of occasions. The second extension is second-order time-invariant Markov transition probabilities, such that the probability of switching depends on the states at the 2 previous occasions. The models are implemented using the R package OpenMx, which facilitates data handling, parallel computation, and further model development. It also enables the extraction and display of relative likelihoods for every individual in the sample. The models are illustrated with previously published data on alcohol use observed on 4 occasions as part of the National Longitudinal Survey of Youth, and demonstrate improved fit to the data. PMID:26924921
Compound extremes in a changing climate - a Markov chain approach
NASA Astrophysics Data System (ADS)
Sedlmeier, Katrin; Mieruch, Sebastian; Schädler, Gerd; Kottmeier, Christoph
2016-11-01
Studies using climate models and observed trends indicate that extreme weather has changed and may continue to change in the future. The potential impact of extreme events such as heat waves or droughts depends not only on their number of occurrences but also on "how these extremes occur", i.e., the interplay and succession of the events. These quantities are quite unexplored, for past changes as well as for future changes and call for sophisticated methods of analysis. To address this issue, we use Markov chains for the analysis of the dynamics and succession of multivariate or compound extreme events. We apply the method to observational data (1951-2010) and an ensemble of regional climate simulations for central Europe (1971-2000, 2021-2050) for two types of compound extremes, heavy precipitation and cold in winter and hot and dry days in summer. We identify three regions in Europe, which turned out to be likely susceptible to a future change in the succession of heavy precipitation and cold in winter, including a region in southwestern France, northern Germany and in Russia around Moscow. A change in the succession of hot and dry days in summer can be expected for regions in Spain and Bulgaria. The susceptibility to a dynamic change of hot and dry extremes in the Russian region will probably decrease.
Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model
NASA Astrophysics Data System (ADS)
Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.
2009-04-01
The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on Hidden Markov Model (HMM) and Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model, estimating the parameters and the best-state sequence, respectively, with the Baum-Welch and Viterbi algorithms, was applied. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover the function to be minimized was derived following the approach of Kehagias (2004), i.e. it is the so-called complete log-likelihood. The number of states was determined applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Being aware that the last issue is complex, and it influences all the analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was inserted. It tests the model with K+1 states (where K is the state number of the best model) if its likelihood is close to K-state model. Finally, an evaluation of the GAMM performances, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.Â The other method used the certainty...
A Markov Chain Approach to Probabilistic Swarm Guidance
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Bayard, David S.
2012-01-01
This paper introduces a probabilistic guidance approach for the coordination of swarms of autonomous agents. The main idea is to drive the swarm to a prescribed density distribution in a prescribed region of the configuration space. In its simplest form, the probabilistic approach is completely decentralized and does not require communication or collabo- ration between agents. Agents make statistically independent probabilistic decisions based solely on their own state, that ultimately guides the swarm to the desired density distribution in the configuration space. In addition to being completely decentralized, the probabilistic guidance approach has a novel autonomous self-repair property: Once the desired swarm density distribution is attained, the agents automatically repair any damage to the distribution without collaborating and without any knowledge about the damage.
Markov-chain approach to the distribution of ancestors in species of biparental reproduction
NASA Astrophysics Data System (ADS)
Caruso, M.; Jarne, C.
2014-08-01
We studied how to obtain a distribution for the number of ancestors in species of sexual reproduction. Present models concentrate on the estimation of distributions repetitions of ancestors in genealogical trees. It has been shown that it is not possible to reconstruct the genealogical history of each species along all its generations by means of a geometric progression. This analysis demonstrates that it is possible to rebuild the tree of progenitors by modeling the problem with a Markov chain. For each generation, the maximum number of possible ancestors is different. This presents huge problems for the resolution. We found a solution through a dilation of the sample space, although the distribution defined there takes smaller values with respect to the initial problem. In order to correct the distribution for each generation, we introduced the invariance under a gauge (local) group of dilations. These ideas can be used to study the interaction of several processes and provide a new approach on the problem of the common ancestor. In the same direction, this model also provides some elements that can be used to improve models of animal reproduction.
Yang, Wei-Feng; Yu, Zu-Guo; Anh, Vo
2016-03-01
Traditional methods for sequence comparison and phylogeny reconstruction rely on pair wise and multiple sequence alignments. But alignment could not be directly applied to whole genome/proteome comparison and phylogenomic studies due to their high computational complexity. Hence alignment-free methods became popular in recent years. Here we propose a fast alignment-free method for whole genome/proteome comparison and phylogeny reconstruction using higher order Markov model and chaos game representation. In the present method, we use the transition matrices of higher order Markov models to characterize amino acid or DNA sequences for their comparison. The order of the Markov model is uniquely identified by maximizing the average Shannon entropy of conditional probability distributions. Using one-dimensional chaos game representation and linked list, this method can reduce large memory and time consumption which is due to the large-scale conditional probability distributions. To illustrate the effectiveness of our method, we employ it for fast phylogeny reconstruction based on genome/proteome sequences of two species data sets used in previous published papers. Our results demonstrate that the present method is useful and efficient. The source codes for our algorithm to get the distance matrix and genome/proteome sequences can be downloaded from ftp://121.199.20.25/. The software Phylip and EvolView we used to construct phylogenetic trees can be referred from their websites. Copyright © 2015 Elsevier Inc. All rights reserved.
A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains
NASA Astrophysics Data System (ADS)
Gan, Tingyue; Cameron, Maria
2017-06-01
Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.
A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains
NASA Astrophysics Data System (ADS)
Gan, Tingyue; Cameron, Maria
2017-01-01
Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.
A Statistical Multiresolution Approach for Face Recognition Using Structural Hidden Markov Models
NASA Astrophysics Data System (ADS)
Nicholl, P.; Amira, A.; Bouchaffra, D.; Perrott, R. H.
2007-12-01
This paper introduces a novel methodology that combines the multiresolution feature of the discrete wavelet transform (DWT) with the local interactions of the facial structures expressed through the structural hidden Markov model (SHMM). A range of wavelet filters such as Haar, biorthogonal 9/7, and Coiflet, as well as Gabor, have been implemented in order to search for the best performance. SHMMs perform a thorough probabilistic analysis of any sequential pattern by revealing both its inner and outer structures simultaneously. Unlike traditional HMMs, the SHMMs do not perform the state conditional independence of the visible observation sequence assumption. This is achieved via the concept of local structures introduced by the SHMMs. Therefore, the long-range dependency problem inherent to traditional HMMs has been drastically reduced. SHMMs have not previously been applied to the problem of face identification. The results reported in this application have shown that SHMM outperforms the traditional hidden Markov model with a 73% increase in accuracy.
2007-02-01
true alarms from false positives . At the host-level, a new anomaly detection mechanism operating that employs non-stationary Markov models is proposed....mitigate false positives, network based correlation of collected anomalies from different hosts is suggested, as well as a new means of host-based anomaly ... detection . The concept of anomaly propagation is based on the premise that false alarms do not propagate within the network. Unless anomaly
Effective degree Markov-chain approach for discrete-time epidemic processes on uncorrelated networks
NASA Astrophysics Data System (ADS)
Cai, Chao-Ran; Wu, Zhi-Xi; Guan, Jian-Yue
2014-11-01
Recently, Gómez et al. proposed a microscopic Markov-chain approach (MMCA) [S. Gómez, J. Gómez-Gardeñes, Y. Moreno, and A. Arenas, Phys. Rev. E 84, 036105 (2011), 10.1103/PhysRevE.84.036105] to the discrete-time susceptible-infected-susceptible (SIS) epidemic process and found that the epidemic prevalence obtained by this approach agrees well with that by simulations. However, we found that the approach cannot be straightforwardly extended to a susceptible-infected-recovered (SIR) epidemic process (due to its irreversible property), and the epidemic prevalences obtained by MMCA and Monte Carlo simulations do not match well when the infection probability is just slightly above the epidemic threshold. In this contribution we extend the effective degree Markov-chain approach, proposed for analyzing continuous-time epidemic processes [J. Lindquist, J. Ma, P. Driessche, and F. Willeboordse, J. Math. Biol. 62, 143 (2011), 10.1007/s00285-010-0331-2], to address discrete-time binary-state (SIS) or three-state (SIR) epidemic processes on uncorrelated complex networks. It is shown that the final epidemic size as well as the time series of infected individuals obtained from this approach agree very well with those by Monte Carlo simulations. Our results are robust to the change of different parameters, including the total population size, the infection probability, the recovery probability, the average degree, and the degree distribution of the underlying networks.
Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng
2012-01-01
This paper presents a signal processing technique to improve angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscope. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. Especially, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and affecting factors were analyzed using a steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the estimated rate signal by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. It revealed that both models could improve the angular rate accuracy and have a similar performance in static condition. In dynamic condition, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model.
A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings
Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun
2017-01-01
The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088
Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach
Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong
2015-01-01
Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons including inpatient journey reorganization in a more convenient way for understanding and browsing, etc. In this study, we present a probabilistic model-based approach to model and cluster medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories w.r.t their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model latent treatment topics underlying in personalized inpatient journeys, and yield impressive clustering quality. PMID:26958200
Robust Filtering for Nonlinear Nonhomogeneous Markov Jump Systems by Fuzzy Approximation Approach.
Yin, Yanyan; Shi, Peng; Liu, Fei; Teo, Kok Lay; Lim, Cheng-Chew
2015-09-01
This paper addresses the problem of robust fuzzy L2-L∞ filtering for a class of uncertain nonlinear discrete-time Markov jump systems (MJSs) with nonhomogeneous jump processes. The Takagi-Sugeno fuzzy model is employed to represent such nonlinear nonhomogeneous MJS with norm-bounded parameter uncertainties. In order to decrease conservation, a polytope Lyapunov function which evolves as a convex function is employed, and then, under the designed mode-dependent and variation-dependent fuzzy filter which includes the membership functions, a sufficient condition is presented to ensure that the filtering error dynamic system is stochastically stable and that it has a prescribed L2-L∞ performance index. Two simulated examples are given to demonstrate the effectiveness and advantages of the proposed techniques.
On the reliability of NMR relaxation data analyses: a Markov Chain Monte Carlo approach.
Abergel, Daniel; Volpato, Andrea; Coutant, Eloi P; Polimeno, Antonino
2014-09-01
The analysis of NMR relaxation data is revisited along the lines of a Bayesian approach. Using a Markov Chain Monte Carlo strategy of data fitting, we investigate conditions under which relaxation data can be effectively interpreted in terms of internal dynamics. The limitations to the extraction of kinetic parameters that characterize internal dynamics are analyzed, and we show that extracting characteristic time scales shorter than a few tens of ps is very unlikely. However, using MCMC methods, reliable estimates of the marginal probability distributions and estimators (average, standard deviations, etc.) can still be obtained for subsets of the model parameters. Thus, unlike more conventional strategies of data analysis, the method avoids a model selection process. In addition, it indicates what information may be extracted from the data, but also what cannot.
A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning
NASA Astrophysics Data System (ADS)
Roth, John; Tummala, Murali; McEachen, John
2016-09-01
This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the channel conditions that present, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to be able to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.
Segmentation of angiodysplasia lesions in WCE images using a MAP approach with Markov Random Fields.
Vieira, Pedro M; Goncalves, Bruno; Goncalves, Carla R; Lima, Carlos S
2016-08-01
This paper deals with the segmentation of angiodysplasias in wireless capsule endoscopy images. These lesions are the cause of almost 10% of all gastrointestinal bleeding episodes, and its detection using the available software presents low sensitivity. This work proposes an automatic selection of a ROI using an image segmentation module based on the MAP approach where an accelerated version of the EM algorithm is used to iteratively estimate the model parameters. Spatial context is modeled in the prior probability density function using Markov Random Fields. The color space used was CIELab, specially the a component, which highlighted most these type of lesions. The proposed method is the first regarding this specific type of lesions, but when compared to other state-of-the-art segmentation methods, it almost doubles the results.
Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng
2012-01-01
This paper presents a signal processing technique to improve angular rate accuracy of the gyroscope by combining the outputs of an array of MEMS gyroscope. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. Especially, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and affecting factors were analyzed using a steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the estimated rate signal by the random walk model has an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. It revealed that both models could improve the angular rate accuracy and have a similar performance in static condition. In dynamic condition, the test results showed that the first-order Markov process model could reduce the dynamic errors 20% more than the random walk model. PMID:22438734
ERIC Educational Resources Information Center
Stifter, Cynthia A.; Rovine, Michael
2015-01-01
The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6?months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…
ERIC Educational Resources Information Center
Stifter, Cynthia A.; Rovine, Michael
2015-01-01
The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6?months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…
Seifert, Michael; Gohr, André; Strickert, Marc; Grosse, Ivo
2012-01-01
Array-based comparative genomic hybridization (Array-CGH) is an important technology in molecular biology for the detection of DNA copy number polymorphisms between closely related genomes. Hidden Markov Models (HMMs) are popular tools for the analysis of Array-CGH data, but current methods are only based on first-order HMMs having constrained abilities to model spatial dependencies between measurements of closely adjacent chromosomal regions. Here, we develop parsimonious higher-order HMMs enabling the interpolation between a mixture model ignoring spatial dependencies and a higher-order HMM exhaustively modeling spatial dependencies. We apply parsimonious higher-order HMMs to the analysis of Array-CGH data of the accessions C24 and Col-0 of the model plant Arabidopsis thaliana. We compare these models against first-order HMMs and other existing methods using a reference of known deletions and sequence deviations. We find that parsimonious higher-order HMMs clearly improve the identification of these polymorphisms. Moreover, we perform a functional analysis of identified polymorphisms revealing novel details of genomic differences between C24 and Col-0. Additional model evaluations are done on widely considered Array-CGH data of human cell lines indicating that parsimonious HMMs are also well-suited for the analysis of non-plant specific data. All these results indicate that parsimonious higher-order HMMs are useful for Array-CGH analyses. An implementation of parsimonious higher-order HMMs is available as part of the open source Java library Jstacs (www.jstacs.de/index.php/PHHMM). PMID:22253580
Photochemical approaches to ordered polymers
NASA Technical Reports Server (NTRS)
Meador, Michael A.; Abdulaziz, Mahmoud; Meador, Mary Ann B.
1990-01-01
The photocyclization of o-benzyloxyphenyl ketone chromophores provides an efficient, high yield route to the synthesis of 2,3-diphenylbenzofurans. The synthesis and solution of photochemistry of a series of polymers containing this chromophore is described. The photocuring of these polymers is a potential new approach to the synthesis of highly conjugated polymers based upon a p-phenylene bisbenzofuran repeat unit.
Nonlinear response theory for Markov processes. II. Fifth-order response functions
NASA Astrophysics Data System (ADS)
Diezemann, Gregor
2017-08-01
The nonlinear response of stochastic models obeying a master equation is calculated up to fifth order in the external field, thus extending the third-order results obtained earlier [G. Diezemann, Phys. Rev. E 85, 051502 (2012), 10.1103/PhysRevE.85.051502]. For sinusoidal fields the 5 ω component of the susceptibility is computed for the model of dipole reorientations in an asymmetric double well potential and for a trap model with a Gaussian density of states. For most realizations of the models a hump is found in the higher-order susceptibilities. In particular, for the asymmetric double well potential model there are two characteristic temperature regimes showing the occurrence of such a hump as compared to a single characteristic regime in the case of the third-order response. In the case of the trap model the results strongly depend on the variable coupled to the field. As for the third-order response, the low-frequency limit of the susceptibility plays a crucial role with respect to the occurrence of a hump. The findings are discussed in light of recent experimental results obtained for supercooled liquids. The differences found for the third-order and the fifth-order response indicate that nonlinear response functions might serve as a powerful tool to discriminate among the large number of existing models for glassy relaxation.
Semi-Markov Arnason-Schwarz models.
King, Ruth; Langrock, Roland
2016-06-01
We consider multi-state capture-recapture-recovery data where observed individuals are recorded in a set of possible discrete states. Traditionally, the Arnason-Schwarz model has been fitted to such data where the state process is modeled as a first-order Markov chain, though second-order models have also been proposed and fitted to data. However, low-order Markov models may not accurately represent the underlying biology. For example, specifying a (time-independent) first-order Markov process involves the assumption that the dwell time in each state (i.e., the duration of a stay in a given state) has a geometric distribution, and hence that the modal dwell time is one. Specifying time-dependent or higher-order processes provides additional flexibility, but at the expense of a potentially significant number of additional model parameters. We extend the Arnason-Schwarz model by specifying a semi-Markov model for the state process, where the dwell-time distribution is specified more generally, using, for example, a shifted Poisson or negative binomial distribution. A state expansion technique is applied in order to represent the resulting semi-Markov Arnason-Schwarz model in terms of a simpler and computationally tractable hidden Markov model. Semi-Markov Arnason-Schwarz models come with only a very modest increase in the number of parameters, yet permit a significantly more flexible state process. Model selection can be performed using standard procedures, and in particular via the use of information criteria. The semi-Markov approach allows for important biological inference to be drawn on the underlying state process, for example, on the times spent in the different states. The feasibility of the approach is demonstrated in a simulation study, before being applied to real data corresponding to house finches where the states correspond to the presence or absence of conjunctivitis. © 2015, The International Biometric Society.
An information theoretic approach for generating an aircraft avoidance Markov Decision Process
NASA Astrophysics Data System (ADS)
Weinert, Andrew J.
Developing a collision avoidance system that can meet safety standards required of commercial aviation is challenging. A dynamic programming approach to collision avoidance has been developed to optimize and generate logics that are robust to the complex dynamics of the national airspace. The current approach represents the aircraft avoidance problem as Markov Decision Processes and independently optimizes a horizontal and vertical maneuver avoidance logics. This is a result of the current memory requirements for each logic, simply combining the logics will result in a significantly larger representation. The "curse of dimensionality" makes it computationally inefficient and unfeasible to optimize this larger representation. However, existing and future collision avoidance systems have mostly defined the decision process by hand. In response, a simulation-based framework was built to better understand how each potential state quantifies the aircraft avoidance problem with regards to safety and operational components. The framework leverages recent advances in signals processing and database, while enabling the highest fidelity analysis of Monte Carlo aircraft encounter simulations to date. This framework enabled the calculation of how well each state of the decision process quantifies the collision risk and the associated memory requirements. Using this analysis, a collision avoidance logic that leverages both horizontal and vertical actions was built and optimized using this simulation based approach.
Sample size estimation for pilot animal experiments by using a Markov Chain Monte Carlo approach.
Allgoewer, Andreas; Mayer, Benjamin
2017-05-01
The statistical determination of sample size is mandatory when planning animal experiments, but it is usually difficult to implement appropriately. The main reason is that prior information is hardly ever available, so the assumptions made cannot be verified reliably. This is especially true for pilot experiments. Statistical simulation might help in these situations. We used a Markov Chain Monte Carlo (MCMC) approach to verify the pragmatic assumptions made on different distribution parameters used for power and sample size calculations in animal experiments. Binomial and normal distributions, which are the most frequent distributions in practice, were simulated for categorical and continuous endpoints, respectively. The simulations showed that the common practice of using five or six animals per group for continuous endpoints is reasonable. Even in the case of small effect sizes, the statistical power would be sufficiently large (≥ 80%). For categorical outcomes, group sizes should never be under eight animals, otherwise a sufficient statistical power cannot be guaranteed. This applies even in the case of large effects. The MCMC approach demonstrated to be a useful method for calculating sample size in animal studies that lack prior data. Of course, the simulation results particularly depend on the assumptions made with regard to the distributional properties and effects to be detected, but the same also holds in situations where prior data are available. MCMC is therefore a promising approach toward the more informed planning of pilot research experiments involving the use of animals. 2017 FRAME.
Use of a Transition Probability/Markov Approach to Improve Geostatistical of Facies Architecture
Carle, S.F.
2000-11-01
Facies may account for the largest permeability contrasts within the reservoir model at the scale relevant to production. Conditional simulation of the spatial distribution of facies is one of the most important components of building a reservoir model. Geostatistical techniques are widely used to produce realistic and geologically plausible realizations of facies architecture. However, there are two stumbling blocks to the traditional indicator variogram-based approaches: (1) intensive data sets are needed to develop models of spatial variability by empirical curve-fitting to sample indicator (cross-) variograms and to implement ''post-processing'' simulation algorithms; and (2) the prevalent ''sequential indicator simulation'' (SIS) methods do not accurately produce patterns of spatial variability for systems with three or more facies (Seifert and Jensen, 1999). This paper demonstrates an alternative transition probability/Markov approach that emphasizes: (1) Conceptual understanding of the parameters of the spatial variability model, so that geologic insight can support and enhance model development when data are sparse. (2) Mathematical rigor, so that the ''coregionalization'' model (including the spatial cross-correlations) obeys probability law. (3) Consideration of spatial cross-correlation, so that juxtapositional tendencies--how frequently one facies tends to occur adjacent to another facies--are honored.
Seifert, Michael; Abou-El-Ardat, Khalil; Friedrich, Betty; Klink, Barbara; Deutsch, Andreas
2014-01-01
Changes in gene expression programs play a central role in cancer. Chromosomal aberrations such as deletions, duplications and translocations of DNA segments can lead to highly significant positive correlations of gene expression levels of neighboring genes. This should be utilized to improve the analysis of tumor expression profiles. Here, we develop a novel model class of autoregressive higher-order Hidden Markov Models (HMMs) that carefully exploit local data-dependent chromosomal dependencies to improve the identification of differentially expressed genes in tumor. Autoregressive higher-order HMMs overcome generally existing limitations of standard first-order HMMs in the modeling of dependencies between genes in close chromosomal proximity by the simultaneous usage of higher-order state-transitions and autoregressive emissions as novel model features. We apply autoregressive higher-order HMMs to the analysis of breast cancer and glioma gene expression data and perform in-depth model evaluation studies. We find that autoregressive higher-order HMMs clearly improve the identification of overexpressed genes with underlying gene copy number duplications in breast cancer in comparison to mixture models, standard first- and higher-order HMMs, and other related methods. The performance benefit is attributed to the simultaneous usage of higher-order state-transitions in combination with autoregressive emissions. This benefit could not be reached by using each of these two features independently. We also find that autoregressive higher-order HMMs are better able to identify differentially expressed genes in tumors independent of the underlying gene copy number status in comparison to the majority of related methods. This is further supported by the identification of well-known and of previously unreported hotspots of differential expression in glioblastomas demonstrating the efficacy of autoregressive higher-order HMMs for the analysis of individual tumor expression
1986-10-01
these theorems to find steady-state solutions of Markov chains are analysed. The results obtained in this way are then applied to quasi birth-death processes. Keywords: computations; algorithms; equalibrium equations.
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine the relationship between exponential correlation functions and Markov models in a bacterial genome in detail. Despite the well known fact that Markov models generate sequences with correlation function that decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order), and treating the DNA sequence as being homogeneous all fail to predict the value of exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as by non-coding sequences. These show that the seemingly simple exponential correlation functions in bacterial genome hide a complexity in correlation structure which is not suitable for a modeling by Markov chain in a homogeneous sequence. Other results include: use of the (absolute value) second largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
Gold price effect on stock market: A Markov switching vector error correction approach
NASA Astrophysics Data System (ADS)
Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok
2014-06-01
Gold is a popular precious metal where the demand is driven not only for practical use but also as a popular investments commodity. While stock market represents a country growth, thus gold price effect on stock market behavior as interest in the study. Markov Switching Vector Error Correction Models are applied to analysis the relationship between gold price and stock market changes since real financial data always exhibit regime switching, jumps or missing data through time. Besides, there are numerous specifications of Markov Switching Vector Error Correction Models and this paper will compare the intercept adjusted Markov Switching Vector Error Correction Model and intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine the best model representation in capturing the transition of the time series. Results have shown that gold price has a positive relationship with Malaysia, Thailand and Indonesia stock market and a two regime intercept adjusted heteroskedasticity Markov Switching Vector Error Correction Model is able to provide the more significance and reliable result compare to intercept adjusted Markov Switching Vector Error Correction Models.
Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase
2016-01-25
A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals and penalize the deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights in determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).
Kim, M; Ghate, A; Phillips, M H
2009-07-21
The current state of the art in cancer treatment by radiation optimizes beam intensity spatially such that tumors receive high dose radiation whereas damage to nearby healthy tissues is minimized. It is common practice to deliver the radiation over several weeks, where the daily dose is a small constant fraction of the total planned. Such a 'fractionation schedule' is based on traditional models of radiobiological response where normal tissue cells possess the ability to repair sublethal damage done by radiation. This capability is significantly less prominent in tumors. Recent advances in quantitative functional imaging and biological markers are providing new opportunities to measure patient response to radiation over the treatment course. This opens the door for designing fractionation schedules that take into account the patient's cumulative response to radiation up to a particular treatment day in determining the fraction on that day. We propose a novel approach that, for the first time, mathematically explores the benefits of such fractionation schemes. This is achieved by building a stylistic Markov decision process (MDP) model, which incorporates some key features of the problem through intuitive choices of state and action spaces, as well as transition probability and reward functions. The structure of optimal policies for this MDP model is explored through several simple numerical examples.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2017-01-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (<˜20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs-an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provide a meaningful framework for interrogating Earth structure from receiver function data.
A Markov Chain Monte Carlo Approach to Estimate AIDS after HIV Infection.
Apenteng, Ofosuhene O; Ismail, Noor Azina
2015-01-01
The spread of human immunodeficiency virus (HIV) infection and the resulting acquired immune deficiency syndrome (AIDS) is a major health concern in many parts of the world, and mathematical models are commonly applied to understand the spread of the HIV epidemic. To understand the spread of HIV and AIDS cases and their parameters in a given population, it is necessary to develop a theoretical framework that takes into account realistic factors. The current study used this framework to assess the interaction between individuals who developed AIDS after HIV infection and individuals who did not develop AIDS after HIV infection (pre-AIDS). We first investigated how probabilistic parameters affect the model in terms of the HIV and AIDS population over a period of time. We observed that there is a critical threshold parameter, R0, which determines the behavior of the model. If R0 ≤ 1, there is a unique disease-free equilibrium; if R0 < 1, the disease dies out; and if R0 > 1, the disease-free equilibrium is unstable. We also show how a Markov chain Monte Carlo (MCMC) approach could be used as a supplement to forecast the numbers of reported HIV and AIDS cases. An approach using a Monte Carlo analysis is illustrated to understand the impact of model-based predictions in light of uncertain parameters on the spread of HIV. Finally, to examine this framework and demonstrate how it works, a case study was performed of reported HIV and AIDS cases from an annual data set in Malaysia, and then we compared how these approaches complement each other. We conclude that HIV disease in Malaysia shows epidemic behavior, especially in the context of understanding and predicting emerging cases of HIV and AIDS.
NASA Astrophysics Data System (ADS)
Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.
2016-10-01
Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ˜20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs - an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provide a meaningful framework for interrogating Earth structure from receiver function data.
Dynamic response of mechanical systems to impulse process stochastic excitations: Markov approach
NASA Astrophysics Data System (ADS)
Iwankiewicz, R.
2016-05-01
Methods for determination of the response of mechanical dynamic systems to Poisson and non-Poisson impulse process stochastic excitations are presented. Stochastic differential and integro-differential equations of motion are introduced. For systems driven by Poisson impulse process the tools of the theory of non-diffusive Markov processes are used. These are: the generalized Itô’s differential rule which allows to derive the differential equations for response moments and the forward integro-differential Chapman-Kolmogorov equation from which the equation governing the probability density of the response is obtained. The relation of Poisson impulse process problems to the theory of diffusive Markov processes is given. For systems driven by a class of non-Poisson (Erlang renewal) impulse processes an exact conversion of the original non-Markov problem into a Markov one is based on the appended Markov chain corresponding to the introduced auxiliary pure jump stochastic process. The derivation of the set of integro-differential equations for response probability density and also a moment equations technique are based on the forward integro-differential Chapman-Kolmogorov equation. An illustrating numerical example is also included.
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach
ERIC Educational Resources Information Center
Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro
2005-01-01
Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.
Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter
2016-04-01
Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.
Memory kernel approach to generalized Pauli channels: Markovian, semi-Markov, and beyond
NASA Astrophysics Data System (ADS)
Siudzińska, Katarzyna; Chruściński, Dariusz
2017-08-01
In this paper we analyze the evolution of generalized Pauli channels governed by the memory kernel master equation. We provide necessary and sufficient conditions for the memory kernel to give rise to legitimate (completely positive and trace-preserving) quantum evolution. In particular, we analyze a class of kernels generating the quantum semi-Markov evolution, which is a natural generalization of the Markovian semigroup. Interestingly, the convex combination of Markovian semigroups goes beyond the semi-Markov case. Our analysis is illustrated with several examples.
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal
Steen Magnussen
2009-01-01
Areas burned annually in 29 Canadian forest fire regions show a patchy and irregular correlation structure that significantly influences the distribution of annual totals for Canada and for groups of regions. A binary Monte Carlo Markov Chain (MCMC) is constructed for the purpose of joint simulation of regional areas burned in forest fires. For each year the MCMC...
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
Fitting timeseries by continuous-time Markov chains: A quadratic programming approach
NASA Astrophysics Data System (ADS)
Crommelin, D. T.; Vanden-Eijnden, E.
2006-09-01
Construction of stochastic models that describe the effective dynamics of observables of interest is an useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the object function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
Fitting timeseries by continuous-time Markov chains: A quadratic programming approach
Crommelin, D.T. . E-mail: crommelin@cims.nyu.edu; Vanden-Eijnden, E. . E-mail: eve2@cims.nyu.edu
2006-09-20
Construction of stochastic models that describe the effective dynamics of observables of interest is an useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the object function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
Hidden Markov models and other machine learning approaches in computational molecular biology
Baldi, P.
1995-12-31
This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed. They are: Hidden Markov models; artificial Neural Networks; Belief Networks; and Stochastic Grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are one of the most flexible and powerful alignments and data base searches. In this tutorial, attention is focused on the theory of Hidden Markov Models, and how to apply them to problems in molecular biology.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
A regular Markov source is defined as the output of a deterministic, but noisy, channel driven by the state sequence of a regular finite-state Markov chain. The rate of such a source is the per letter uncertainty of its digits. The well-known result that the rate of a unifilar regular Markov source is easily calculable is demonstrated, where unifilarity means that the present state of the Markov chain and the next output of the deterministic channel uniquely determine the next state. At present, there is no known method to calculate the rate of a nonunifilar source. Two tentative approaches to this unsolved problem are given, namely source identical twins and the master-slave source, which appear to shed some light on the question of rate calculation for a nonunifilar source.
Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach
NASA Technical Reports Server (NTRS)
Charng, T.
1982-01-01
Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. According to a joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure for the expected long-run total discounted costs is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach
NASA Technical Reports Server (NTRS)
Charng, T.
1982-01-01
Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. According to a joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure for the expected long-run total discounted costs is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
Koenig, Lane; Zhang, Qian; Austin, Matthew S; Demiralp, Berna; Fehring, Thomas K; Feng, Chaoling; Mather, Richard C; Nguyen, Jennifer T; Saavoss, Asha; Springer, Bryan D; Yates, Adolph J
2016-12-01
rates to assess the robustness of our Markov model results. Compared with nonsurgical treatments, THA increased average annual productivity of patients by USD 9503 (95% CI, USD 1446-USD 17,812). We found that THA increases average lifetime direct costs by USD 30,365, which were offset by USD 63,314 in lifetime savings from increased productivity. With net societal savings of USD 32,948 per patient, total lifetime societal savings were estimated at almost USD 10 billion from more than 300,000 THAs performed in the United States each year. Using a Markov model approach, we show that THA produces societal benefits that can offset the costs of THA. When comparing THA with other nonsurgical treatments, policymakers should consider the long-term benefits associated with increased productivity from surgery. Level III, economic and decision analysis.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Dufour, F.; Piunovskiy, A. B.
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
Ficz, Gabriella; Wolf, Verena; Walter, Jörn
2016-01-01
DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance. PMID:27224554
A Markov random field approach for modeling spatio-temporal evolution of microstructures
NASA Astrophysics Data System (ADS)
Acar, Pinar; Sundararaghavan, Veera
2016-10-01
The following problem is addressed: ‘Can one synthesize microstructure evolution over a large area given experimental movies measured over smaller regions?’ Our input is a movie of microstructure evolution over a small sample window. A Markov random field (MRF) algorithm is developed that uses this data to estimate the evolution of microstructure over a larger region. Unlike the standard microstructure reconstruction problem based on stationary images, the present algorithm is also able to reconstruct time-evolving phenomena such as grain growth. Such an algorithm would decrease the cost of full-scale microstructure measurements by coupling mathematical estimation with targeted small-scale spatiotemporal measurements. The grain size, shape and orientation distribution statistics of synthesized polycrystalline microstructures at different times are compared with the original movie to verify the method.
Scene estimation from speckled synthetic aperture radar imagery: Markov-random-field approach.
Lankoande, Ousseini; Hayat, Majeed M; Santhanam, Balu
2006-06-01
A novel Markov-random-field model for speckled synthetic aperture radar (SAR) imagery is derived according to the physical, spatial statistical properties of speckle noise in coherent imaging. A convex Gibbs energy function for speckled images is derived and utilized to perform speckle-compensating image estimation. The image estimation is formed by computing the conditional expectation of the noisy image at each pixel given its neighbors, which is further expressed in terms of the derived Gibbs energy function. The efficacy of the proposed technique, in terms of reducing speckle noise while preserving spatial resolution, is studied by using both real and simulated SAR imagery. Using a number of commonly used metrics, the performance of the proposed technique is shown to surpass that of existing speckle-noise-filtering methods such as the Gamma MAP, the modified Lee, and the enhanced Frost.
Higher-order phase shift reconstruction approach
Cong Wenxiang; Wang Ge
2010-10-15
Purpose: Biological soft tissues encountered in clinical and preclinical imaging mainly consists of atoms of light elements with low atomic numbers and their elemental composition is nearly uniform with little density variation. Hence, x-ray attenuation contrast is relatively poor and cannot achieve satisfactory sensitivity and specificity. In contrast, x-ray phase-contrast provides a new mechanism for soft tissue imaging. The x-ray phase shift of soft tissues is about a thousand times greater than the x-ray absorption over the diagnostic x-ray energy range, yielding a higher signal-to-noise ratio than the attenuation contrast counterpart. Thus, phase-contrast imaging is a promising technique to reveal detailed structural variation in soft tissues, offering a high contrast resolution between healthy and malignant tissues. Here the authors develop a novel phase retrieval method to reconstruct the phase image on the object plane from the intensity measurements. The reconstructed phase image is a projection of the phase shift induced by an object and serves as input to reconstruct the 3D refractive index distribution inside the object using a tomographic reconstruction algorithm. Such x-ray refractive index images can reveal structural features in soft tissues, with excellent resolution differentiating healthy and malignant tissues. Methods: A novel phase retrieval approach is proposed to reconstruct an x-ray phase image of an object based on the paraxial Fresnel-Kirchhoff diffraction theory. A primary advantage of the authors' approach is higher-order accuracy over that with the conventional linear approximation models, relaxing the current restriction of slow phase variation. The nonlinear terms in the autocorrelation equation of the Fresnel diffraction pattern are eliminated using intensity images measured at different distances in the Fresnel diffraction region, simplifying the phase reconstruction to a linear inverse problem. Numerical experiments are performed
CIGALEMC: GALAXY PARAMETER ESTIMATION USING A MARKOV CHAIN MONTE CARLO APPROACH WITH CIGALE
Serra, Paolo; Amblard, Alexandre; Temi, Pasquale; Im, Stephen; Noll, Stefan
2011-10-10
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the astrophysical parameter space using a modified version of the publicly available code Code Investigating GALaxy Emission (CIGALE). The original CIGALE builds a grid of theoretical spectral energy distribution (SED) models and fits to photometric fluxes from ultraviolet to infrared to put constraints on parameters related to both formation and evolution of galaxies. Such a grid-based method can lead to a long and challenging parameter extraction since the computation time increases exponentially with the number of parameters considered and results can be dependent on the density of sampling points, which must be chosen in advance for each parameter. MCMC methods, on the other hand, scale approximately linearly with the number of parameters, allowing a faster and more accurate exploration of the parameter space by using a smaller number of efficiently chosen samples. We test our MCMC version of the code CIGALE (called CIGALEMC) with simulated data. After checking the ability of the code to retrieve the input parameters used to build the mock sample, we fit theoretical SEDs to real data from the well-known and -studied Spitzer Infrared Nearby Galaxy Survey sample. We discuss constraints on the parameters and show the advantages of our MCMC sampling method in terms of accuracy of the results and optimization of CPU time.
Fuzzy Hidden Markov Models: a new approach in multiple sequence alignment.
Collyda, Chrysa; Diplaris, Sotiris; Mitkas, Pericles A; Maglaveras, Nicos; Pappas, Costas
2006-01-01
This paper proposes a novel method for aligning multiple genomic or proteomic sequences using a fuzzyfied Hidden Markov Model (HMM). HMMs are known to provide compelling performance among multiple sequence alignment (MSA) algorithms, yet their stochastic nature does not help them cope with the existing dependence among the sequence elements. Fuzzy HMMs are a novel type of HMMs based on fuzzy sets and fuzzy integrals which generalizes the classical stochastic HMM, by relaxing its independence assumptions. In this paper, the fuzzy HMM model for MSA is mathematically defined. New fuzzy algorithms are described for building and training fuzzy HMMs, as well as for their use in aligning multiple sequences. Fuzzy HMMs can also increase the model capability of aligning multiple sequences mainly in terms of computation time. Modeling the multiple sequence alignment procedure with fuzzy HMMs can yield a robust and time-effective solution that can be widely used in bioinformatics in various applications, such as protein classification, phylogenetic analysis and gene prediction, among others.
SoDA2: a Hidden Markov Model approach for identification of immunoglobulin rearrangements
Munshaw, Supriya; Kepler, Thomas B.
2010-01-01
Motivation: The inference of pre-mutation immunoglobulin (Ig) rearrangements is essential in the study of the antibody repertoires produced in response to infection, in B-cell neoplasms and in autoimmune disease. Often, there are several rearrangements that are nearly equivalent as candidates for a given Ig gene, but have different consequences in an analysis. Our aim in this article is to develop a probabilistic model of the rearrangement process and a Bayesian method for estimating posterior probabilities for the comparison of multiple plausible rearrangements. Results: We have developed SoDA2, which is based on a Hidden Markov Model and used to compute the posterior probabilities of candidate rearrangements and to find those with the highest values among them. We validated the software on a set of simulated data, a set of clonally related sequences, and a group of randomly selected Ig heavy chains from Genbank. In most tests, SoDA2 performed better than other available software for the task. Furthermore, the output format has been redesigned, in part, to facilitate comparison of multiple solutions. Availability: SoDA2 is available online at https://hippocrates.duhs.duke.edu/soda. Simulated sequences are available upon request. Contact: kepler@duke.edu PMID:20147303
Link between unemployment and crime in the US: a Markov-Switching approach.
Fallahi, Firouz; Rodríguez, Gabriel
2014-05-01
This study has two goals. The first is to use Markov Switching models to identify and analyze the cycles in the unemployment rate and four different types of property-related criminal activities in the US. The second is to apply the nonparametric concordance index of Harding and Pagan (2006) to determine the correlation between the cycles of unemployment rate and property crimes. Findings show that there is a positive but insignificant relationship between the unemployment rate, burglary, larceny, and robbery. However, the unemployment rate has a significant and negative (i.e., a counter-cyclical) relationship with motor-vehicle theft. Therefore, more motor-vehicle thefts occur during economic expansions relative to contractions. Next, we divide the sample into three different subsamples to examine the consistency of the findings. The results show that the co-movements between the unemployment rate and property crimes during recession periods are much weaker, when compared with that of the normal periods of the US economy. Copyright © 2013 Elsevier Inc. All rights reserved.
Ma, Junsheng; Chan, Wenyaw; Tilley, Barbara C
2016-04-04
Continuous time Markov chain models are frequently employed in medical research to study the disease progression but are rarely applied to the transtheoretical model, a psychosocial model widely used in the studies of health-related outcomes. The transtheoretical model often includes more than three states and conceptually allows for all possible instantaneous transitions (referred to as general continuous time Markov chain). This complicates the likelihood function because it involves calculating a matrix exponential that may not be simplified for general continuous time Markov chain models. We undertook a Bayesian approach wherein we numerically evaluated the likelihood using ordinary differential equation solvers available from thegnuscientific library. We compared our Bayesian approach with the maximum likelihood method implemented with theRpackageMSM Our simulation study showed that the Bayesian approach provided more accurate point and interval estimates than the maximum likelihood method, especially in complex continuous time Markov chain models with five states. When applied to data from a four-state transtheoretical model collected from a nutrition intervention study in the next step trial, we observed results consistent with the results of the simulation study. Specifically, the two approaches provided comparable point estimates and standard errors for most parameters, but the maximum likelihood offered substantially smaller standard errors for some parameters. Comparable estimates of the standard errors are obtainable from packageMSM, which works only when the model estimation algorithm converges.
Quantum Enhanced Inference in Markov Logic Networks
NASA Astrophysics Data System (ADS)
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach
Xu, Lixin; Lu, Jianbo E-mail: lvjianbo819@163.com
2010-03-01
We use the Markov Chain Monte Carlo method to investigate a global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type supernovae Ia (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for GCG model are, Ω{sub b}h{sup 2} = 0.0235{sup +0.0021}{sub −0.0018} (1σ) {sup +0.0028}{sub −0.0022} (2σ), Ω{sub k} = 0.0035{sup +0.0172}{sub −0.0182} (1σ) {sup +0.0226}{sub −0.0204} (2σ), A{sub s} = 0.753{sup +0.037}{sub −0.035} (1σ) {sup +0.045}{sub −0.044} (2σ), α = 0.043{sup +0.102}{sub −0.106} (1σ) {sup +0.134}{sub −0.117} (2σ), and H{sub 0} = 70.00{sup +3.25}{sub −2.92} (1σ) {sup +3.77}{sub −3.67} (2σ), which is more stringent than the previous results for constraint on GCG model parameters. Furthermore, according to the information criterion, it seems that the current observations much support ΛCDM model relative to the GCG model.
Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2005-02-07
Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on one hand and from observed data on the other hand is challenging. An example of such inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach does not only yield a single best-guess solution, but a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The MCMC procedure proposed explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be a crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam with MCMC exploration carried out at three different resolutions.
A Markov decision process approach to multi-category patient scheduling in a diagnostic facility.
Gocgun, Yasin; Bresnahan, Brian W; Ghate, Archis; Gunn, Martin L
2011-10-01
To develop a mathematical model for multi-category patient scheduling decisions in computed tomography (CT), and to investigate associated tradeoffs from economic and operational perspectives. We modeled this decision-problem as a finite-horizon Markov decision process (MDP) with expected net CT revenue as the performance metric. The performance of optimal policies was compared with five heuristics using data from an urban hospital. In addition to net revenue, other patient-throughput and service-quality metrics were also used in this comparative analysis. The optimal policy had a threshold structure in the two-scanner case - it prioritized one type of patient when the queue-length for that type exceeded a threshold. The net revenue gap between the optimal policy and the heuristics ranged from 5% to 12%. This gap was 4% higher in the more congested, single-scanner system than in the two-scanner system. The performance of the net revenue maximizing policy was similar to the heuristics, when compared with respect to the alternative performance metrics in the two-scanner case. Under the optimal policy, the average number of patients that were not scanned by the end of the day, and the average patient waiting-time, were both nearly 80% smaller in the two-scanner case than in the single-scanner case. The net revenue gap between the optimal policy and the priority-based heuristics was nearly 2% smaller as compared to the first-come-first-served and random selection schemes. Net revenue was most sensitive to inpatient (IP) penalty costs in the single-scanner system, whereas to IP and outpatient revenues in the two-scanner case. The performance of the optimal policy is competitive with the operational and economic metrics considered in this paper. Such a policy can be implemented relatively easily and could be tested in practice in the future. The priority-based heuristics are next-best to the optimal policy and are much easier to implement. Copyright © 2011 Elsevier B
Uğuz, Harun; Güraksın, Gür Emre; Ergün, Uçman; Saraçoğlu, Rıdvan
2011-07-01
When the maximum likelihood approach (ML) is used during the calculation of the Discrete Hidden Markov Model (DHMM) parameters, DHMM parameters of the each class are only calculated using the training samples (positive training samples) of the same class. The training samples (negative training samples) not belonging to that class are not used in the calculation of DHMM model parameters. With the aim of supplying that deficiency, by involving the training samples of all classes in calculating processes, a Rocchio algorithm based approach is suggested. During the calculation period, in order to determine the most appropriate values of parameters for adjusting the relative effect of the positive and negative training samples, a Genetic algorithm is used as an optimization technique. The purposed method is used to classify the internal carotid artery Doppler signals recorded from 136 patients as well as of 55 healthy people. Our proposed method reached 97.38% classification accuracy with fivefold cross-validation (CV) technique. The classification results showed that the proposed method was effective for the classification of internal carotid artery Doppler signals.
Markov Chains and Chemical Processes
ERIC Educational Resources Information Center
Miller, P. J.
1972-01-01
Views as important the relating of abstract ideas of modern mathematics now being taught in the schools to situations encountered in the sciences. Describes use of matrices and Markov chains to study first-order processes. (Author/DF)
NASA Astrophysics Data System (ADS)
Wang, Jun; Yang, Xuezhi; Jia, Lu; Zhou, Fang; Ai, Jiaqiu
2017-04-01
The problem of change detection in bitemporal synthetic aperture radar (SAR) images is studied. Motivated by utilizing nondense neighborhoods around pixels to detect the change level, a pointwise change detection approach is developed by employing a bilaterally weighted graph model and an irregular Markov random field (I-MRF). First, keypoints with local maximum intensity are extracted from one of the bitemporal images to describe the textural information of the images. Then, two bilaterally weighted graphs with the same topology are constructed for the bitemporal images using the keypoints, respectively. They utilize both the spatial structural and intensity information to provide good performance for feature-based change detection. Next, a change measure function is designed to evaluate the similarity between the graphs, and then the nondense difference image (NDI) is generated. Finally, an I-MRF with a generalized neighborhood system is proposed to classify the discrete keypoints on the NDI. Experiments on real SAR images show that the proposed NDI improves separability between changed and unchanged areas, and I-MRF provides high accuracy and strong noise immunity for change detection tasks with noise-contaminated SAR images. On the whole, the proposed approach is a good candidate for SAR image change detection.
Geng, Bo; Zhou, Xiaobo; Zhu, Jinmin; Hung, Y S; Wong, Stephen T C
2008-04-01
Computational identification of missing enzymes plays a significant role in accurate and complete reconstruction of metabolic network for both newly sequenced and well-studied organisms. For a metabolic reaction, given a set of candidate enzymes identified according to certain biological evidences, a powerful mathematical model is required to predict the actual enzyme(s) catalyzing the reactions. In this study, several plausible predictive methods are considered for the classification problem in missing enzyme identification, and comparisons are performed with an aim to identify a method with better performance than the Bayesian model used in previous work. In particular, a regression model consisting of a linear term and a nonlinear term is proposed to apply to the problem, in which the reversible jump Markov-chain-Monte-Carlo (MCMC) learning technique (developed in [Andrieu C, Freitas Nando de, Doucet A. Robust full Bayesian learning for radial basis networks 2001;13:2359-407.]) is adopted to estimate the model order and the parameters. We evaluated the models using known reactions in Escherichia coli, Mycobacterium tuberculosis, Vibrio cholerae and Caulobacter cresentus bacteria, as well as one eukaryotic organism, Saccharomyces Cerevisiae. Although support vector regression also exhibits comparable performance in this application, it was demonstrated that the proposed model achieves favorable prediction performance, particularly sensitivity, compared with the Bayesian method.
NASA Astrophysics Data System (ADS)
Haruna, T.; Nakajima, K.
2013-06-01
The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we have shown in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of the two measures for general ergodic SSPs not necessarily having the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of permutation analogues of them for the N-th order approximation by hidden Markov models, respectively. In the second approach, we employ the modified permutation partition of the set of words which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.
Lee, J K; Thomas, D C
2000-11-01
Markov chain-Monte Carlo (MCMC) techniques for multipoint mapping of quantitative trait loci have been developed on nuclear-family and extended-pedigree data. These methods are based on repeated sampling-peeling and gene dropping of genotype vectors and random sampling of each of the model parameters from their full conditional distributions, given phenotypes, markers, and other model parameters. We further refine such approaches by improving the efficiency of the marker haplotype-updating algorithm and by adopting a new proposal for adding loci. Incorporating these refinements, we have performed an extensive simulation study on simulated nuclear-family data, varying the number of trait loci, family size, displacement, and other segregation parameters. Our simulation studies show that our MCMC algorithm identifies the locations of the true trait loci and estimates their segregation parameters well-provided that the total number of sibship pairs in the pedigree data is reasonably large, heritability of each individual trait locus is not too low, and the loci are not too close together. Our MCMC algorithm was shown to be significantly more efficient than LOKI (Heath 1997) in our simulation study using nuclear-family data.
Lee, Jae K.; Thomas, Duncan C.
2000-01-01
Markov chain–Monte Carlo (MCMC) techniques for multipoint mapping of quantitative trait loci have been developed on nuclear-family and extended-pedigree data. These methods are based on repeated sampling—peeling and gene dropping of genotype vectors and random sampling of each of the model parameters from their full conditional distributions, given phenotypes, markers, and other model parameters. We further refine such approaches by improving the efficiency of the marker haplotype-updating algorithm and by adopting a new proposal for adding loci. Incorporating these refinements, we have performed an extensive simulation study on simulated nuclear-family data, varying the number of trait loci, family size, displacement, and other segregation parameters. Our simulation studies show that our MCMC algorithm identifies the locations of the true trait loci and estimates their segregation parameters well—provided that the total number of sibship pairs in the pedigree data is reasonably large, heritability of each individual trait locus is not too low, and the loci are not too close together. Our MCMC algorithm was shown to be significantly more efficient than LOKI (Heath 1997) in our simulation study using nuclear-family data. PMID:11032787
Benoit, Julia S; Chan, Wenyaw; Luo, Sheng; Yeh, Hung-Wen; Doody, Rachelle
2016-04-30
Understanding the dynamic disease process is vital in early detection, diagnosis, and measuring progression. Continuous-time Markov chain (CTMC) methods have been used to estimate state-change intensities but challenges arise when stages are potentially misclassified. We present an analytical likelihood approach where the hidden state is modeled as a three-state CTMC model allowing for some observed states to be possibly misclassified. Covariate effects of the hidden process and misclassification probabilities of the hidden state are estimated without information from a 'gold standard' as comparison. Parameter estimates are obtained using a modified expectation-maximization (EM) algorithm, and identifiability of CTMC estimation is addressed. Simulation studies and an application studying Alzheimer's disease caregiver stress-levels are presented. The method was highly sensitive to detecting true misclassification and did not falsely identify error in the absence of misclassification. In conclusion, we have developed a robust longitudinal method for analyzing categorical outcome data when classification of disease severity stage is uncertain and the purpose is to study the process' transition behavior without a gold standard.
Serial Order: A Parallel Distributed Processing Approach.
ERIC Educational Resources Information Center
Jordan, Michael I.
Human behavior shows a variety of serially ordered action sequences. This paper presents a theory of serial order which describes how sequences of actions might be learned and performed. In this theory, parallel interactions across time (coarticulation) and parallel interactions across space (dual-task interference) are viewed as two aspects of a…
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
NASA Astrophysics Data System (ADS)
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognitions. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et. al (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.
Abstraction Augmented Markov Models.
Caragea, Cornelia; Silvescu, Adrian; Caragea, Doina; Honavar, Vasant
2010-12-13
High accuracy sequence classification often requires the use of higher order Markov models (MMs). However, the number of MM parameters increases exponentially with the range of direct dependencies between sequence elements, thereby increasing the risk of overfitting when the data set is limited in size. We present abstraction augmented Markov models (AAMMs) that effectively reduce the number of numeric parameters of k(th) order MMs by successively grouping strings of length k (i.e., k-grams) into abstraction hierarchies. We evaluate AAMMs on three protein subcellular localization prediction tasks. The results of our experiments show that abstraction makes it possible to construct predictive models that use significantly smaller number of features (by one to three orders of magnitude) as compared to MMs. AAMMs are competitive with and, in some cases, significantly outperform MMs. Moreover, the results show that AAMMs often perform significantly better than variable order Markov models, such as decomposed context tree weighting, prediction by partial match, and probabilistic suffix trees.
On the entropy of wide Markov chains
NASA Astrophysics Data System (ADS)
Girardin, Valerie
2011-03-01
Burg entropy concepts are here introduced in the field of wide Markov chains. These random sequences are the second-order equivalent of Markov chains: their future evolution, in terms of second order properties, conditional on the past and present, depends only on the present. Either periodically correlated or multivariate stationary, they can be characterized in terms of autoregressive models of order one.
NASA Astrophysics Data System (ADS)
Kelley, Nicholas W.; Vishal, V.; Krafft, Grant A.; Pande, Vijay S.
2008-12-01
Here, we present a novel computational approach for describing the formation of oligomeric assemblies at experimental concentrations and timescales. We propose an extension to the Markovian state model approach, where one includes low concentration oligomeric states analytically. This allows simulation on long timescales (seconds timescale) and at arbitrarily low concentrations (e.g., the micromolar concentrations found in experiments), while still using an all-atom model for protein and solvent. As a proof of concept, we apply this methodology to the oligomerization of an Aβ peptide fragment (Aβ21-43). Aβ oligomers are now widely recognized as the primary neurotoxic structures leading to Alzheimer's disease. Our computational methods predict that Aβ trimers form at micromolar concentrations in 10ms, while tetramers form 1000 times more slowly. Moreover, the simulation results predict specific intermonomer contacts present in the oligomer ensemble as well as putative structures for small molecular weight oligomers. Based on our simulations and statistical models, we propose a novel mutation to stabilize the trimeric form of Aβ in an experimentally verifiable manner.
Borchani, Hanen; Bielza, Concha; Martı Nez-Martı N, Pablo; Larrañaga, Pedro
2012-12-01
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.
NASA Astrophysics Data System (ADS)
Ramirez, A. L.; Foxall, W.
2011-12-01
Surface displacements caused by reservoir pressure perturbations resulting from CO2 injection can often be measured by geodetic methods such as InSAR, tilt and GPS. We have developed a Markov Chain Monte Carlo (MCMC) approach to invert surface displacements measured by InSAR to map the pressure distribution associated with CO2 injection at the In Salah Krechba field, Algeria. The MCMC inversion entails sampling the solution space by proposing a series of trial 3D pressure-plume models. In the case of In Salah, the range of allowable models is constrained by prior information provided by well and geophysical data for the reservoir and possible fluid pathways in the overburden, and injection pressures and volumes. Each trial pressure distribution source is run through a (mathematical) forward model to calculate a set of synthetic surface deformation data. The likelihood that a particular proposal represents the true source is determined from the fit of the calculated data to the InSAR measurements, and those having higher likelihoods are passed to the posterior distribution. This procedure is repeated over typically ~104 - 105 trials until the posterior distribution converges to a stable solution. The solution to each stochastic inversion is in the form of Bayesian posterior probability density function (pdf) over the range of the alternative models that are consistent with the measured data and prior information. Therefore, the solution provides not only the highest likelihood model but also a realistic estimate of the solution uncertainty. Our InSalah work considered three flow model alternatives: 1) The first model assumed that the CO2 saturation and fluid pressure changes were confined to the reservoir; 2) the second model allowed the perturbations to occur also in a damage zone inferred in the lower caprock from 3D seismic surveys; and 3) the third model allowed fluid pressure changes anywhere within the reservoir and overburden. Alternative (2) yielded optimal
Pouzat, Christophe; Delescluse, Matthieu; Viot, Pascal; Diebolt, Jean
2004-06-01
Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification ignoring several properties of actual neurons that can ultimately improve classification performance. In this study, we propose a more realistic spike train generation model. It incorporates both a description of "nontrivial" (i.e., non-Poisson) neuronal discharge statistics and a description of spike waveform dynamics (e.g., the events amplitude decays for short interspike intervals). We show that this spike train generation model is analogous to a one-dimensional Potts spin-glass model. We can therefore tailor to our particular case the computational methods that have been developed in fields where Potts models are extensively used, including statistical physics and image restoration. These methods are based on the construction of a Markov chain in the space of model parameters and spike train configurations, where a configuration is defined by specifying a neuron of origin for each spike. This Markov chain is built such that its unique stationary density is the posterior density of model parameters and configurations given the observed data. A Monte Carlo simulation of the Markov chain is then used to estimate the posterior density. We illustrate the way to build the transition matrix of the Markov chain with a simple, but realistic, model for data generation. We use simulated data to illustrate the performance of the method and to show that this approach can easily cope with neurons firing doublets of spikes and/or generating spikes with highly dynamic waveforms. The method cannot automatically find the "correct" number of neurons in the data. User input is required for this important problem and we illustrate how this can be done. We finally discuss further developments of the method.
Network Routing Using the Network Tasking Order, a Chron Approach
2015-03-26
some advanced prediction techniques that are utilized for traffic routing and management as well as some synchronization techniques are presented...NETWORK ROUTING USING THE NETWORK TASKING ORDER, A CHRON APPROACH THESIS Nicholas J. Paltzer...MS-15-M-059 NETWORK ROUTING USING THE NETWORK TASKING ORDER, A CHRON APPROACH THESIS Presented to the Faculty Department of Electrical
Markov Analysis of Sleep Dynamics
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.
2009-05-01
A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
Perturbative approach for non local and high order derivative theories
Avilez, Ana A.; Vergara, J. David
2009-04-20
We propose a reduction method of classical phase space of high order derivative theories in singular and non singular cases. The mechanism is to reduce the high order phase space by imposing suplementary constraints, such that the evolution takes place in a submanifold where high order degrees of freedom are absent. The reduced theory is ordinary and is cured of the usual high order theories diseases, it approaches well low energy dynamics.
Eaves, Lindon; Erkanli, Alaattin
2003-05-01
The linear structural model has provided the statistical backbone of the analysis of twin and family data for 25 years. A new generation of questions cannot easily be forced into the framework of current approaches to modeling and data analysis because they involve nonlinear processes. Maximizing the likelihood with respect to parameters of such nonlinear models is often cumbersome and does not yield easily to current numerical methods. The application of Markov Chain Monte Carlo (MCMC) methods to modeling the nonlinear effects of genes and environment in MZ and DZ twins is outlined. Nonlinear developmental change and genotype x environment interaction in the presence of genotype-environment correlation are explored in simulated twin data. The MCMC method recovers the simulated parameters and provides estimates of error and latent (missing) trait values. Possible limitations of MCMC methods are discussed. Further studies are necessary explore the value of an approach that could extend the horizons of research in developmental genetic epidemiology.
Markov stochasticity coordinates
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method-termed Markov Stochasticity Coordinates-is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kapland, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Blanchet, Juliette; Vignes, Matthieu
2009-03-01
The different measurement techniques that interrogate biological systems provide means for monitoring the behavior of virtually all cell components at different scales and from complementary angles. However, data generated in these experiments are difficult to interpret. A first difficulty arises from high-dimensionality and inherent noise of such data. Organizing them into meaningful groups is then highly desirable to improve our knowledge of biological mechanisms. A more accurate picture can be obtained when accounting for dependencies between components (e.g., genes) under study. A second difficulty arises from the fact that biological experiments often produce missing values. When it is not ignored, the latter issue has been solved by imputing the expression matrix prior to applying traditional analysis methods. Although helpful, this practice can lead to unsound results. We propose in this paper a statistical methodology that integrates individual dependencies in a missing data framework. More explicitly, we present a clustering algorithm dealing with incomplete data in a Hidden Markov Random Field context. This tackles the missing value issue in a probabilistic framework and still allows us to reconstruct missing observations a posteriori without imposing any pre-processing of the data. Experiments on synthetic data validate the gain in using our method, and analysis of real biological data shows its potential to extract biological knowledge.
Narasimhan, Vagheesh; Danecek, Petr; Scally, Aylwyn; Xue, Yali; Tyler-Smith, Chris; Durbin, Richard
2016-01-01
Summary: Runs of homozygosity (RoHs) are genomic stretches of a diploid genome that show identical alleles on both chromosomes. Longer RoHs are unlikely to have arisen by chance but are likely to denote autozygosity, whereby both copies of the genome descend from the same recent ancestor. Early tools to detect RoH used genotype array data, but substantially more information is available from sequencing data. Here, we present and evaluate BCFtools/RoH, an extension to the BCFtools software package, that detects regions of autozygosity in sequencing data, in particular exome data, using a hidden Markov model. By applying it to simulated data and real data from the 1000 Genomes Project we estimate its accuracy and show that it has higher sensitivity and specificity than existing methods under a range of sequencing error rates and levels of autozygosity. Availability and implementation: BCFtools/RoH and its associated binary/source files are freely available from https://github.com/samtools/BCFtools. Contact: vn2@sanger.ac.uk or pd3@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26826718
The fate of priority areas for conservation in protected areas: a fine-scale Markov chain approach.
Tattoni, Clara; Ciolli, Marco; Ferretti, Fabrizio
2011-02-01
Park managers in alpine areas must deal with the increase in forest coverage that has been observed in most European mountain areas, where traditional farming and agricultural practices have been abandoned. The aim of this study is to develop a fine-scale model of a broad area to support the managers of Paneveggio Nature Park (Italy) in conservation planning by focusing on the fate of priority areas for conservation in the next 50-100 years. GIS analyses were performed to assess the afforestation dynamic over time using two historical maps (from 1859 and 1936) and a series of aerial photographs and ortho-photos (taken from 1954 to 2006) covering a time span of 150 years. The results show an increase in the forest surface area of about 35%. Additionally, the forest became progressively more compact and less fragmented, with a consequent loss of ecotones and open habitats that are important for biodiversity. Markov chain-cellular automata models were used to project future changes, evaluating the effects on a habitat scale. Simulations show that some habitats defined as priority by the EU Habitat Directive will be compromised by the forest expansion by 2050 and suffer a consistent loss by 2100. This protocol, applied to other areas, can be used for designing long-term management measures with a focus on habitats where conservation status is at risk.
Borodovsky, M.
2013-04-11
Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.
Mina, Marco; Guzzi, Pietro Hiram
2014-01-01
The analysis of protein behavior at the network level had been applied to elucidate the mechanisms of protein interaction that are similar in different species. Published network alignment algorithms proved to be able to recapitulate known conserved modules and protein complexes, and infer new conserved interactions confirmed by wet lab experiments. In the meantime, however, a plethora of continuously evolving protein-protein interaction (PPI) data sets have been developed, each featuring different levels of completeness and reliability. For instance, algorithms performance may vary significantly when changing the data set used in their assessment. Moreover, existing papers did not deeply investigate the robustness of alignment algorithms. For instance, some algorithms performances vary significantly when changing the data set used in their assessment. In this work, we design an extensive assessment of current algorithms discussing the robustness of the results on the basis of input networks. We also present AlignMCL, a local network alignment algorithm based on an improved model of alignment graph and Markov Clustering. AlignMCL performs better than other state-of-the-art local alignment algorithms over different updated data sets. In addition, AlignMCL features high levels of robustness, producing similar results regardless the selected data set.
The Fate of Priority Areas for Conservation in Protected Areas: A Fine-Scale Markov Chain Approach
NASA Astrophysics Data System (ADS)
Tattoni, Clara; Ciolli, Marco; Ferretti, Fabrizio
2011-02-01
Park managers in alpine areas must deal with the increase in forest coverage that has been observed in most European mountain areas, where traditional farming and agricultural practices have been abandoned. The aim of this study is to develop a fine-scale model of a broad area to support the managers of Paneveggio Nature Park (Italy) in conservation planning by focusing on the fate of priority areas for conservation in the next 50-100 years. GIS analyses were performed to assess the afforestation dynamic over time using two historical maps (from 1859 and 1936) and a series of aerial photographs and ortho-photos (taken from 1954 to 2006) covering a time span of 150 years. The results show an increase in the forest surface area of about 35%. Additionally, the forest became progressively more compact and less fragmented, with a consequent loss of ecotones and open habitats that are important for biodiversity. Markov chain-cellular automata models were used to project future changes, evaluating the effects on a habitat scale. Simulations show that some habitats defined as priority by the EU Habitat Directive will be compromised by the forest expansion by 2050 and suffer a consistent loss by 2100. This protocol, applied to other areas, can be used for designing long-term management measures with a focus on habitats where conservation status is at risk.
Sumner, J G; Fernández-Sánchez, J; Jarvis, P D
2012-04-07
Recent work has discussed the importance of multiplicative closure for the Markov models used in phylogenetics. For continuous-time Markov chains, a sufficient condition for multiplicative closure of a model class is ensured by demanding that the set of rate-matrices belonging to the model class form a Lie algebra. It is the case that some well-known Markov models do form Lie algebras and we refer to such models as "Lie Markov models". However it is also the case that some other well-known Markov models unequivocally do not form Lie algebras (GTR being the most conspicuous example). In this paper, we will discuss how to generate Lie Markov models by demanding that the models have certain symmetries under nucleotide permutations. We show that the Lie Markov models include, and hence provide a unifying concept for, "group-based" and "equivariant" models. For each of two and four character states, the full list of Lie Markov models with maximal symmetry is presented and shown to include interesting examples that are neither group-based nor equivariant. We also argue that our scheme is pleasing in the context of applied phylogenetics, as, for a given symmetry of nucleotide substitution, it provides a natural hierarchy of models with increasing number of parameters. We also note that our methods are applicable to any application of continuous-time Markov chains beyond the initial motivations we take from phylogenetics. Crown Copyright Â© 2011. Published by Elsevier Ltd. All rights reserved.
Indexed semi-Markov process for wind speed modeling.
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
Markov chain with different number of states, and Weibull distribution. All this model use Markov chains to generate synthetic wind speed time series but the search for a better model is still open. Approaching this issue, we applied new models which are generalization of Markov models. More precisely we applied semi-Markov models to generate synthetic wind speed time series. In a previous work we proposed different semi-Markov models, showing their ability to reproduce the autocorrelation structures of wind speed data. In that paper we showed also that the autocorrelation is higher with respect to the Markov model. Unfortunately this autocorrelation was still too small compared to the empirical one. In order to overcome the problem of low autocorrelation, in this paper we propose an indexed semi-Markov model. More precisely we assume that wind speed is described by a discrete time homogeneous semi-Markov process. We introduce a memory index which takes into account the periods of different wind activities. With this model the statistical characteristics of wind speed are faithfully reproduced. The wind is a very unstable phenomenon characterized by a sequence of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the predictive semi-Markovian model, the persistence of synthetic winds were calculated, then averaged and computed. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations and the time lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated though a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating
NASA Astrophysics Data System (ADS)
Behera, Mukunda D.; Borate, Santosh N.; Panda, Sudhindra N.; Behera, Priti R.; Roy, Partha S.
2012-08-01
Improper practices of land use and land cover (LULC) including deforestation, expansion of agriculture and infrastructure development are deteriorating watershed conditions. Here, we have utilized remote sensing and GIS tools to study LULC dynamics using Cellular Automata (CA)-Markov model and predicted the future LULC scenario, in terms of magnitude and direction, based on past trend in a hydrological unit, Choudwar watershed, India. By analyzing the LULC pattern during 1972, 1990, 1999 and 2005 using satellite-derived maps, we observed that the biophysical and socio-economic drivers including residential/industrial development, road-rail and settlement proximity have influenced the spatial pattern of the watershed LULC, leading to an accretive linear growth of agricultural and settlement areas. The annual rate of increase from 1972 to 2004 in agriculture land, settlement was observed to be 181.96, 9.89 ha/year, respectively, while decrease in forest, wetland and marshy land were 91.22, 27.56 and 39.52 ha/year, respectively. Transition probability and transition area matrix derived using inputs of (i) residential/industrial development and (ii) proximity to transportation network as the major causes. The predicted LULC scenario for the year 2014, with reasonably good accuracy would provide useful inputs to the LULC planners for effective management of the watershed. The study is a maiden attempt that revealed agricultural expansion is the main driving force for loss of forest, wetland and marshy land in the Choudwar watershed and has the potential to continue in future. The forest in lower slopes has been converted to agricultural land and may soon take a call on forests occurring on higher slopes. Our study utilizes three time period changes to better account for the trend and the modelling exercise; thereby advocates for better agricultural practices with additional energy subsidy to arrest further forest loss and LULC alternations.
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Numerical approach to differential equations of fractional order
NASA Astrophysics Data System (ADS)
Momani, Shaher; Odibat, Zaid
2007-10-01
In this paper, the variational iteration method and the Adomian decomposition method are implemented to give approximate solutions for linear and nonlinear systems of differential equations of fractional order. The two methods in applied mathematics can be used as alternative methods for obtaining analytic and approximate solutions for different types of differential equations. In these schemes, the solution takes the form of a convergent series with easily computable components. This paper presents a numerical comparison between the two methods for solving systems of fractional differential equations. Numerical results show that the two approaches are easy to implement and accurate when applied to differential equations of fractional order.
NASA Technical Reports Server (NTRS)
Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.
2013-01-01
Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall compared with simulations using observed station precipitation. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow but that this derives from the degree
Speckle reduction via higher order total variation approach.
Wensen Feng; Hong Lei; Yang Gao
2014-04-01
Multiplicative noise (also known as speckle) reduction is a prerequisite for many image-processing tasks in coherent imaging systems, such as the synthetic aperture radar. One approach extensively used in this area is based on total variation (TV) regularization, which can recover significantly sharp edges of an image, but suffers from the staircase-like artifacts. In order to overcome the undesirable deficiency, we propose two novel models for removing multiplicative noise based on total generalized variation (TGV) penalty. The TGV regularization has been mathematically proven to be able to eliminate the staircasing artifacts by being aware of higher order smoothness. Furthermore, an efficient algorithm is developed for solving the TGV-based optimization problems. Numerical experiments demonstrate that our proposed methods achieve state-of-the-art results, both visually and quantitatively. In particular, when the image has some higher order smoothness, our methods outperform the TV-based algorithms.
Structure of ordered coaxial and scroll nanotubes: general approach.
Khalitov, Zufar; Khadiev, Azat; Valeeva, Diana; Pashin, Dmitry
2016-01-01
The explicit formulas for atomic coordinates of multiwalled coaxial and cylindrical scroll nanotubes with ordered structure are developed on the basis of a common oblique lattice. According to this approach, a nanotube is formed by transfer of its bulk analogue structure onto a cylindrical surface (with a circular or spiral cross section) and the chirality indexes of the tube are expressed in the number of unit cells. The monoclinic polytypic modifications of ordered coaxial and scroll nanotubes are also discussed and geometrical conditions of their formation are analysed. It is shown that tube radii of ordered multiwalled coaxial nanotubes are multiples of the layer thickness, and the initial turn radius of the orthogonal scroll nanotube is a multiple of the same parameter or its half.
Stochastic seismic tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Bottero, Alexis; Gesret, Alexandrine; Romary, Thomas; Noble, Mark; Maisons, Christophe
2016-10-01
Markov chain Monte Carlo sampling methods are widely used for non-linear Bayesian inversion where no analytical expression for the forward relation between data and model parameters is available. Contrary to the linear(ized) approaches, they naturally allow to evaluate the uncertainties on the model found. Nevertheless their use is problematic in high-dimensional model spaces especially when the computational cost of the forward problem is significant and/or the a posteriori distribution is multimodal. In this case, the chain can stay stuck in one of the modes and hence not provide an exhaustive sampling of the distribution of interest. We present here a still relatively unknown algorithm that allows interaction between several Markov chains at different temperatures. These interactions (based on importance resampling) ensure a robust sampling of any posterior distribution and thus provide a way to efficiently tackle complex fully non-linear inverse problems. The algorithm is easy to implement and is well adapted to run on parallel supercomputers. In this paper, the algorithm is first introduced and applied to a synthetic multimodal distribution in order to demonstrate its robustness and efficiency compared to a simulated annealing method. It is then applied in the framework of first arrival traveltime seismic tomography on real data recorded in the context of hydraulic fracturing. To carry out this study a wavelet-based adaptive model parametrization has been used. This allows to integrate the a priori information provided by sonic logs and to reduce optimally the dimension of the problem.
Assessing order-(N) approach to flexible multibody dynamics
NASA Astrophysics Data System (ADS)
Hu, Tsay-Hsin G.; Mordfin, Theodore G.; Singh, Sudeep; Singh, Anil; Kumar, Manoj
1992-08-01
An efficient simulation has been developed to analyze the dynamics and control of spacecraft comprised of multiple flexible articulating bodies. The implementation employs a typical order-(N) multibody dynamics approach coupled with a state-of-the-art symbolic equation optimization algorithm. The relationship among computational time, system topology, number of the bodies and number of modes per body is empirically determined. For practical application in a CHAIN topology, the computational time is proportional to NB exp 1.3 NM exp 1.85, where NB is the number of bodies, and NM is the number of modes per body. Applied to the analysis of Space Station Freedom, which has a TREE topology, the order of the method is demonstrated to be proportional to NM exp A, where A varies from 1.4 to 2.0.
Markov Chain Ontology Analysis (MCOA)
2012-01-01
Background Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. Results In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. Conclusion A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches
Steibel, Juan P; Wang, Heng; Zhong, Ping-Shou
2015-02-22
Allelic specific expression (ASE) increases our understanding of the genetic control of gene expression and its links to phenotypic variation. ASE testing is implemented through binomial or beta-binomial tests of sequence read counts of alternative alleles at a cSNP of interest in heterozygous individuals. This requires prior ascertainment of the cSNP genotypes for all individuals. To meet the needs, we propose hidden Markov methods to call SNPs from next generation RNA sequence data when ASE possibly exists. We propose two hidden Markov models (HMMs), HMM-ASE and HMM-NASE that consider or do not consider ASE, respectively, in order to improve genotyping accuracy. Both HMMs have the advantages of calling the genotypes of several SNPs simultaneously and allow mapping error which, respectively, utilize the dependence among SNPs and correct the bias due to mapping error. In addition, HMM-ASE exploits ASE information to further improve genotype accuracy when the ASE is likely to be present. Simulation results indicate that the HMMs proposed demonstrate a very good prediction accuracy in terms of controlling both the false discovery rate (FDR) and the false negative rate (FNR). When ASE is present, the HMM-ASE had a lower FNR than HMM-NASE, while both can control the false discovery rate (FDR) at a similar level. By exploiting linkage disequilibrium (LD), a real data application demonstrate that the proposed methods have better sensitivity and similar FDR in calling heterozygous SNPs than the VarScan method. Sensitivity and FDR are similar to that of the BCFtools and Beagle methods. The resulting genotypes show good properties for the estimation of the genetic parameters and ASE ratios. We introduce HMMs, which are able to exploit LD and account for the ASE and mapping errors, to simultaneously call SNPs from the next generation RNA sequence data. The method introduced can reliably call for cSNP genotypes even in the presence of ASE and under low sequencing coverage. As
Ashraf, Ahmed B; Gavenonis, Sara; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina
2011-01-01
We present a multichannel extension of Markov random fields (MRFs) for incorporating multiple feature streams in the MRF model. We prove that for making inference queries, any multichannel MRF can be reduced to a single channel MRF provided features in different channels are conditionally independent given the hidden variable, Using this result we incorporate kinetic feature maps derived from breast DCE MRI into the observation model of MRF for tumor segmentation. Our algorithm achieves an ROC AUC of 0.97 for tumor segmentation, We present a comparison against the commonly used approach of fuzzy C-means (FCM) and the more recent method of running FCM on enhancement variance features (FCM-VES). These previous methods give a lower AUC of 0.86 and 0.60 respectively, indicating the superiority of our algorithm. Finally, we investigate the effect of superior segmentation on predicting breast cancer recurrence using kinetic DCE MRI features from the segmented tumor regions. A linear prediction model shows significant prediction improvement when segmenting the tumor using the proposed method, yielding a correlation coefficient r = 0.78 (p < 0.05) to validated cancer recurrence probabilities, compared to 0.63 and 0.45 when using FCM and FCM-VES respectively.
Yamamoto, Toshiyuki; Shimojima, Keiko; Ondo, Yumiko; Imai, Katsumi; Chong, Pin Fee; Kira, Ryutaro; Amemiya, Mitsuhiro; Saito, Akira; Okamoto, Nobuhiko
2016-01-01
Next-generation sequencing (NGS) is widely used for the detection of disease-causing nucleotide variants. The challenges associated with detecting copy number variants (CNVs) using NGS analysis have been reported previously. Disease-related exome panels such as Illumina TruSight One are more cost-effective than whole-exome sequencing (WES) because of their selective target regions (~21% of the WES). In this study, CNVs were analyzed using data extracted through a disease-related exome panel analysis and the eXome Hidden Markov Model (XHMM). Samples from 61 patients with undiagnosed developmental delays and 52 healthy parents were included in this study. In the preliminary study to validate the constructed XHMM system (microarray-first approach), 34 patients who had previously been analyzed by chromosomal microarray testing were used. Among the five CNVs larger than 200 kb that were considered as non-pathogenic CNVs and were used as positive controls, four CNVs was successfully detected. The system was subsequently used to analyze different samples from 27 patients (NGS-first approach); 2 of these patients were successfully diagnosed as having pathogenic CNVs (an unbalanced translocation der(5)t(5;14) and a 16p11.2 duplication). These diagnoses were re-confirmed by chromosomal microarray testing and/or fluorescence in situ hybridization. The NGS-first approach generated no false-negative or false-positive results for pathogenic CNVs, indicating its high sensitivity and specificity in detecting pathogenic CNVs. The results of this study show the possible clinical utility of pathogenic CNV screening using disease-related exome panel analysis and XHMM. PMID:27579173
Markov state models of protein misfolding
NASA Astrophysics Data System (ADS)
Sirur, Anshul; De Sancho, David; Best, Robert B.
2016-02-01
Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.
NASA Astrophysics Data System (ADS)
MacBean, Natasha; Disney, Mathias; Lewis, Philip; Ineson, Phil
2010-05-01
profile as a whole. We present results from an Observing System Simulation Experiment (OSSE) designed to investigate the impact of management and climate change on peatland carbon fluxes, as well as how observations from satellites may be able to constrain modeled carbon fluxes. We use an adapted version of the Carnegie-Ames-Stanford Approach (CASA) model (Potter et al., 1993) that includes a representation of methane dynamics (Potter, 1997). The model formulation is further modified to allow for assimilation of satellite observations of surface soil moisture and land surface temperature. The observations are used to update model estimates using a Metropolis Hastings Markov Chain Monte Carlo (MCMC) approach. We examine the effect of temporal frequency and precision of satellite observations with a view to establishing how, and at what level, such observations would make a significant improvement in model uncertainty. We compare this with the system characteristics of existing and future satellites. We believe this is the first attempt to assimilate surface soil moisture and land surface temperature into an ecosystem model that includes a full representation of CH4 flux. Bubier, J., and T. Moore (1994), An ecological perspective on methane emissions from northern wetlands, TREE, 9, 460-464. Charman, D. (2002), Peatlands and Environmental Change, JohnWiley and Sons, Ltd, England. Gorham, E. (1991), Northern peatlands: Role in the carbon cycle and probable responses to climatic warming, Ecological Applications, 1, 182-195. Lai, D. (2009), Methane dynamics in northern peatlands: A review, Pedosphere, 19, 409-421. Le Mer, J., and P. Roger (2001), Production, oxidation, emission and consumption of methane by soils: A review, European Journal of Soil Biology, 37, 25-50. Limpens, J., F. Berendse, J. Canadell, C. Freeman, J. Holden, N. Roulet, H. Rydin, and Potter, C. (1997), An ecosystem simulation model for methane production and emission from wetlands, Global Biogeochemical
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
Ordered LOGIT Model approach for the determination of financial distress.
Kinay, B
2010-01-01
Nowadays, as a result of the global competition encountered, numerous companies come up against financial distresses. To predict and take proactive approaches for those problems is quite important. Thus, the prediction of crisis and financial distress is essential in terms of revealing the financial condition of companies. In this study, financial ratios relating to 156 industrial firms that are quoted in the Istanbul Stock Exchange are used and probabilities of financial distress are predicted by means of an ordered logit regression model. By means of Altman's Z Score, the dependent variable is composed by scaling the level of risk. Thus, a model that can compose an early warning system and predict financial distress is proposed.
Laval, Guillaume; SanCristobal, Magali; Chevalet, Claude
2003-01-01
Maximum-likelihood and Bayesian (MCMC algorithm) estimates of the increase of the Wright-Malécot inbreeding coefficient, F(t), between two temporally spaced samples, were developed from the Dirichlet approximation of allelic frequency distribution (model MD) and from the admixture of the Dirichlet approximation and the probabilities of fixation and loss of alleles (model MDL). Their accuracy was tested using computer simulations in which F(t) = 10% or less. The maximum-likelihood method based on the model MDL was found to be the best estimate of F(t) provided that initial frequencies are known exactly. When founder frequencies are estimated from a limited set of founder animals, only the estimates based on the model MD can be used for the moment. In this case no method was found to be the best in all situations investigated. The likelihood and Bayesian approaches give better results than the classical F-statistics when markers exhibiting a low polymorphism (such as the SNP markers) are used. Concerning the estimations of the effective population size all the new estimates presented here were found to be better than the F-statistics classically used. PMID:12871924
Nielsen, Rasmus
2017-01-01
Admixture—the mixing of genomes from divergent populations—is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which is not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid, however a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy—i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than what previously has been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that regions of local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is the least efficient in low recombination regions. Finally we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry
Fractional System Identification: An Approach Using Continuous Order-Distributions
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Lorenzo, Carl F.
1999-01-01
This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.
Exploiting mid-range DNA patterns for sequence classification: binary abstraction Markov models
Shepard, Samuel S.; McSweeny, Andrew; Serpen, Gursel; Fedorov, Alexei
2012-01-01
Messenger RNA sequences possess specific nucleotide patterns distinguishing them from non-coding genomic sequences. In this study, we explore the utilization of modified Markov models to analyze sequences up to 44 bp, far beyond the 8-bp limit of conventional Markov models, for exon/intron discrimination. In order to analyze nucleotide sequences of this length, their information content is first reduced by conversion into shorter binary patterns via the application of numerous abstraction schemes. After the conversion of genomic sequences to binary strings, homogenous Markov models trained on the binary sequences are used to discriminate between exons and introns. We term this approach the Binary Abstraction Markov Model (BAMM). High-quality abstraction schemes for exon/intron discrimination are selected using optimization algorithms on supercomputers. The best MM classifiers are then combined using support vector machines into a single classifier. With this approach, over 95% classification accuracy is achieved without taking reading frame into account. With further development, the BAMM approach can be applied to sequences lacking the genetic code such as ncRNAs and 5′-untranslated regions. PMID:22344692
[Decision analysis in radiology using Markov models].
Golder, W
2000-01-01
Markov models (Multistate transition models) are mathematical tools to simulate a cohort of individuals followed over time to assess the prognosis resulting from different strategies. They are applied on the assumption that persons are in one of a finite number of states of health (Markov states). Each condition is given a transition probability as well as an incremental value. Probabilities may be chosen constant or varying over time due to predefined rules. Time horizon is divided into equal increments (Markov cycles). The model calculates quality-adjusted life expectancy employing real-life units and values and summing up the length of time spent in each health state adjusted for objective outcomes and subjective appraisal. This sort of modeling prognosis for a given patient is analogous to utility in common decision trees. Markov models can be evaluated by matrix algebra, probabilistic cohort simulation and Monte Carlo simulation. They have been applied to assess the relative benefits and risks of a limited number of diagnostic and therapeutic procedures in radiology. More interventions should be submitted to Markov analyses in order to elucidate their cost-effectiveness.
Homogeneous Superpixels from Markov Random Walks
NASA Astrophysics Data System (ADS)
Perbet, Frank; Stenger, Björn; Maki, Atsuto
This paper presents a novel algorithm to generate homogeneous superpixels from Markov random walks. We exploit Markov clustering (MCL) as the methodology, a generic graph clustering method based on stochastic flow circulation. In particular, we introduce a graph pruning strategy called compact pruning in order to capture intrinsic local image structure. The resulting superpixels are homogeneous, i.e. uniform in size and compact in shape. The original MCL algorithm does not scale well to a graph of an image due to the square computation of the Markov matrix which is necessary for circulating the flow. The proposed pruning scheme has the advantages of faster computation, smaller memory footprint, and straightforward parallel implementation. Through comparisons with other recent techniques, we show that the proposed algorithm achieves state-of-the-art performance.
Performability analysis using semi-Markov reward processes
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.
1990-01-01
Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
Performability analysis using semi-Markov reward processes
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.
1990-01-01
Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
Generalized semi-Markov quantum evolution
NASA Astrophysics Data System (ADS)
Chruściński, Dariusz; Kossakowski, Andrzej
2017-04-01
We provide a large class of quantum evolutions governed by the memory kernel master equation. This class defines a quantum analog of so-called semi-Markov classical stochastic dynamics. In this paper we provide a precise definition of quantum semi-Markov evolution, and using the appropriate gauge freedom we propose a suitable generalization which contains a majority of examples considered so far in the literature. The key concepts are quantum counterparts of classical waiting time distribution and survival probability—a quantum waiting time operator and a quantum survival operator, respectively. In particular collision models and their generalizations considered recently are special examples of generalized semi-Markov evolution. This approach allows for an interesting generalization of the trajectory description of the quantum dynamics in terms of positive operator-valued measure densities.
Evaluation of Usability Utilizing Markov Models
ERIC Educational Resources Information Center
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
Evaluation of Usability Utilizing Markov Models
ERIC Educational Resources Information Center
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, upstates may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
Reports on Text Linguistics: Approaches to Word Order.
ERIC Educational Resources Information Center
Enkvist, Nils Erik; Kohonen, Viljo
This volume contains papers presented in connection with a symposium held in 1975 and sponsored by Abo Akademi, for the purpose of discussing ongoing research in word-order studies. Papers include: (1) a prolegomena by N.E. Enkvist; (2) "On the Ordering of Sister Constituents in Swedish," by E. Andersson; (3) "What is New…
Higher Order Thinking: Definition, Meaning and Instructional Approaches.
ERIC Educational Resources Information Center
Thomas, Ruth G., Ed.
This publication shares current thinking, research, and practice in the area of higher order thinking skills with home economics educators, including teachers, supervisors, and teacher educators. The first three articles provide general discussions of thinking skills. They are "Introduction" (Ruth Pestle); "Can Higher Order Thinking…
Williams, Claire; Lewsey, James D; Mackay, Daniel F; Briggs, Andrew H
2017-05-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results.
Williams, Claire; Lewsey, James D.; Mackay, Daniel F.; Briggs, Andrew H.
2016-01-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results. PMID:27698003
A Matrix Approach for General Higher Order Linear Recurrences
2011-01-01
properties of linear recurrences (such as the well-known Fibonacci and Pell sequences ). In [2], Er defined k linear recurring sequences of order at...the nth term of the ith generalized order-k Fibonacci sequence . Communicated by Lee See Keong. Received: March 26, 2009; Revised: August 28, 2009...6], the author gave the generalized order-k Fibonacci and Pell (F-P) sequence as follows: For m ≥ 0, n > 0 and 1 ≤ i ≤ k uin = 2 muin−1 + u i n−2
General and specific consciousness: a first-order representationalist approach.
Mehta, Neil; Mashour, George A
2013-01-01
It is widely acknowledged that a complete theory of consciousness should explain general consciousness (what makes a state conscious at all) and specific consciousness (what gives a conscious state its particular phenomenal quality). We defend first-order representationalism, which argues that consciousness consists of sensory representations directly available to the subject for action selection, belief formation, planning, etc. We provide a neuroscientific framework for this primarily philosophical theory, according to which neural correlates of general consciousness include prefrontal cortex, posterior parietal cortex, and non-specific thalamic nuclei, while neural correlates of specific consciousness include sensory cortex and specific thalamic nuclei. We suggest that recent data support first-order representationalism over biological theory, higher-order representationalism, recurrent processing theory, information integration theory, and global workspace theory.
Reduced-Order Modeling: New Approaches for Computational Physics
NASA Technical Reports Server (NTRS)
Beran, Philip S.; Silva, Walter A.
2001-01-01
In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods ba'sed on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two- dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and inotivatim,- their use for multi-disciplinary design analysis.
General and specific consciousness: a first-order representationalist approach
Mehta, Neil; Mashour, George A.
2013-01-01
It is widely acknowledged that a complete theory of consciousness should explain general consciousness (what makes a state conscious at all) and specific consciousness (what gives a conscious state its particular phenomenal quality). We defend first-order representationalism, which argues that consciousness consists of sensory representations directly available to the subject for action selection, belief formation, planning, etc. We provide a neuroscientific framework for this primarily philosophical theory, according to which neural correlates of general consciousness include prefrontal cortex, posterior parietal cortex, and non-specific thalamic nuclei, while neural correlates of specific consciousness include sensory cortex and specific thalamic nuclei. We suggest that recent data support first-order representationalism over biological theory, higher-order representationalism, recurrent processing theory, information integration theory, and global workspace theory. PMID:23882231
Díez, Francisco J; Yebra, Mar; Bermejo, Iñigo; Palacios-Alonso, Miguel A; Calleja, Manuel Arias; Luque, Manuel; Pérez-Martín, Jorge
2017-02-01
Markov influence diagrams (MIDs) are a new type of probabilistic graphical model that extends influence diagrams in the same way that Markov decision trees extend decision trees. They have been designed to build state-transition models, mainly in medicine, and perform cost-effectiveness analyses. Using a causal graph that may contain several variables per cycle, MIDs can model various patient characteristics without multiplying the number of states; in particular, they can represent the history of the patient without using tunnel states. OpenMarkov, an open-source tool, allows the decision analyst to build and evaluate MIDs-including cost-effectiveness analysis and several types of deterministic and probabilistic sensitivity analysis-with a graphical user interface, without writing any code. This way, MIDs can be used to easily build and evaluate complex models whose implementation as spreadsheets or decision trees would be cumbersome or unfeasible in practice. Furthermore, many problems that previously required discrete event simulation can be solved with MIDs; i.e., within the paradigm of state-transition models, in which many health economists feel more comfortable.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that am located near wall boundaries. These procedures are used to develop automatically and to implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
An order statistics approach to the halo model for galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-04-01
We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering; however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
An Order Statistics Approach to the Halo Model for Galaxies
NASA Astrophysics Data System (ADS)
Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.
2017-01-01
We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.
Nonlinear Programming Approach to Optimal Scaling of Partially Ordered Categories
ERIC Educational Resources Information Center
Nishisato, Shizuhiko; Arri, P. S.
1975-01-01
A modified technique of separable programming was used to maximize the squared correlation ratio of weighted responses to partially ordered categories. The technique employs a polygonal approximation to each single-variable function by choosing mesh points around the initial approximation supplied by Nishisato's method. Numerical examples were…
A higher-order-statistics-based approach to face detection
NASA Astrophysics Data System (ADS)
Li, Chunming; Li, Yushan; Wu, Ruihong; Li, Qiuming; Zhuang, Qingde; Zhang, Zhan
2005-02-01
A face detection method based on higher order statistics is proposed in this paper. Firstly, the object model and noise model are established to extract moving object from the background according to the fact that higher order statistics is nonsense to Gaussian noise. Secondly, the improved Sobel operator is used to extract the edge image of moving object. And a projection function is used to detect the face in the edge image. Lastly, PCA(Principle Component Analysis) method is used to do face recognition. The performance of the system is evaluated on the real video sequences. It is shown that the proposed method is simple and robust to the detection of human faces in the video sequences.
Thermal nanostructure: an order parameter multiscale ensemble approach.
Cheluvaraja, S; Ortoleva, P
2010-02-21
Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.
Thermal nanostructure: An order parameter multiscale ensemble approach
NASA Astrophysics Data System (ADS)
Cheluvaraja, S.; Ortoleva, P.
2010-02-01
Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.
Markov Modeling with Soft Aggregation for Safety and Decision Analysis
COOPER,J. ARLIN
1999-09-01
The methodology in this report improves on some of the limitations of many conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems and for expressing imprecise individual metrics as possibilistic or fuzzy numbers. A ''Markov-like'' model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, also portraying the inherent uncertainty. A major goal of Markov modeling is to help convey the top-down system perspective. One of the constituent methodologies allows metrics to be weighted according to significance of the attribute and aggregated nonlinearly as to contribution. This aggregation is performed using exponential combination of the metrics, since the accumulating effect of such factors responds less and less to additional factors. This is termed ''soft'' mathematical aggregation. Dependence among the contributing factors is accounted for by incorporating subjective metrics on ''overlap'' of the factors as well as by correspondingly reducing the overall contribution of these combinations to the overall aggregation. Decisions corresponding to the meaningfulness of the results are facilitated in several ways. First, the results are compared to a soft threshold provided by a sigmoid function. Second, information is provided on input ''Importance'' and ''Sensitivity,'' in order to know where to place emphasis on considering new controls that may be necessary. Third, trends in inputs and outputs are tracked in order to obtain significant information% including cyclic information for the decision process. A practical example from the air transportation industry is used to demonstrate application of the methodology. Illustrations are given for developing a structure (along with recommended inputs and weights) for air transportation oversight at three different levels, for developing and using cycle information, for developing Importance and
Markov invariants, plethysms, and phylogenetics.
Sumner, J G; Charleston, M A; Jermiin, L S; Jarvis, P D
2008-08-07
We explore model-based techniques of phylogenetic tree inference exercising Markov invariants. Markov invariants are group invariant polynomials and are distinct from what is known in the literature as phylogenetic invariants, although we establish a commonality in some special cases. We show that the simplest Markov invariant forms the foundation of the Log-Det distance measure. We take as our primary tool group representation theory, and show that it provides a general framework for analyzing Markov processes on trees. From this algebraic perspective, the inherent symmetries of these processes become apparent, and focusing on plethysms, we are able to define Markov invariants and give existence proofs. We give an explicit technique for constructing the invariants, valid for any number of character states and taxa. For phylogenetic trees with three and four leaves, we demonstrate that the corresponding Markov invariants can be fruitfully exploited in applied phylogenetic studies.
Liao, Weinan; Ren, Jie; Wang, Kun; Wang, Shun; Zeng, Feng; Wang, Ying; Sun, Fengzhu
2016-01-01
The comparison between microbial sequencing data is critical to understand the dynamics of microbial communities. The alignment-based tools analyzing metagenomic datasets require reference sequences and read alignments. The available alignment-free dissimilarity approaches model the background sequences with Fixed Order Markov Chain (FOMC) yielding promising results for the comparison of microbial communities. However, in FOMC, the number of parameters grows exponentially with the increase of the order of Markov Chain (MC). Under a fixed high order of MC, the parameters might not be accurately estimated owing to the limitation of sequencing depth. In our study, we investigate an alternative to FOMC to model background sequences with the data-driven Variable Length Markov Chain (VLMC) in metatranscriptomic data. The VLMC originally designed for long sequences was extended to apply to high-throughput sequencing reads and the strategies to estimate the corresponding parameters were developed. The flexible number of parameters in VLMC avoids estimating the vast number of parameters of high-order MC under limited sequencing depth. Different from the manual selection in FOMC, VLMC determines the MC order adaptively. Several beta diversity measures based on VLMC were applied to compare the bacterial RNA-Seq and metatranscriptomic datasets. Experiments show that VLMC outperforms FOMC to model the background sequences in transcriptomic and metatranscriptomic samples. A software pipeline is available at https://d2vlmc.codeplex.com. PMID:27876823
Certifiable higher order sliding mode control: Practical stability margins approach
NASA Astrophysics Data System (ADS)
Panathula, Chandrasekhara Bharath
The Higher Order Sliding Mode (HOSM) controllers are well known for their robustness/insensitivity to bounded perturbations and for handling any given arbitrary relative degree system. The HOSM controller is to be certified for robustness to unmodeled dynamics, before deploying the controller for practical applications. Phase Margin (PM) and Gain Margin ( GM) are the classical characteristics used in linear systems to quantify the linear controller robustness to unmodeled dynamics, and certain values of these margins are required to certify the controller. These conventional margins (PM and GM) are extended to Practical Stability Phase Margin (PSPM) and Practical Stability Gain Margin (PSGM) in this dissertation, and are used to quantify the HOSM control robustness to unmodeled dynamics, presiding the tool to close the gap for HOSM control certification. The proposed robustness metrics ( PSPM and PSGM) are identified by developing tools/algorithms based on Describing Function-Harmonic Balance method. In order for the HOSM controller to achieve the prescribed values on robustness metrics ( PSPM and PSGM), the HOSM controller is cascaded with a linear compensator. A case study of the application of the proposed metrics (PSPM and PSGM) for the certification of F-16 aircraft HOSM attitude control robustness to cascade unmodeled dynamics is presented. In addition, several simulation examples are presented to verify and to validate the proposed methodology.
Building Markov state models with solvent dynamics.
Gu, Chen; Chang, Huang-Wei; Maibaum, Lutz; Pande, Vijay S; Carlsson, Gunnar E; Guibas, Leonidas J
2013-01-01
Markov state models have been widely used to study conformational changes of biological macromolecules. These models are built from short timescale simulations and then propagated to extract long timescale dynamics. However, the solvent information in molecular simulations are often ignored in current methods, because of the large number of solvent molecules in a system and the indistinguishability of solvent molecules upon their exchange. We present a solvent signature that compactly summarizes the solvent distribution in the high-dimensional data, and then define a distance metric between different configurations using this signature. We next incorporate the solvent information into the construction of Markov state models and present a fast geometric clustering algorithm which combines both the solute-based and solvent-based distances. We have tested our method on several different molecular dynamical systems, including alanine dipeptide, carbon nanotube, and benzene rings. With the new solvent-based signatures, we are able to identify different solvent distributions near the solute. Furthermore, when the solute has a concave shape, we can also capture the water number inside the solute structure. Finally we have compared the performances of different Markov state models. The experiment results show that our approach improves the existing methods both in the computational running time and the metastability. In this paper we have initiated an study to build Markov state models for molecular dynamical systems with solvent degrees of freedom. The methods we described should also be broadly applicable to a wide range of biomolecular simulation analyses.
Building Markov state models with solvent dynamics
2013-01-01
Background Markov state models have been widely used to study conformational changes of biological macromolecules. These models are built from short timescale simulations and then propagated to extract long timescale dynamics. However, the solvent information in molecular simulations are often ignored in current methods, because of the large number of solvent molecules in a system and the indistinguishability of solvent molecules upon their exchange. Methods We present a solvent signature that compactly summarizes the solvent distribution in the high-dimensional data, and then define a distance metric between different configurations using this signature. We next incorporate the solvent information into the construction of Markov state models and present a fast geometric clustering algorithm which combines both the solute-based and solvent-based distances. Results We have tested our method on several different molecular dynamical systems, including alanine dipeptide, carbon nanotube, and benzene rings. With the new solvent-based signatures, we are able to identify different solvent distributions near the solute. Furthermore, when the solute has a concave shape, we can also capture the water number inside the solute structure. Finally we have compared the performances of different Markov state models. The experiment results show that our approach improves the existing methods both in the computational running time and the metastability. Conclusions In this paper we have initiated an study to build Markov state models for molecular dynamical systems with solvent degrees of freedom. The methods we described should also be broadly applicable to a wide range of biomolecular simulation analyses. PMID:23368418
Inverting OII 83.4 nm dayglow profiles using Markov chain radiative transfer
NASA Astrophysics Data System (ADS)
Geddes, George; Douglas, Ewan; Finn, Susanna C.; Cook, Timothy; Chakrabarti, Supriya
2016-11-01
Emission profiles of the resonantly scattered OII 83.4 nm triplet can in principle be used to estimate O+ density profiles in the F2 region of the ionosphere. Given the emission source profile, solution of this inverse problem is possible but requires significant computation. The traditional Feautrier solution to the radiative transfer problem requires many iterations to converge, making it time consuming to compute. A Markov chain approach to the problem produces similar results by directly constructing a matrix that maps the source emission rate to an effective emission rate which includes scattering to all orders. The Markov chain approach presented here yields faster results and therefore can be used to perform the O+ density retrieval with higher resolution than would otherwise be possible.
NASA Astrophysics Data System (ADS)
Volchenkov, Dima; Dawin, Jean René
A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into the transition matrices and studied by Markov chains. Contrary to human languages, entropy dominates over redundancy, in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on the blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and feature a composer.
Stochastic motif extraction using hidden Markov model
Fujiwara, Yukiko; Asogawa, Minoru; Konagaya, Akihiko
1994-12-31
In this paper, we study the application of an HMM (hidden Markov model) to the problem of representing protein sequences by a stochastic motif. A stochastic protein motif represents the small segments of protein sequences that have a certain function or structure. The stochastic motif, represented by an HMM, has conditional probabilities to deal with the stochastic nature of the motif. This HMM directive reflects the characteristics of the motif, such as a protein periodical structure or grouping. In order to obtain the optimal HMM, we developed the {open_quotes}iterative duplication method{close_quotes} for HMM topology learning. It starts from a small fully-connected network and iterates the network generation and parameter optimization until it achieves sufficient discrimination accuracy. Using this method, we obtained an HMM for a leucine zipper motif. Compared to the accuracy of a symbolic pattern representation with accuracy of 14.8 percent, an HMM achieved 79.3 percent in prediction. Additionally, the method can obtain an HMM for various types of zinc finger motifs, and it might separate the mixed data. We demonstrated that this approach is applicable to the validation of the protein databases; a constructed HMM b as indicated that one protein sequence annotated as {open_quotes}lencine-zipper like sequence{close_quotes} in the database is quite different from other leucine-zipper sequences in terms of likelihood, and we found this discrimination is plausible.
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
Using Markov state models to study self-assembly
Perkett, Matthew R.; Hagan, Michael F.
2014-01-01
Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude. PMID:24907984
Metastability in Markov processes
NASA Astrophysics Data System (ADS)
Larralde, H.; Leyvraz, F.; Sanders, D. P.
2006-08-01
We present a formalism for describing slowly decaying systems in the context of finite Markov chains obeying detailed balance. We show that phase space can be partitioned into approximately decoupled regions, in which one may introduce restricted Markov chains which are close to the original process but do not leave these regions. Within this context, we identify the conditions under which the decaying system can be considered to be in a metastable state. Furthermore, we show that such metastable states can be described in thermodynamic terms and define their free energy. This is accomplished, showing that the probability distribution describing the metastable state is indeed proportional to the equilibrium distribution, as is commonly assumed. We test the formalism numerically in the case of the two-dimensional kinetic Ising model, using the Wang-Landau algorithm to show this proportionality explicitly, and confirm that the proportionality constant is as derived in the theory. Finally, we extend the formalism to situations in which a system can have several metastable states.
An overview of Markov chain methods for the study of stage-sequential developmental processes.
Kapland, David
2008-03-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model. A special case of the mixture latent Markov model, the so-called mover-stayer model, is used in this study. Unconditional and conditional models are estimated for the manifest Markov model and the latent Markov model, where the conditional models include a measure of poverty status. Issues of model specification, estimation, and testing using the Mplus software environment are briefly discussed, and the Mplus input syntax is provided. The author applies these 4 methods to a single example of stage-sequential development in reading competency in the early school years, using data from the Early Childhood Longitudinal Study--Kindergarten Cohort.
Testing the Adequacy of a Semi-Markov Process
2015-09-17
and finance , there does not exist a theoretically-based, systematic method to determine if a semi-Markov process accurately fits the underlying data...fields, from finance and health care to game theory and reliability. Typically, these models decompose a complex system into a series of connected... finance [1], and computer science [14]. The basic approach to modeling with a semi- Markov process from observed data is the following: 1. Define
Metastability for Markov processes with detailed balance.
Larralde, Hernán; Leyvraz, François
2005-04-29
We present a definition for metastable states applicable to arbitrary finite state Markov processes satisfying detailed balance. In particular, we identify a crucial condition that distinguishes metastable states from other slow decaying modes and which allows us to show that our definition has several desirable properties similar to those postulated in the restricted ensemble approach. The intuitive physical meaning of this condition is simply that the total equilibrium probability of finding the system in the metastable state is negligible.
Metastability for Markov Processes with Detailed Balance
NASA Astrophysics Data System (ADS)
Larralde, Hernán; Leyvraz, François
2005-04-01
We present a definition for metastable states applicable to arbitrary finite state Markov processes satisfying detailed balance. In particular, we identify a crucial condition that distinguishes metastable states from other slow decaying modes and which allows us to show that our definition has several desirable properties similar to those postulated in the restricted ensemble approach. The intuitive physical meaning of this condition is simply that the total equilibrium probability of finding the system in the metastable state is negligible.
On Measures Driven by Markov Chains
NASA Astrophysics Data System (ADS)
Heurteaux, Yanick; Stos, Andrzej
2014-12-01
We study measures on which are driven by a finite Markov chain and which generalize the famous Bernoulli products.We propose a hands-on approach to determine the structure function and to prove that the multifractal formalism is satisfied. Formulas for the dimension of the measures and for the Hausdorff dimension of their supports are also provided. Finally, we identify the measures with maximal dimension.
Koukounari, Artemis; Donnelly, Christl A.; Moustaki, Irini; Tukahebwa, Edridah M.; Kabatereine, Narcis B.; Wilson, Shona; Webster, Joanne P.; Deelder, André M.; Vennervald, Birgitte J.; van Dam, Govert J.
2013-01-01
Regular treatment with praziquantel (PZQ) is the strategy for human schistosomiasis control aiming to prevent morbidity in later life. With the recent resolution on schistosomiasis elimination by the 65th World Health Assembly, appropriate diagnostic tools to inform interventions are keys to their success. We present a discrete Markov chains modelling framework that deals with the longitudinal study design and the measurement error in the diagnostic methods under study. A longitudinal detailed dataset from Uganda, in which one or two doses of PZQ treatment were provided, was analyzed through Latent Markov Models (LMMs). The aim was to evaluate the diagnostic accuracy of Circulating Cathodic Antigen (CCA) and of double Kato-Katz (KK) faecal slides over three consecutive days for Schistosoma mansoni infection simultaneously by age group at baseline and at two follow-up times post treatment. Diagnostic test sensitivities and specificities and the true underlying infection prevalence over time as well as the probabilities of transitions between infected and uninfected states are provided. The estimated transition probability matrices provide parsimonious yet important insights into the re-infection and cure rates in the two age groups. We show that the CCA diagnostic performance remained constant after PZQ treatment and that this test was overall more sensitive but less specific than single-day double KK for the diagnosis of S. mansoni infection. The probability of clearing infection from baseline to 9 weeks was higher among those who received two PZQ doses compared to one PZQ dose for both age groups, with much higher re-infection rates among children compared to adolescents and adults. We recommend LMMs as a useful methodology for monitoring and evaluation and treatment decision research as well as CCA for mapping surveys of S. mansoni infection, although additional diagnostic tools should be incorporated in schistosomiasis elimination programs. PMID:24367250
When memory pays: Discord in hidden Markov models
NASA Astrophysics Data System (ADS)
Lathouwers, Emma; Bechhoefer, John
2017-06-01
When is keeping a memory of observations worthwhile? We use hidden Markov models to look at phase transitions that emerge when comparing state estimates in systems with discrete states and noisy observations. We infer the underlying state of the hidden Markov models from the observations in two ways: through naive observations, which take into account only the current observation, and through Bayesian filtering, which takes the history of observations into account. Defining a discord order parameter to distinguish between the different state estimates, we explore hidden Markov models with various numbers of states and symbols and varying transition-matrix symmetry. All behave similarly. We calculate analytically the critical point where keeping a memory of observations starts to pay off. A mapping between hidden Markov models and Ising models gives added insight into the associated phase transitions.
Indekeu, J O; Koga, K; Hooyberghs, H; Parry, A O
2013-08-01
We study the effect of thermal fluctuations on the wetting phase transitions of infinite order and of continuously varying order, recently discovered within a mean-field density-functional model for three-phase equilibria in systems with short-range forces and a two-component order parameter. Using linear functional renormalization group calculations within a local interface Hamiltonian approach, we show that the infinite-order transitions are robust. The exponential singularity (implying 2-α(s)=∞) of the surface free energy excess at infinite-order wetting as well as the precise algebraic divergence (with β(s)=-1) of the wetting layer thickness are not modified as long as ω<2, with ω the dimensionless wetting parameter that measures the strength of thermal fluctuations. The interface width diverges algebraically and universally (with ν([perpendicular])=1/2). In contrast, the nonuniversal critical wetting transitions of finite but continuously varying order are modified when thermal fluctuations are taken into account, in line with predictions from earlier calculations on similar models displaying weak, intermediate, and strong fluctuation regimes.
A path-independent method for barrier option pricing in hidden Markov models
NASA Astrophysics Data System (ADS)
Rashidi Ranjbar, Hedieh; Seifi, Abbas
2015-12-01
This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
NASA Astrophysics Data System (ADS)
Winkelmann, Stefanie; Schütte, Christof
2016-12-01
Accurate modeling and numerical simulation of reaction kinetics is a topic of steady interest. We consider the spatiotemporal chemical master equation (ST-CME) as a model for stochastic reaction-diffusion systems that exhibit properties of metastability. The space of motion is decomposed into metastable compartments, and diffusive motion is approximated by jumps between these compartments. Treating these jumps as first-order reactions, simulation of the resulting stochastic system is possible by the Gillespie method. We present the theory of Markov state models as a theoretical foundation of this intuitive approach. By means of Markov state modeling, both the number and shape of compartments and the transition rates between them can be determined. We consider the ST-CME for two reaction-diffusion systems and compare it to more detailed models. Moreover, a rigorous formal justification of the ST-CME by Galerkin projection methods is presented.
A non-homogeneous Markov model for phased-mission reliability analysis
NASA Technical Reports Server (NTRS)
Smotherman, Mark; Zemoudeh, Kay
1989-01-01
Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.
Estimating Neuronal Ageing with Hidden Markov Models
NASA Astrophysics Data System (ADS)
Wang, Bing; Pham, Tuan D.
2011-06-01
Neuronal degeneration is widely observed in normal ageing, meanwhile the neurode-generative disease like Alzheimer's disease effects neuronal degeneration in a faster way which is considered as faster ageing. Early intervention of such disease could benefit subjects with potentials of positive clinical outcome, therefore, early detection of disease related brain structural alteration is required. In this paper, we propose a computational approach for modelling the MRI-based structure alteration with ageing using hidden Markov model. The proposed hidden Markov model based brain structural model encodes intracortical tissue/fluid distribution using discrete wavelet transformation and vector quantization. Further, it captures gray matter volume loss, which is capable of reflecting subtle intracortical changes with ageing. Experiments were carried out on healthy subjects to validate its accuracy and robustness. Results have shown its ability of predicting the brain age with prediction error of 1.98 years without training data, which shows better result than other age predition methods.
Decoherence in quantum Markov chains
NASA Astrophysics Data System (ADS)
Santos, Raqueline Azevedo Medeiros; Portugal, Renato; Fragoso, Marcelo Dutra
2013-11-01
It is known that under some assumptions, the hitting time in quantum Markov chains is quadratically smaller than the hitting time in classical Markov chains. This work extends this result for decoherent quantum Markov chains. The decoherence is introduced using a percolation-like graph model, which allows us to define a decoherent quantum hitting time and to establish a decoherent-intensity range for which the decoherent quantum hitting time is quadratically smaller than the classical hitting time. The detection problem under decoherence is also solved with quadratic speedup in this range.
Constructing Dynamic Event Trees from Markov Models
Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood
2006-05-01
In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: a) partitioning the process variable state space into magnitude intervals (cells), b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and, c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as benchmark in the literature with one process variable (liquid level in a tank), and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank
Li, Hong-Dong; Xu, Qing-Song; Liang, Yi-Zeng
2012-08-31
The identification of disease-relevant genes represents a challenge in microarray-based disease diagnosis where the sample size is often limited. Among established methods, reversible jump Markov Chain Monte Carlo (RJMCMC) methods have proven to be quite promising for variable selection. However, the design and application of an RJMCMC algorithm requires, for example, special criteria for prior distributions. Also, the simulation from joint posterior distributions of models is computationally extensive, and may even be mathematically intractable. These disadvantages may limit the applications of RJMCMC algorithms. Therefore, the development of algorithms that possess the advantages of RJMCMC methods and are also efficient and easy to follow for selecting disease-associated genes is required. Here we report a RJMCMC-like method, called random frog that possesses the advantages of RJMCMC methods and is much easier to implement. Using the colon and the estrogen gene expression datasets, we show that random frog is effective in identifying discriminating genes. The top 2 ranked genes for colon and estrogen are Z50753, U00968, and Y10871_at, Z22536_at, respectively. (The source codes with GNU General Public License Version 2.0 are freely available to non-commercial users at: http://code.google.com/p/randomfrog/.).
Beccaro, M A Del; Villanueva, R; Knudson, K M; Harvey, E M; Langle, J M; Paul, W
2010-01-01
We sought to determine the frequency and type of decision support alerts by location and ordering provider role during Computerized Provider Order Entry (CPOE) medication ordering. Using these data we adjusted the decision support tools to reduce the number of alerts. Retrospective analyses were performed of dose range checks (DRC), drug-drug interaction and drug-allergy alerts from our electronic medical record. During seven sampling periods (each two weeks long) between April 2006 and October 2008 all alerts in these categories were analyzed. Another audit was performed of all DRC alerts by ordering provider role from November 2008 through January 2009. Medication ordering error counts were obtained from a voluntary error reporting system. MEASUREMENTRESULTS: Between April 2006 and October 2008 the percent of medication orders that triggered a dose range alert decreased from 23.9% to 7.4%. The relative risk (RR) for getting an alert was higher at the start of the interventions versus later (RR= 2.40, 95% CI 2.28-2.52; p< 0.0001). The percentage of medication orders that triggered alerts for drug-drug interactions also decreased from 13.5% to 4.8%. The RR for getting a drug interaction alert at the start was 1.63, 95% CI 1.60-1.66; p< 0.0001. Alerts decreased in all clinical areas without an increase in reported medication errors. We reduced the quantity of decision support alerts in CPOE using a systematic approach without an increase in reported medication errors.
Raberto, Marco; Rapallo, Fabio; Scalas, Enrico
2011-01-01
In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The present results indicate that:We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains; and show that parallel composition does not increase asymptotic distance.
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
Long-range memory and non-Markov statistical effects in human sensorimotor coordination
NASA Astrophysics Data System (ADS)
M. Yulmetyev, Renat; Emelyanova, Natalya; Hänggi, Peter; Gafarov, Fail; Prokhorov, Alexander
2002-12-01
In this paper, the non-Markov statistical processes and long-range memory effects in human sensorimotor coordination are investigated. The theoretical basis of this study is the statistical theory of non-stationary discrete non-Markov processes in complex systems (Phys. Rev. E 62, 6178 (2000)). The human sensorimotor coordination was experimentally studied by means of standard dynamical tapping test on the group of 32 young peoples with tap numbers up to 400. This test was carried out separately for the right and the left hand according to the degree of domination of each brain hemisphere. The numerical analysis of the experimental results was made with the help of power spectra of the initial time correlation function, the memory functions of low orders and the first three points of the statistical spectrum of non-Markovity parameter. Our observations demonstrate, that with the regard to results of the standard dynamic tapping-test it is possible to divide all examinees into five different dynamic types. We have introduced the conflict coefficient to estimate quantitatively the order-disorder effects underlying life systems. The last one reflects the existence of disbalance between the nervous and the motor human coordination. The suggested classification of the neurophysiological activity represents the dynamic generalization of the well-known neuropsychological types and provides the new approach in a modern neuropsychology.
A Markov Model for Assessing the Reliability of a Digital Feedwater Control System
Chu,T.L.; Yue, M.; Martinez-Guridi, G.; Lehner, J.
2009-02-11
A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether a system fails or not, is determined by the status of the components that can be characterized by component failure modes. Starting from the system state that has no component failure, possible transitions out of it are all failure modes of all components in the system. Each additional component failure mode will formulate a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences) because the different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach, and the development of the supporting automated FMEA tool are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.
Markov models and the ensemble Kalman filter for estimation of sorption rates.
Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White
2007-09-01
Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov-model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations are zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
Phase transitions in Hidden Markov Models
NASA Astrophysics Data System (ADS)
Bechhoefer, John; Lathouwers, Emma
In Hidden Markov Models (HMMs), a Markov process is not directly accessible. In the simplest case, a two-state Markov model ``emits'' one of two ``symbols'' at each time step. We can think of these symbols as noisy measurements of the underlying state. With some probability, the symbol implies that the system is in one state when it is actually in the other. The ability to judge which state the system is in sets the efficiency of a Maxwell demon that observes state fluctuations in order to extract heat from a coupled reservoir. The state-inference problem is to infer the underlying state from such noisy measurements at each time step. We show that there can be a phase transition in such measurements: for measurement error rates below a certain threshold, the inferred state always matches the observation. For higher error rates, there can be continuous or discontinuous transitions to situations where keeping a memory of past observations improves the state estimate. We can partly understand this behavior by mapping the HMM onto a 1d random-field Ising model at zero temperature. We also present more recent work that explores a larger parameter space and more states. Research funded by NSERC, Canada.
Hidden Markov Model Analysis of Multichromophore Photobleaching
Messina, Troy C.; Kim, Hiyun; Giurleo, Jason T.; Talaga, David S.
2007-01-01
The interpretation of single-molecule measurements is greatly complicated by the presence of multiple fluorescent labels. However, many molecular systems of interest consist of multiple interacting components. We investigate this issue using multiply labeled dextran polymers that we intentionally photobleach to the background on a single-molecule basis. Hidden Markov models allow for unsupervised analysis of the data to determine the number of fluorescent subunits involved in the fluorescence intermittency of the 6-carboxy-tetramethylrhodamine labels by counting the discrete steps in fluorescence intensity. The Bayes information criterion allows us to distinguish between hidden Markov models that differ by the number of states, that is, the number of fluorescent molecules. We determine information-theoretical limits and show via Monte Carlo simulations that the hidden Markov model analysis approaches these theoretical limits. This technique has resolving power of one fluorescing unit up to as many as 30 fluorescent dyes with the appropriate choice of dye and adequate detection capability. We discuss the general utility of this method for determining aggregation-state distributions as could appear in many biologically important systems and its adaptability to general photometric experiments. PMID:16913765
A reward semi-Markov process with memory for wind speed modeling
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
Markov chain with different number of states, and Weibull distribution. All this model use Markov chains to generate synthetic wind speed time series but the search for a better model is still open. Approaching this issue, we applied new models which are generalization of Markov models. More precisely we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. In order to assess this issue we use a probabilistic model based on indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations and backtesting procedure is used to compare results on first and second oder moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic gen- eration of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Re- newable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribu- tion, Renewable Energy 28 (2003) 1787-1802. [4]F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
On Markov modelling of near-wall turbulent shear flow
NASA Astrophysics Data System (ADS)
Reynolds, A. M.
1999-11-01
The role of Reynolds number in determining particle trajectories in near-wall turbulent shear flow is investigated in numerical simulations using a second-order Lagrangian stochastic (LS) model (Reynolds, A.M. 1999: A second-order Lagrangian stochastic model for particle trajectories in inhomogeneous turbulence. Quart. J. Roy. Meteorol. Soc. (In Press)). In such models, it is the acceleration, velocity and position of a particle rather than just its velocity and position which are assumed to evolve jointly as a continuous Markov process. It is found that Reynolds number effects are significant in determining simulated particle trajectories in the viscous sub-layer and the buffer zone. These effects are due almost entirely to the change in the Lagrangian integral timescale and are shown to be well represented in a first-order LS model by Sawford's correction footnote Sawford, B.L. 1991: Reynolds number effects in Lagrangian stochastic models of turbulent dispersion. Phys Fluids, 3, 1577-1586). This is found to remain true even when the Taylor-Reynolds number R_λ ~ O(0.1). This is somewhat surprising because the assumption of a Markovian evolution for velocity and position is strictly applicable only in the large Reynolds number limit because then the Lagrangian acceleration autocorrelation function approaches a delta function at the origin, corresponding to an uncorrelated component in the acceleration, and hence a Markov process footnote Borgas, M.S. and Sawford, B.L. 1991: The small-scale structure of acceleration correlations and its role in the statistical theory of turbulent dispersion. J. Fluid Mech. 288, 295-320.
A reduced-rank approach for implementing higher-order Volterra filters
NASA Astrophysics Data System (ADS)
O. Batista, Eduardo L.; Seara, Rui
2016-12-01
The use of Volterra filters in practical applications is often limited by their high computational burden. To cope with this problem, many strategies for implementing Volterra filters with reduced complexity have been proposed in the open literature. Some of these strategies are based on reduced-rank approaches obtained by defining a matrix of filter coefficients and applying the singular value decomposition to such a matrix. Then, discarding the smaller singular values, effective reduced-complexity Volterra implementations can be obtained. The application of this type of approach to higher-order Volterra filters (considering orders greater than 2) is however not straightforward, which is especially due to some difficulties encountered in the definition of higher-order coefficient matrices. In this context, the present paper is devoted to the development of a novel reduced-rank approach for implementing higher-order Volterra filters. Such an approach is based on a new form of Volterra kernel implementation that allows decomposing higher-order kernels into structures composed only of second-order kernels. Then, applying the singular value decomposition to the coefficient matrices of these second-order kernels, effective implementations for higher-order Volterra filters can be obtained. Simulation results are presented aiming to assess the effectiveness of the proposed approach.
Bayesian restoration of ion channel records using hidden Markov models.
Rosales, R; Stark, J A; Fitzgerald, W J; Hladky, S B
2001-03-01
Hidden Markov models have been used to restore recorded signals of single ion channels buried in background noise. Parameter estimation and signal restoration are usually carried out through likelihood maximization by using variants of the Baum-Welch forward-backward procedures. This paper presents an alternative approach for dealing with this inferential task. The inferences are made by using a combination of the framework provided by Bayesian statistics and numerical methods based on Markov chain Monte Carlo stochastic simulation. The reliability of this approach is tested by using synthetic signals of known characteristics. The expectations of the model parameters estimated here are close to those calculated using the Baum-Welch algorithm, but the present methods also yield estimates of their errors. Comparisons of the results of the Bayesian Markov Chain Monte Carlo approach with those obtained by filtering and thresholding demonstrate clearly the superiority of the new methods.
NASA Astrophysics Data System (ADS)
Zhu, Yanzheng; Zhang, Lixian; Sreeram, Victor; Shammakh, Wafa; Ahmad, Bashir
2016-10-01
In this paper, the resilient model approximation problem for a class of discrete-time Markov jump time-delay systems with input sector-bounded nonlinearities is investigated. A linearised reduced-order model is determined with mode changes subject to domination by a hierarchical Markov chain containing two different nonhomogeneous Markov chains. Hence, the reduced-order model obtained not only reflects the dependence of the original systems but also model external influence that is related to the mode changes of the original system. Sufficient conditions formulated in terms of bilinear matrix inequalities for the existence of such models are established, such that the resulting error system is stochastically stable and has a guaranteed l2-l∞ error performance. A linear matrix inequalities optimisation coupled with line search is exploited to solve for the corresponding reduced-order systems. The potential and effectiveness of the developed theoretical results are demonstrated via a numerical example.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing.for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was two fold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit
NASA Astrophysics Data System (ADS)
Izvekov, Sergei
2017-01-01
We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013), 10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.
Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit.
Izvekov, Sergei
2017-01-01
We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013)10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.
Teaching Higher Order Thinking in the Introductory MIS Course: A Model-Directed Approach
ERIC Educational Resources Information Center
Wang, Shouhong; Wang, Hai
2011-01-01
One vision of education evolution is to change the modes of thinking of students. Critical thinking, design thinking, and system thinking are higher order thinking paradigms that are specifically pertinent to business education. A model-directed approach to teaching and learning higher order thinking is proposed. An example of application of the…
Teaching Higher Order Thinking in the Introductory MIS Course: A Model-Directed Approach
ERIC Educational Resources Information Center
Wang, Shouhong; Wang, Hai
2011-01-01
One vision of education evolution is to change the modes of thinking of students. Critical thinking, design thinking, and system thinking are higher order thinking paradigms that are specifically pertinent to business education. A model-directed approach to teaching and learning higher order thinking is proposed. An example of application of the…
Higher-order terms in sensitivity analysis through a differential approach
Dubi, A.; Dudziak, D.J.
1981-06-01
A differential approach to sensitivity analysis has been developed that eliminates some difficulties existing in previous work. The new development leads to simple explicit expressions for the first-order perturbation as well as any higher-order terms. The higher-order terms are dependent only on differentials of the transport operator, the unperturbed flux, the adjoint flux, and the unperturbed Green's function of the system.
Generator estimation of Markov jump processes
NASA Astrophysics Data System (ADS)
Metzner, P.; Dittmer, E.; Jahnke, T.; Schütte, Ch.
2007-11-01
Estimating the generator of a continuous-time Markov jump process based on incomplete data is a problem which arises in various applications ranging from machine learning to molecular dynamics. Several methods have been devised for this purpose: a quadratic programming approach (cf. [D.T. Crommelin, E. Vanden-Eijnden, Fitting timeseries by continuous-time Markov chains: a quadratic programming approach, J. Comp. Phys. 217 (2006) 782-805]), a resolvent method (cf. [T. Müller, Modellierung von Proteinevolution, PhD thesis, Heidelberg, 2001]), and various implementations of an expectation-maximization algorithm ([S. Asmussen, O. Nerman, M. Olsson, Fitting phase-type distributions via the EM algorithm, Scand. J. Stat. 23 (1996) 419-441; I. Holmes, G.M. Rubin, An expectation maximization algorithm for training hidden substitution models, J. Mol. Biol. 317 (2002) 753-764; U. Nodelman, C.R. Shelton, D. Koller, Expectation maximization and complex duration distributions for continuous time Bayesian networks, in: Proceedings of the twenty-first conference on uncertainty in AI (UAI), 2005, pp. 421-430; M. Bladt, M. Sørensen, Statistical inference for discretely observed Markov jump processes, J.R. Statist. Soc. B 67 (2005) 395-410]). Some of these methods, however, seem to be known only in a particular research community, and have later been reinvented in a different context. The purpose of this paper is to compile a catalogue of existing approaches, to compare the strengths and weaknesses, and to test their performance in a series of numerical examples. These examples include carefully chosen model problems and an application to a time series from molecular dynamics.
A New Approach to Model Order Reduction of the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Balajewicz, Maciej
A new method of stabilizing low-order, proper orthogonal decomposition based reduced-order models of the Navier-Stokes equations is proposed. Unlike traditional approaches, this method does not rely on empirical turbulence modeling or modification of the Navier-Stokes equations. It provides spatial basis functions different from the usual proper orthogonal decomposition basis function in that, in addition to optimally representing the solution, the new proposed basis functions also provide stable reduced-order models. The proposed approach is illustrated with two test cases: two-dimensional flow inside a square lid-driven cavity and a two-dimensional mixing layer.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
Iterative approach for zero-order term elimination in off-axis multiplex digital holography
NASA Astrophysics Data System (ADS)
Zhao, Dongliang; Xie, Dongzhuo; Yang, Yong; Zhai, Hongchen
2017-01-01
An iterative approach is proposed to eliminate the zero-order term from an off-axis multiplexed hologram that contains several sub-holograms. The zero-order components of each sub-hologram are effectively eliminated one by one using the proposed iterative procedure. Because of the reduction of the zero-order components in the frequency domain, enlarged filtering windows can be used to separate each of the +1 order components and improve the signal-to-noise ratio. The proposed method does not require prior knowledge of the object images, and only needs each of the reference wave intensities, which can be acquired before acquisition of the multiplexed hologram. The feasibility of the proposed approach is confirmed through mathematical deductions and numerical simulations, and the robustness of the proposed approach is verified using a practical multiplexed hologram.
Bizzotto, Roberto; Zamuner, Stefano; De Nicolao, Giuseppe; Karlsson, Mats O; Gomeni, Roberto
2010-04-01
Hypnotic drug development calls for a better understanding of sleep physiology in order to improve and differentiate novel medicines for the treatment of sleep disorders. On this basis, a proper evaluation of polysomnographic data collected in clinical trials conducted to explore clinical efficacy of novel hypnotic compounds should include the assessment of sleep architecture and its drug-induced changes. This work presents a non-linear mixed-effect Markov-chain model based on multinomial logistic functions which characterize the time course of transition probabilities between sleep stages in insomniac patients treated with placebo. Polysomnography measurements were obtained from patients during one night treatment. A population approach was used to describe the time course of sleep stages (awake stage, stage 1, stage 2, slow-wave sleep and REM sleep) using a Markov-chain model. The relationship between time and individual transition probabilities between sleep stages was modelled through piecewise linear multinomial logistic functions. The identification of the model produced a good adherence of mean post-hoc estimates to the observed transition frequencies. Parameters were generally well estimated in terms of CV, shrinkage and distribution of empirical Bayes estimates around the typical values. The posterior predictive check analysis showed good consistency between model-predicted and observed sleep parameters. In conclusion, the Markov-chain model based on multinomial logistic functions provided an accurate description of the time course of sleep stages together with an assessment of the probabilities of transition between different stages.
Alignment of multiple proteins with an ensemble of Hidden Markov Models
Song, Yinglei; Qu, Junfeng; Hura, Gurdeep S.
2011-01-01
In this paper, we developed a new method that progressively construct and update a set of alignments by adding sequences in certain order to each of the existing alignments. Each of the existing alignments is modelled with a profile Hidden Markov Model (HMM) and an added sequence is aligned to each of these profile HMMs. We introduced an integer parameter for the number of profile HMMs. The profile HMMs are then updated based on the alignments with leading scores. Our experiments on BaliBASE showed that our approach could efficiently explore the alignment space and significantly improve the alignment accuracy. PMID:20376922
NASA Astrophysics Data System (ADS)
Lalande, Jean-Marie; Waxler, Roger; Velea, Doru
2016-04-01
As infrasonic waves propagate at long ranges through atmospheric ducts it has been suggested that observations of such waves can be used as a remote sensing techniques in order to update properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach as the advantage of searching for the full Probability Density Function in the parameter space at a lower computational cost than extensive parameters search performed by the standard Monte Carlo approach. We apply this inverse methods to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.
Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.
Stathopoulos, Vassilios; Girolami, Mark A
2013-02-13
Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, thus its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient whereas the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
Likelihood free inference for Markov processes: a comparison.
Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S
2015-04-01
Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space.
Novel all-orders single-scale approach to QCD renormalization scale-setting
NASA Astrophysics Data System (ADS)
Shen, Jian-Ming; Wu, Xing-Gang; Du, Bo-Lun; Brodsky, Stanley J.
2017-05-01
The principle of maximal conformality (PMC) provides a rigorous method for eliminating renormalization scheme-and-scale ambiguities for perturbative QCD (pQCD) predictions. The PMC uses the renormalization group equation to fix the β pattern of each order in an arbitrary pQCD approximant, and it then determines the optimal renormalization scale by absorbing all {βi} terms into the running coupling at each order. The resulting coefficients of the pQCD series match the scheme-independent conformal series with β =0 . As in QED, different renormalization scales appear at each order; we call this the multiscale approach. In this paper, we present a novel single-scale approach for the PMC, in which a single effective scale is constructed to eliminate all nonconformal β terms up to a given order simultaneously. The PMC single-scale approach inherits the main features of the multiscale approach; for example, its predictions are scheme independent, and the pQCD convergence is greatly improved due to the elimination of divergent renormalon terms. As an application of the single-scale approach, we investigate the e+e- annihilation cross-section ratio Re+e- and the Higgs decay width Γ (H →b b ¯ ) , including four-loop QCD contributions. The resulting predictions are nearly identical to the multiscale predictions for both the total and differential contributions. Thus in many cases, the PMC single-scale approach PMC-s, which requires a simpler analysis, could be adopted as a reliable substitution for the PMC multiscale approach for setting the renormalization scale for high-energy processes, particularly when one does not need detailed information at each order. The elimination of the renormalization scale uncertainty increases the precision of tests of the Standard Model at the LHC.
Quantum hidden Markov models based on transition operation matrices
NASA Astrophysics Data System (ADS)
Cholewa, Michał; Gawron, Piotr; Głomb, Przemysław; Kurzyk, Dariusz
2017-04-01
In this work, we extend the idea of quantum Markov chains (Gudder in J Math Phys 49(7):072105 [3]) in order to propose quantum hidden Markov models (QHMMs). For that, we use the notions of transition operation matrices and vector states, which are an extension of classical stochastic matrices and probability distributions. Our main result is the Mealy QHMM formulation and proofs of algorithms needed for application of this model: Forward for general case and Vitterbi for a restricted class of QHMMs. We show the relations of the proposed model to other quantum HMM propositions and present an example of application.
Tracking Human Pose Using Max-Margin Markov Models.
Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2015-12-01
We present a new method for tracking human pose by employing max-margin Markov models. Representing a human body by part-based models, such as pictorial structure, the problem of pose tracking can be modeled by a discrete Markov random field. Considering max-margin Markov networks provide an efficient way to deal with both structured data and strong generalization guarantees, it is thus natural to learn the model parameters using the max-margin technique. Since tracking human pose needs to couple limbs in adjacent frames, the model will introduce loops and will be intractable for learning and inference. Previous work has resorted to pose estimation methods, which discard temporal information by parsing frames individually. Alternatively, approximate inference strategies have been used, which can overfit to statistics of a particular data set. Thus, the performance and generalization of these methods are limited. In this paper, we approximate the full model by introducing an ensemble of two tree-structured sub-models, Markov networks for spatial parsing and Markov chains for temporal parsing. Both models can be trained jointly using the max-margin technique, and an iterative parsing process is proposed to achieve the ensemble inference. We apply our model on three challengeable data sets, which contains highly varied and articulated poses. Comprehensive experimental results demonstrate the superior performance of our method over the state-of-the-art approaches.
A high-order finite deformation phase-field approach to fracture
NASA Astrophysics Data System (ADS)
Weinberg, Kerstin; Hesch, Christian
2017-07-01
Phase-field approaches to fracture allow for convenient and efficient simulation of complex fracture pattern. In this paper, two variational formulations of phase-field fracture, a common second-order model and a new fourth-order model, are combined with a finite deformation ansatz for general nonlinear materials. The material model is based on a multiplicative decomposition of the principal stretches in a tensile and a compressive part. The excellent performance of the new approach is illustrated in classical numerical examples.
1994-05-01
LOGISTICS MANAGEMENT INSTITUTE An Approach for Meeting Customer Standards Under Executive Order 12862 Summary Executive Order 12862, Setting...search Centers all operate and manage wind tunnels for both NASA and indus- try customers . Nonetheless, a separate wind-tunnel process should be...could include the man- ager of the process, selected members of the manager’s staff, a key customer , and a survey expert. The manager and staff would
On a Result for Finite Markov Chains
ERIC Educational Resources Information Center
Kulathinal, Sangita; Ghosh, Lagnojita
2006-01-01
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
A semi-Markov model with memory for price changes
NASA Astrophysics Data System (ADS)
D'Amico, Guglielmo; Petroni, Filippo
2011-12-01
We study the high-frequency price dynamics of traded stocks by means of a model of returns using a semi-Markov approach. More precisely we assume that the intraday returns are described by a discrete time homogeneous semi-Markov model which depends also on a memory index. The index is introduced to take into account periods of high and low volatility in the market. First of all we derive the equations governing the process and then theoretical results are compared with empirical findings from real data. In particular we analyzed high-frequency data from the Italian stock market from 1 January 2007 until the end of December 2010.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
a Markov-Process Inspired CA Model of Highway Traffic
NASA Astrophysics Data System (ADS)
Wang, Fa; Li, Li; Hu, Jian-Ming; Ji, Yan; Ma, Rui; Jiang, Rui
To provide a more accurate description of the driving behaviors especially in car-following, namely a Markov-Gap cellular automata model is proposed in this paper. It views the variation of the gap between two consequent vehicles as a Markov process whose stationary distribution corresponds to the observed gap distribution. This new model provides a microscopic simulation explanation for the governing interaction forces (potentials) between the queuing vehicles, which cannot be directly measurable for traffic flow applications. The agreement between empirical observations and simulation results suggests the soundness of this new approach.
Asynchronous Dissipative Control for Fuzzy Markov Jump Systems.
Wu, Zheng-Guang; Dong, Shanling; Su, Hongye; Li, Chuandong
2017-08-25
The problem of asynchronous dissipative control is investigated for Takagi-Sugeno fuzzy systems with Markov jump in this paper. Hidden Markov model is introduced to represent the nonsynchronization between the designed controller and the original system. By the fuzzy-basis-dependent and mode-dependent Lyapunov function, a sufficient condition is achieved such that the resulting closed-loop system is stochastically stable with a strictly (Q, S, R)-α-dissipative performance. The controller parameter is derived by applying MATLAB to solve a set of linear matrix inequalities. Finally, we present two examples to confirm the validity and correctness of our developed approach.
2009-10-04
No5, 1932), 86. 116 Sir John Dill, Notes on the Tactical Lessons of the Palestine Rebellion. (London, 1936), 21 . 44 experience of internal ...nothing new; it has been a requirement of the military to support the national government in the quelling of internal public disorder since the...military approach, the application of minimal force, will constantly be under pressure. The British approach to public order and internal security
Influence of credit scoring on the dynamics of Markov chain
NASA Astrophysics Data System (ADS)
Galina, Timofeeva
2015-11-01
Markov processes are widely used to model the dynamics of a credit portfolio and forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups with different quality, which determined by presence of indebtedness and its terms. It is proposed that dynamics of portfolio shares is described by a multistage controlled system. The article outlines mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of approval procedure of loan applications. The credit scoring is studied as a control affecting to the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
Students' Progress throughout Examination Process as a Markov Chain
ERIC Educational Resources Information Center
Hlavatý, Robert; Dömeová, Ludmila
2014-01-01
The paper is focused on students of Mathematical methods in economics at the Czech university of life sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…
ERIC Educational Resources Information Center
Solomon, Sheila
This practicum study evaluated a non-basal, multidisciplinary, multisensory approach to teaching higher order reading comprehension skills to eight fifth-grade learning-disabled students from low socioeconomic minority group backgrounds. The four comprehension skills were: (1) identifying the main idea; (2) determining cause and effect; (3) making…
[Towards a clinical approach in institutions in order to enable dreams].
Ponroy, Annabelle
2013-01-01
Care protocols and their proliferation tend to dampen the enthusiasm of professionals in their daily practice. An institution's clinical approach must be designed in terms of admission in order notto leave madness on the threshold of care. Trusting the enthusiasm and desire of nurses means favouring creativity within practices.
A New Approach to Design of Cross-Linked Second-Order Nonlinear Optical Polymers
1990-09-01
PERSONI L ,UTH R(S) Skant rpa ty, Braja K. Mandal and Jayant Kumer 1. TYPE OFREPORT 13b. TIME COVERED 14. DATE OF REPORT (Year, Month, Day) 15. PAGE...Processing of NLO Materials. VIITIT; I A NEW APPROACH TO DESIGN OF CROSS-LINKED SECOND-ORDER NONLINEAR OPTICAL POLYMERS BRAJA K. MANDAL, JUN Y. LEE
Contemplative Practices and Orders of Consciousness: A Constructive-Developmental Approach
ERIC Educational Resources Information Center
Silverstein, Charles H.
2012-01-01
This qualitative study explores the correspondence between contemplative practices and "orders of consciousness" from a constructive-developmental perspective, using Robert Kegan's approach. Adult developmental growth is becoming an increasingly important influence on humanity's ability to deal effectively with the growing complexity of…
Contemplative Practices and Orders of Consciousness: A Constructive-Developmental Approach
ERIC Educational Resources Information Center
Silverstein, Charles H.
2012-01-01
This qualitative study explores the correspondence between contemplative practices and "orders of consciousness" from a constructive-developmental perspective, using Robert Kegan's approach. Adult developmental growth is becoming an increasingly important influence on humanity's ability to deal effectively with the growing complexity of…
Exact goodness-of-fit tests for Markov chains.
Besag, J; Mondal, D
2013-06-01
Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps.
A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2009-01-01
We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.
Honest Importance Sampling with Multiple Markov Chains.
Tan, Aixin; Doss, Hani; Hobert, James P
2015-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk , are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in
Modelling modal gating of ion channels with hierarchical Markov models
Fackrell, Mark; Crampin, Edmund J.; Taylor, Peter
2016-01-01
Many ion channels spontaneously switch between different levels of activity. Although this behaviour known as modal gating has been observed for a long time it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, then enables us to combine these components so that in the integrated model both mode switching as well as the kinetics within modes are appropriately represented. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parametrized using experimental data and show that it provides a better representation than a previous model of the same dataset. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models. PMID:27616917
Markov Chain Monte Carlo and Irreversibility
NASA Astrophysics Data System (ADS)
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
Equilibrium Control Policies for Markov Chains
Malikopoulos, Andreas
2011-01-01
The average cost criterion has held great intuitive appeal and has attracted considerable attention. It is widely employed when controlling dynamic systems that evolve stochastically over time by means of formulating an optimization problem to achieve long-term goals efficiently. The average cost criterion is especially appealing when the decision-making process is long compared to other timescales involved, and there is no compelling motivation to select short-term optimization. This paper addresses the problem of controlling a Markov chain so as to minimize the average cost per unit time. Our approach treats the problem as a dual constrained optimization problem. We derive conditions guaranteeing that a saddle point exists for the new dual problem and we show that this saddle point is an equilibrium control policy for each state of the Markov chain. For practical situations with constraints consistent to those we study here, our results imply that recognition of such saddle points may be of value in deriving in real time an optimal control policy.
A Markov model of the Indus script
Rao, Rajesh P. N.; Yadav, Nisha; Vahia, Mayank N.; Joglekar, Hrishikesh; Adhikari, R.; Mahadevan, Iravatham
2009-01-01
Although no historical information exists about the Indus civilization (flourished ca. 2600–1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich synactic structure and the ability to represent diverse content. both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system. PMID:19666571
A Markov model of the Indus script.
Rao, Rajesh P N; Yadav, Nisha; Vahia, Mayank N; Joglekar, Hrishikesh; Adhikari, R; Mahadevan, Iravatham
2009-08-18
Although no historical information exists about the Indus civilization (flourished ca. 2600-1900 B.C.), archaeologists have uncovered about 3,800 short samples of a script that was used throughout the civilization. The script remains undeciphered, despite a large number of attempts and claimed decipherments over the past 80 years. Here, we propose the use of probabilistic models to analyze the structure of the Indus script. The goal is to reveal, through probabilistic analysis, syntactic patterns that could point the way to eventual decipherment. We illustrate the approach using a simple Markov chain model to capture sequential dependencies between signs in the Indus script. The trained model allows new sample texts to be generated, revealing recurring patterns of signs that could potentially form functional subunits of a possible underlying language. The model also provides a quantitative way of testing whether a particular string belongs to the putative language as captured by the Markov model. Application of this test to Indus seals found in Mesopotamia and other sites in West Asia reveals that the script may have been used to express different content in these regions. Finally, we show how missing, ambiguous, or unreadable signs on damaged objects can be filled in with most likely predictions from the model. Taken together, our results indicate that the Indus script exhibits rich synactic structure and the ability to represent diverse content. both of which are suggestive of a linguistic writing system rather than a nonlinguistic symbol system.
The Analysis of Rush Orders Risk in Supply Chain: A Simulation Approach
NASA Technical Reports Server (NTRS)
Mahfouz, Amr; Arisha, Amr
2011-01-01
Satisfying customers by delivering demands at agreed time, with competitive prices, and in satisfactory quality level are crucial requirements for supply chain survival. Incidence of risks in supply chain often causes sudden disruptions in the processes and consequently leads to customers losing their trust in a company's competence. Rush orders are considered to be one of the main types of supply chain risks due to their negative impact on the overall performance, Using integrated definition modeling approaches (i.e. IDEF0 & IDEF3) and simulation modeling technique, a comprehensive integrated model has been developed to assess rush order risks and examine two risk mitigation strategies. Detailed functions sequence and objects flow were conceptually modeled to reflect on macro and micro levels of the studied supply chain. Discrete event simulation models were then developed to assess and investigate the mitigation strategies of rush order risks, the objective of this is to minimize order cycle time and cost.
Yao, Weiguang; Leszczynski, Konrad W
2009-07-01
Recently, the authors proposed an analytical scheme to estimate the first order x-ray scatter by approximating the Klein-Nishina formula so that the first order scatter fluence is expressed as a function of the primary photon fluence on the detector. In this work, the authors apply the scheme to experimentally obtained 6 MV cone beam CT projections in which the primary photon fluence is the unknown of interest. With the assumption that the higher-order scatter fluence is either constant or proportional to the first order scatter fluence, an iterative approach is proposed to estimate both primary and scatter fluences from projections by utilizing their relationship. The iterative approach is evaluated by comparisons with experimentally measured scatter-primary ratios of a Catphan phantom and with Monte Carlo simulations of virtual phantoms. The convergence of the iterations is fast and the accuracy of scatter correction is high. For a sufficiently long cylindrical water phantom with 10 cm of radius, the relative error of estimated primary photon fluence was within +/- 2% and +/- 4% when the phantom was projected with 6 MV and 120 kVp x-ray imaging systems, respectively. In addition, the iterative approach for scatter estimation is applied to 6 MV x-ray projections of a QUASAR and anthropomorphic phantoms (head and pelvis). The scatter correction is demonstrated to significantly improve the accuracy of the reconstructed linear attenuation coefficient and the contrast of the projections and reconstructed volumetric images generated with a linac 6 MV beam.
NASA Astrophysics Data System (ADS)
Mukherjee, Bijoy K.; Metia, Santanu
2009-10-01
The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to Genetic Algorithm (GA). In the second part, first it has been studied how the performance of an integer order PID controller deteriorates when implemented with lossy capacitors in its analog realization. Thereafter it has been shown that the lossy capacitors can be effectively modeled by fractional order terms. Then, a novel GA based method has been proposed to tune the controller parameters such that the original performance is retained even though realized with the same lossy capacitors. Simulation results have been presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA based method has been proposed which shows how equivalent integer order PID controllers can be obtained which will give performance level similar to those of the fractional order PID controllers thereby removing the complexity involved in the implementation of the latter. It has been shown with extensive simulation results that the equivalent integer order PID controllers more or less retain the robustness and iso-damping properties of the original fractional order PID controllers. Simulation results also show that the equivalent integer order PID controllers are more robust than the normal Ziegler-Nichols tuned PID controllers.
NASA Astrophysics Data System (ADS)
Mazaheri, Alireza; Nishikawa, Hiroaki
2016-09-01
We propose arbitrary high-order discontinuous Galerkin (DG) schemes that are designed based on a first-order hyperbolic advection-diffusion formulation of the target governing equations. We present, in details, the efficient construction of the proposed high-order schemes (called DG-H), and show that these schemes have the same number of global degrees-of-freedom as comparable conventional high-order DG schemes, produce the same or higher order of accuracy solutions and solution gradients, are exact for exact polynomial functions, and do not need a second-derivative diffusion operator. We demonstrate that the constructed high-order schemes give excellent quality solution and solution gradients on irregular triangular elements. We also construct a Weighted Essentially Non-Oscillatory (WENO) limiter for the proposed DG-H schemes and apply it to discontinuous problems. We also make some accuracy comparisons with conventional DG and interior penalty schemes. A relative qualitative cost analysis is also reported, which indicates that the high-order schemes produce orders of magnitude more accurate results than the low-order schemes for a given CPU time. Furthermore, we show that the proposed DG-H schemes are nearly as efficient as the DG and Interior-Penalty (IP) schemes as these schemes produce results that are relatively at the same error level for approximately a similar CPU time.
A second order kinetic approach for modeling solute retention and transport in soils
NASA Astrophysics Data System (ADS)
Selim, H. M.; Amacher, M. C.
1988-12-01
We present a second-order kinetic approach for the description of solute retention during transport in soils. The basis for this approach is that it accounts for the sites on the soil matrix which are accessible for retention of the reactive solutes in solution. This approach was incorporated with the fully kinetic two-site model where the difference between the characteristics of the two types of sites is based on the rate of kinetic retention reactions. We also assume that the retention mechanisms are site-specific, e.g., the sorbed phase on type 1 sites may be characteristically different in their energy of reaction and/or the solute species from that on type 2 sites. The second-order two-site (SOTS) model was capable of describing the kinetic retention behavior of Cr(VI) batch data for Olivier, Windsor, and Cecil soils. Using independently measured parameters, the SOTS model was successful in predicting experimental Cr breakthrough curves (BTC's). The proposed second-order approach was also extended to the diffusion controlled mobile-immobile or two-region (SOMIM) model. The use of estimated parameters (e.g., the mobile water fraction and mass transfer coefficients) for the SOMIM model did not provide improved predictions of Cr BTC's in comparison to the SOTS model. The failure of the mobile-immobile model was attributed to the lack of nonequilibrium conditions for the two regions in these soils.
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-o turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three-dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature based strand shortening
Markov Tracking for Agent Coordination
NASA Technical Reports Server (NTRS)
Washington, Richard; Lau, Sonie (Technical Monitor)
1998-01-01
Partially observable Markov decision processes (POMDPs) axe an attractive representation for representing agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that restricts the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction permits to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
Orbiting binary black hole evolutions with a multipatch high order finite-difference approach
Pazos, Enrique; Tiglio, Manuel; Duez, Matthew D.; Kidder, Lawrence E.; Teukolsky, Saul A.
2009-07-15
We present numerical simulations of orbiting black holes for around 12 cycles, using a high order multipatch approach. Unlike some other approaches, the computational speed scales almost perfectly for thousands of processors. Multipatch methods are an alternative to adaptive mesh refinement, with benefits of simplicity and better scaling for improving the resolution in the wave zone. The results presented here pave the way for multipatch evolutions of black hole-neutron star and neutron star-neutron star binaries, where high resolution grids are needed to resolve details of the matter flow.
Multiensemble Markov models of molecular thermodynamics and kinetics
Wu, Hao; Paul, Fabian; Noé, Frank
2016-01-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-07
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.
LD-SPatt: large deviations statistics for patterns on Markov chains.
Nuel, G
2004-01-01
Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be done through several approaches. Central limit theorem (CLT) producing Gaussian approximations are one of the most popular ones. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail distribution events where CLT is especially bad. In this paper, we propose a new approach based on the large deviations theory to assess pattern statistics. We first recall theoretical results for empiric mean (level 1) as well as empiric distribution (level 2) large deviations on Markov chains. Then, we present the applications of these results focusing on numerical issues. LD-SPatt is the name of GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking and are at least as reliable as compound Poisson approximations. We then finally discuss some further possible improvements and applications of this new method.
Glaucoma progression detection using nonlocal Markov random field prior
Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A.; Balasubramanian, Madhusudhanan; Weinreb, Robert N.; Zangwill, Linda M.
2014-01-01
Abstract. Glaucoma is neurodegenerative disease characterized by distinctive changes in the optic nerve head and visual field. Without treatment, glaucoma can lead to permanent blindness. Therefore, monitoring glaucoma progression is important to detect uncontrolled disease and the possible need for therapy advancement. In this context, three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT) has been commonly used in the diagnosis and management of glaucoma patients. We present a new framework for detection of glaucoma progression using 3-D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer thickness measurement provided by commercially available instruments, we consider the whole 3-D volume for change detection. To account for the spatial voxel dependency, we propose the use of the Markov random field (MRF) model as a prior for the change detection map. In order to improve the robustness of the proposed approach, a nonlocal strategy was adopted to define the MRF energy function. To accommodate the presence of false-positive detection, we used a fuzzy logic approach to classify a 3-D SD-OCT image into a “non-progressing” or “progressing” glaucoma class. We compared the diagnostic performance of the proposed framework to the existing methods of progression detection. PMID:26158069
Glaucoma progression detection using nonlocal Markov random field prior.
Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Balasubramanian, Madhusudhanan; Weinreb, Robert N; Zangwill, Linda M
2014-10-01
Glaucoma is neurodegenerative disease characterized by distinctive changes in the optic nerve head and visual field. Without treatment, glaucoma can lead to permanent blindness. Therefore, monitoring glaucoma progression is important to detect uncontrolled disease and the possible need for therapy advancement. In this context, three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT) has been commonly used in the diagnosis and management of glaucoma patients. We present a new framework for detection of glaucoma progression using 3-D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer thickness measurement provided by commercially available instruments, we consider the whole 3-D volume for change detection. To account for the spatial voxel dependency, we propose the use of the Markov random field (MRF) model as a prior for the change detection map. In order to improve the robustness of the proposed approach, a nonlocal strategy was adopted to define the MRF energy function. To accommodate the presence of false-positive detection, we used a fuzzy logic approach to classify a 3-D SD-OCT image into a "non-progressing" or "progressing" glaucoma class. We compared the diagnostic performance of the proposed framework to the existing methods of progression detection.
Hamilton-Jacobi approach for first order actions and theories with higher derivatives
Bertin, M.C. Pimentel, B.M. Pompeia, P.J.
2008-03-15
In this work, we analyze systems described by Lagrangians with higher order derivatives in the context of the Hamilton-Jacobi formalism for first order actions. Two different approaches are studied here: the first one is analogous to the description of theories with higher derivatives in the hamiltonian formalism according to [D.M. Gitman, S.L. Lyakhovich, I.V. Tyutin, Soviet Phys. J. 26 (1983) 730; D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints, Springer-Verlag, New York, Berlin, 1990] the second treats the case where degenerate coordinate are present, in an analogy to reference [D.M. Gitman, I.V. Tyutin, Nucl. Phys. B 630 (2002) 509]. Several examples are analyzed where a comparison between both approaches is made.
Hamilton Jacobi approach for first order actions and theories with higher derivatives
NASA Astrophysics Data System (ADS)
Bertin, M. C.; Pimentel, B. M.; Pompeia, P. J.
2008-03-01
In this work, we analyze systems described by Lagrangians with higher order derivatives in the context of the Hamilton-Jacobi formalism for first order actions. Two different approaches are studied here: the first one is analogous to the description of theories with higher derivatives in the hamiltonian formalism according to [D.M. Gitman, S.L. Lyakhovich, I.V. Tyutin, Soviet Phys. J. 26 (1983) 730; D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints, Springer-Verlag, New York, Berlin, 1990] the second treats the case where degenerate coordinate are present, in an analogy to reference [D.M. Gitman, I.V. Tyutin, Nucl. Phys. B 630 (2002) 509]. Several examples are analyzed where a comparison between both approaches is made.
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret quantitatively the results, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of velocity model using a global sampling algorithm. Markov chains Monte-Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is highly related to the computational cost of the forward model. Although fast algorithms have been recently developed for computing first arrival traveltimes of seismic waves, the complete browsing of the posterior distribution of velocity model is hardly performed, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of classical single MCMC, we propose to make interact several Markov chains at different temperatures. This method can make efficient use of large CPU clusters, without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the
Third-order coma-free point in two-mirror telescopes by a vector approach.
Ren, Baichuan; Jin, Guang; Zhong, Xing
2011-07-20
In this paper, two-mirror telescopes having the secondary mirror decentered and/or tilted are considered. Equations for third-order coma are derived by a vector approach. Coma-free condition to remove misalignment-induced coma was obtained. The coma-free point in two-mirror telescopes is found as a conclusion of our coma-free condition, which is in better agreement with the result solved by Wilson using Schiefspiegler theory.
Symbolic Heuristic Search for Factored Markov Decision Processes
NASA Technical Reports Server (NTRS)
Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.
2003-01-01
We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.
A second order cone complementarity approach for the numerical solution of elastoplasticity problems
NASA Astrophysics Data System (ADS)
Zhang, L. L.; Li, J. Y.; Zhang, H. W.; Pan, S. H.
2013-01-01
In this paper we present a new approach for solving elastoplastic problems as second order cone complementarity problems (SOCCPs). Specially, two classes of elastoplastic problems, i.e. the J 2 plasticity problems with combined linear kinematic and isotropic hardening laws and the Drucker-Prager plasticity problems with associative or non-associative flow rules, are taken as the examples to illustrate the main idea of our new approach. In the new approach, firstly, the classical elastoplastic constitutive equations are equivalently reformulated as second order cone complementarity conditions. Secondly, by employing the finite element method and treating the nodal displacements and the plasticity multiplier vectors of Gaussian integration points as the unknown variables, we obtain a standard SOCCP formulation for the elastoplasticity analysis, which enables the using of general SOCCP solvers developed in the field of mathematical programming be directly available in the field of computational plasticity. Finally, a semi-smooth Newton algorithm is suggested to solve the obtained SOCCPs. Numerical results of several classical plasticity benchmark problems confirm the effectiveness and robustness of the SOCCP approach.
Using regression mixture models with non-normal data: Examining an ordered polytomous approach
George, Melissa R. W.; Yang, Na; Smith, Jessalyn; Jaki, Thomas; Feaster, Dan; Masyn, Katherine; Howe, George
2012-01-01
Mild to moderate skew in errors can substantially impact regression mixture model results; one approach for overcoming this includes transforming the outcome into an ordered categorical variable and using a polytomous regression mixture model. This is effective for retaining differential effects in the population; however, bias in parameter estimates and model fit warrant further examination of this approach at higher levels of skew. The current study used Monte Carlo simulations; three thousand observations were drawn from each of two subpopulations differing in the effect of X on Y. Five hundred simulations were performed in each of the ten scenarios varying in levels of skew in one or both classes. Model comparison criteria supported the accurate two class model, preserving the differential effects, while parameter estimates were notably biased. The appropriate number of effects can be captured with this approach but we suggest caution when interpreting the magnitude of the effects. PMID:23687397
Markov Models and the Ensemble Kalman Filter for Estimation of Sorption Rates
NASA Astrophysics Data System (ADS)
Vugrin, E. D.; McKenna, S. A.; White Vugrin, K.
2007-12-01
Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov-model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations are zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude. Sandia is a multi program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000. This work was supported under the Sandia Laboratory Directed Research and Development program.
A Second Law for Open Markov Processes
NASA Astrophysics Data System (ADS)
Pollard, Blake S.
2016-03-01
In this paper we define the notion of an open Markov process. An open Markov process is a generalization of an ordinary Markov process in which populations are allowed to flow in and out of the system at certain boundary states. We show that the rate of change of relative entropy in an open Markov process is less than or equal to the flow of relative entropy through its boundary states. This can be viewed as a generalization of the Second Law for open Markov processes. In the case of a Markov process whose equilibrium obeys detailed balance, this inequality puts an upper bound on the rate of change of the free energy for any non-equilibrium distribution.
Growth and Dissolution of Macromolecular Markov Chains
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2016-07-01
The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.
Markov counting models for correlated binary responses.
Crawford, Forrest W; Zelterman, Daniel
2015-07-01
We propose a class of continuous-time Markov counting processes for analyzing correlated binary data and establish a correspondence between these models and sums of exchangeable Bernoulli random variables. Our approach generalizes many previous models for correlated outcomes, admits easily interpretable parameterizations, allows different cluster sizes, and incorporates ascertainment bias in a natural way. We demonstrate several new models for dependent outcomes and provide algorithms for computing maximum likelihood estimates. We show how to incorporate cluster-specific covariates in a regression setting and demonstrate improved fits to well-known datasets from familial disease epidemiology and developmental toxicology. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Markov state models of biomolecular conformational dynamics
Chodera, John D.; Noé, Frank
2014-01-01
It has recently become practical to construct Markov state models (MSMs) that reproduce the long-time statistical conformational dynamics of biomolecules using data from molecular dynamics simulations. MSMs can predict both stationary and kinetic quantities on long timescales (e.g. milliseconds) using a set of atomistic molecular dynamics simulations that are individually much shorter, thus addressing the well-known sampling problem in molecular dynamics simulation. In addition to providing predictive quantitative models, MSMs greatly facilitate both the extraction of insight into biomolecular mechanism (such as folding and functional dynamics) and quantitative comparison with single-molecule and ensemble kinetics experiments. A variety of methodological advances and software packages now bring the construction of these models closer to routine practice. Here, we review recent progress in this field, considering theoretical and methodological advances, new software tools, and recent applications of these approaches in several domains of biochemistry and biophysics, commenting on remaining challenges. PMID:24836551
Markov and semi-Markov processes as a failure rate
NASA Astrophysics Data System (ADS)
Grabski, Franciszek
2016-06-01
In this paper the reliability function is defined by the stochastic failure rate process with a non negative and right continuous trajectories. Equations for the conditional reliability functions of an object, under assumption that the failure rate is a semi-Markov process with an at most countable state space are derived. A proper theorem is presented. The linear systems of equations for the appropriate Laplace transforms allow to find the reliability functions for the alternating, the Poisson and the Furry-Yule failure rate processes.
Markov and semi-Markov processes as a failure rate
Grabski, Franciszek
2016-06-08
In this paper the reliability function is defined by the stochastic failure rate process with a non negative and right continuous trajectories. Equations for the conditional reliability functions of an object, under assumption that the failure rate is a semi-Markov process with an at most countable state space are derived. A proper theorem is presented. The linear systems of equations for the appropriate Laplace transforms allow to find the reliability functions for the alternating, the Poisson and the Furry-Yule failure rate processes.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.
A New Approach for Constructing Highly Stable High Order CESE Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2010-01-01
A new approach is devised to construct high order CESE schemes which would avoid the common shortcomings of traditional high order schemes including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their local implicit nature (i.e., at each mesh points, need to solve a system of linear/nonlinear equations involving all the mesh variables associated with this mesh point); (c) use of large and elaborate stencils which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques which are needed to overcome stability problems but often cause undesirable side effects. In fact it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-,... order versions which have the same stencil and same stability conditions of the 2nd-order scheme, and also retain all other advantages of the latter scheme. A sketch of multidimensional extensions will also be provided.
Dominant pole placement with fractional order PID controllers: D-decomposition approach.
Mandić, Petar D; Šekara, Tomislav B; Lazarević, Mihailo P; Bošković, Marko
2017-03-01
Dominant pole placement is a useful technique designed to deal with the problem of controlling a high order or time-delay systems with low order controller such as the PID controller. This paper tries to solve this problem by using D-decomposition method. Straightforward analytic procedure makes this method extremely powerful and easy to apply. This technique is applicable to a wide range of transfer functions: with or without time-delay, rational and non-rational ones, and those describing distributed parameter systems. In order to control as many different processes as possible, a fractional order PID controller is introduced, as a generalization of classical PID controller. As a consequence, it provides additional parameters for better adjusting system performances. The design method presented in this paper tunes the parameters of PID and fractional PID controller in order to obtain good load disturbance response with a constraint on the maximum sensitivity and sensitivity to noise measurement. Good set point response is also one of the design goals of this technique. Numerous examples taken from the process industry are given, and D-decomposition approach is compared with other PID optimization methods to show its effectiveness.
Reliability characteristics in semi-Markov models
NASA Astrophysics Data System (ADS)
Grabski, Franciszek
2017-07-01
A semi-Markov (SM) process is defined by a renewal kernel and an initial distribution of states or another equivalent parameters. Those quantities contain full information about the process and they allow us to find many characteristics and parameters of the process. Constructing the semi-Markov reliability model means building the kernel of the process based on some assumptions. Many characteristics and parameters of the SM process have a natural interpretation in the semi-Markov reliability model.
Harmonic spectral components in time sequences of Markov correlated events
NASA Astrophysics Data System (ADS)
Mazzetti, Piero; Carbone, Anna
2017-07-01
The paper concerns the analysis of the conditions allowing time sequences of Markov correlated events give rise to a line power spectrum having a relevant physical interest. It is found that by specializing the Markov matrix in order to represent closed loop sequences of events with arbitrary distribution, generated in a steady physical condition, a large set of line spectra, covering all possible frequency values, is obtained. The amplitude of the spectral lines is given by a matrix equation based on a generalized Markov matrix involving the Fourier transform of the distribution functions representing the time intervals between successive events of the sequence. The paper is a complement of a previous work where a general expression for the continuous power spectrum was given. In that case the Markov matrix was left in a more general form, thus preventing the possibility of finding line spectra of physical interest. The present extension is also suggested by the interest of explaining the emergence of a broad set of waves found in the electro and magneto-encephalograms, whose frequency ranges from 0.5 to about 40Hz, in terms of the effects produced by chains of firing neurons within the complex neural network of the brain. An original model based on synchronized closed loop sequences of firing neurons is proposed, and a few numerical simulations are reported as an application of the above cited equation.
A compositional framework for Markov processes
NASA Astrophysics Data System (ADS)
Baez, John C.; Fong, Brendan; Pollard, Blake S.
2016-03-01
We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
Behavior Detection using Confidence Intervals of Hidden Markov Models
Griffin, Christopher H
2009-01-01
Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback with this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criteria for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting a HMM description. We present one example using a family of HMM's to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
Operations and support cost modeling using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit
1989-01-01
Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to the national budget constaints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support; and retirement. A significant portion of the LCC for reusable systems are generated during the operations and support phase (OS). Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined or at least strongly influenced by decisions made during the design and development phases of the project. As a result OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov Chain Process. Markov chains are an important method of probabilistic analysis for operations research analysts but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov Chains in LCC analysis by developing OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov Chain process as a design-aid tool.
Comparing quantum versus Markov random walk models of judgements measured by rating scales
Wang, Z.; Busemeyer, J. R.
2016-01-01
Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements from either their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum over the Markov model. PMID:26621984
Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches
NASA Astrophysics Data System (ADS)
Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano
2014-12-01
The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences
Siebert, Matthias; Söding, Johannes
2016-01-01
Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k − 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26–101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
Mercurio, Mark R; Murray, Peter D; Gross, Ian
2014-02-01
A unilateral do not attempt resuscitation (DNAR) order is written by a physician without permission or assent from the patient or the patient's surrogate decision-maker. Potential justifications for the use of DNAR orders in pediatrics include the belief that attempted resuscitation offers no benefit to the patient or that the burdens would far outweigh the potential benefits. Another consideration is the patient's right to mercy, not to be made to undergo potentially painful interventions very unlikely to benefit the patient, and the physician's parallel obligation not to perform such interventions. Unilateral DNAR orders might be motivated in part by the moral distress caregivers sometimes experience when feeling forced by parents to participate in interventions that they believe are useless or cruel. Furthermore, some physicians believe that making these decisions without parental approval could spare parents needless additional emotional pain or a sense of guilt from making such a decision, particularly when imminent death is unavoidable. There are, however, several risks inherent in unilateral DNAR orders, such as overestimating one's ability to prognosticate or giving undue weight to the physician's values over those of parents, particularly with regard to predicted disability and quality of life. The law on the question of unilateral DNAR varies among states, and readers are encouraged to learn the law where they practice. Arguments in favor of, and opposed to, the use of unilateral DNAR orders are presented. In some settings, particularly when death is imminent regardless of whether resuscitation is attempted, unilateral DNAR orders should be viewed as an ethically permissible approach.
Nonlinear and higher-order approaches to the encoding of natural scenes.
Zetzsche, Christoph; Nuding, Ulrich
2005-01-01
Linear operations can only partially exploit the statistical redundancies of natural scenes, and nonlinear operations are ubiquitous in visual cortex. However, neither the detailed function of the nonlinearities nor the higher-order image statistics are yet fully understood. We suggest that these complicated issues can not be tackled by one single approach, but require a range of methods, and the understanding of the crosslinks between the results. We consider three basic approaches: (i) State space descriptions can theoretically provide complete information about statistical properties and nonlinear operations, but their practical usage is confined to very low-dimensional settings. We discuss the use of representation-related state-space coordinates (multivariate wavelet statistics) and of basic nonlinear coordinate transformations of the state space (e.g., a polar transform). (ii) Indirect methods, like unsupervised learning in multi-layer networks, provide complete optimization results, but no direct information on the statistical properties, and no simple model structures. (iii) Approximation by lower-order terms of power-series expansions is a classical strategy that has not yet received broad attention. On the statistical side, this approximation amounts to cumulant functions and higher-order spectra (polyspectra), on the processing side to Volterra Wiener systems. In this context we suggest that an important concept for the understanding of natural scene statistics, of nonlinear neurons, and of biological pattern recognition can be found in AND-like combinations of frequency components. We investigate how the different approaches can be related to each other, how they can contribute to the understanding of cortical nonlinearities such as complex cells, cortical gain control, end-stopping and other extraclassical receptive field properties, and how we can obtain a nonlinear perspective on overcomplete representations and invariant coding in visual cortex.
Zhang, J
1996-01-01
The Gibbs-Bogoliubov-Feynman (GBF) inequality of statistical mechanics is adopted, with an information-theoretic interpretation, as a general optimization framework for deriving and examining various mean field approximations for Markov random fields (MRF's). The efficacy of this approach is demonstrated through the compound Gauss-Markov (CGM) model, comparisons between different mean field approximations, and experimental results in image restoration.
Kim, Daeok; Coskun, Ali
2017-03-29
Controlling the arrangement of different metal ions to achieve ordered heterogeneity in metal-organic frameworks (MOFs) has been a great challenge. Herein, we introduce a template-directed approach, in which a 1D metal-organic polymer incorporating well-defined binding pockets for the secondary metal ions used as a structural template and starting material for the preparation of well-ordered bimetallic MOF-74s under heterogeneous-phase hydrothermal reaction conditions in the presence of secondary metal ions such as Ni(2+) and Mg(2+) in 3 h. The resulting bimetallic MOF-74s were found to possess a nearly 1:1 metal ratio regardless of their initial stoichiometry in the reaction mixture, thus demonstrating the possibility of controlling the arrangement of metal ions within the secondary building blocks in MOFs to tune their intrinsic properties such as gas affinity.
Bearing fault identification by higher order energy operator fusion: A non-resonance based approach
NASA Astrophysics Data System (ADS)
Faghidi, H.; Liang, M.
2016-10-01
We report a non-resonance based approach to bearing fault detection. This is achieved by a higher order energy operator fusion (HOEO_F) method. In this method, multiple higher order energy operators are fused to form a single simple transform to process the bearing signal obscured by noise and vibration interferences. The fusion is guided by entropy minimization. Unlike the popular high frequency resonance technique, this method does not require the information of resonance excited by the bearing fault. The effects of the HOEO_F method on signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) are illustrated in this paper. The performance of the proposed method in handling noise and interferences has been examined using both simulated and experimental data. The results indicate that the HOEO_F method outperforms both the envelope method and the original energy operator method.
Mohanasubha, R; Chandrasekar, V K; Senthilvelan, M; Lakshmanan, M
2015-04-08
We unearth the interconnection between various analytical methods which are widely used in the current literature to identify integrable nonlinear dynamical systems described by third-order nonlinear ODEs. We establish an important interconnection between the extended Prelle-Singer procedure and λ-symmetries approach applicable to third-order ODEs to bring out the various linkages associated with these different techniques. By establishing this interconnection we demonstrate that given any one of the quantities as a starting point in the family consisting of Jacobi last multipliers, Darboux polynomials, Lie point symmetries, adjoint-symmetries, λ-symmetries, integrating factors and null forms one can derive the rest of the quantities in this family in a straightforward and unambiguous manner. We also illustrate our findings with three specific examples.
H∞ synchronization of uncertain fractional order chaotic systems: adaptive fuzzy approach.
Lin, Tsung-Chih; Kuo, Chia-Hao
2011-10-01
This paper presents a novel adaptive fuzzy logic controller (FLC) equipped with an adaptive algorithm to achieve H(∞) synchronization performance for uncertain fractional order chaotic systems. In order to handle the high level of uncertainties and noisy training data, a desired synchronization error can be attenuated to a prescribed level by incorporating fuzzy control design and H(∞) tracking approach. Based on a Lyapunov stability criterion, not only the performance of the proposed method is satisfying with an acceptable synchronization error level, but also a rather simple stability analysis is performed. The simulation results signify the effectiveness of the proposed control scheme. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
A second order residual based predictor-corrector approach for time dependent pollutant transport
NASA Astrophysics Data System (ADS)
Pavan, S.; Hervouet, J.-M.; Ricchiuto, M.; Ata, R.
2016-08-01
We present a second order residual distribution scheme for scalar transport problems in shallow water flows. The scheme, suitable for the unsteady cases, is obtained adapting to the shallow water context the explicit Runge-Kutta schemes for scalar equations [1]. The resulting scheme is decoupled from the hydrodynamics yet the continuity equation has to be considered in order to respect some important numerical properties at discrete level. Beyond the classical characteristics of the residual formulation presented in [1,2], we introduce the possibility to iterate the corrector step in order to improve the accuracy of the scheme. Another novelty is that the scheme is based on a precise monotonicity condition which guarantees the respect of the maximum principle. We thus end up with a scheme which is mass conservative, second order accurate and monotone. These properties are checked in the numerical tests, where the proposed approach is also compared to some finite volume schemes on unstructured grids. The results obtained show the interest in adopting the predictor-corrector scheme for pollutant transport applications, where conservation of the mass, monotonicity and accuracy are the most relevant concerns.
Approach Detect Sensor System by Second Order Derivative of Laser Irradiation Area
NASA Astrophysics Data System (ADS)
Hayashi, Tomohide; Yano, Yoshikazu; Tsuda, Norio; Yamada, Jun
In recent years, as a result of a large amount of greenhouse gas emission, atmosphere temperature at ground level gradually rises. Therefore the Kyoto Protocol was adopted to solve the matter in 1997. By the energy-saving law amended in 1999, it is advisable that an escalator is controlled to pause during no user. Now a photo-electric sensor is used to control escalator, but a pole to install the sensor is needed. Then, a new type of approach detection sensor using laser diode, CCD camera and CPLD, which can be built-in escalator, has been studied. This sensor can derive the irradiated area of laser beam by simple processing in which the laser beam is irradiated in only the odd field of the interlace video signal. By second order derivative of laser irradiated area, this sensor can detect only the approaching target but can not detect the target which crosses and stands in the sensing area.
The Core-Shell Approach to Formation of Ordered Nanoporous Materials
Chang, Jeong H.; Wang, Li Q.; Shin, Yongsoon; Jeong, Byeongmoon; Birnbaum, Jerome C.; Exarhos, Gregory J.
2002-03-04
This work describes a novel core-shell approach for the preparation of ordered nanoporous ceramic materials that involve a self-assembly process at the molecular level using MPEG-b-PDLLA bloack copolymers. This approach provides for rapid self-assembly and structural reorganization at room temperature. Selected MPEG-b-PDLLA block copolymers were synthesized with systematic variation of the chain lengths of the resident hydrophilic and hydrophobic blocks. This allows the micelle size to be systematically varied. Results from this work are used to understand the formation mechanism of nanoporous structures in which the pore size and wall thickness are closely dependent on the size of hydrophobic cores and hydrophilic shells of the block copolymer templates. The core-shell mechanism for nanoporous structure evolution is based on the size and contrasting micellar packing arrangements that are controlled by the copolymer.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron
NASA Astrophysics Data System (ADS)
Gaul, Konstantin; Berger, Robert
2017-07-01
A quasi-relativistic two-component approach for an efficient calculation of P ,T -odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
An ordered subset approach to including covariates in the transmission disequilibrium test.
Perdry, Hervé; Maher, Brion S; Babron, Marie-Claude; McHenry, Toby; Clerget-Darpoux, Françoise; Marazita, Mary L
2007-01-01
Clinical heterogeneity of a disease may reflect an underlying genetic heterogeneity, which may hinder the detection of trait loci. Consequently, many statistical methods have been developed that allow for the detection of linkage and/or association signals in the presence of heterogeneity.This report describes the work of two parallel investigations into similar approaches to ordered subset analysis, based on an observed covariate, in the framework of family-based association analysis using Genetic Analysis Workshop 15 simulated data.With an appropriate choice of covariate, both approaches allow detection of two loci that are undetectable by the classical transmission-disequilibrium test. For a third locus, detectable by the classical transmission-disequilibrium test, a substantial increase of power of detection is shown.
A preference-ordered discrete-gaming approach to air-combat analysis
NASA Technical Reports Server (NTRS)
Kelley, H. J.; Lefton, L.
1978-01-01
An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.
Shpynov, S; Pozdnichenko, N; Gumenuk, A
2015-01-01
Genome sequences of 36 Rickettsia and Orientia were analyzed using Formal Order Analysis (FOA). This approach takes into account arrangement of nucleotides in each sequence. A numerical characteristic, the average distance (remoteness) - "g" was used to compare of genomes. Our results corroborated previous separation of three groups within the genus Rickettsia, including typhus group, classic spotted fever group, and the ancestral group and Orientia as a separate genus. Rickettsia felis URRWXCal2 and R. akari Hartford were not in the same group based on FOA, therefore designation of a so-called transitional Rickettsia group could not be confirmed with this approach. Copyright © 2015 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
Arbitrary Lagrangian-Eulerian approach in reduced order modeling of a flow with a moving boundary
NASA Astrophysics Data System (ADS)
Stankiewicz, W.; Roszak, R.; Morzyński, M.
2013-06-01
Flow-induced deflections of aircraft structures result in oscillations that might turn into such a dangerous phenomena like flutter or buffeting. In this paper the design of an aeroelastic system consisting of Reduced Order Model (ROM) of the flow with a moving boundary is presented. The model is based on Galerkin projection of governing equation onto space spanned by modes obtained from high-fidelity computations. The motion of the boundary and mesh is defined in Arbitrary Lagrangian-Eulerian (ALE) approach and results in additional convective term in Galerkin system. The developed system is demonstrated on the example of a flow around an oscillating wing.
Mixed approach to incorporate self-consistency into order-N LCAO methods
Ordejon, P.; Artacho, E.; Soler, J.M.
1996-12-31
The authors present a method for selfconsistent Density Functional Theory calculations in which the effort required is proportional to the size of the system, thus allowing the application to problems with a very large size. The method is based on the LCAO approximation, and uses a mixed approach to obtain the Hamiltonian integrals between atomic orbitals with Order-N effort. They show the performance and the convergence properties of the method in several silicon and carbon systems, and in a DNA periodic chain.
Action approach to cosmological perturbations: the second-order metric in matter dominance
Boubekeur, Lotfi; Creminelli, Paolo; Vernizzi, Filippo; Norena, Jorge
2008-08-15
We study nonlinear cosmological perturbations during post-inflationary evolution, using the equivalence between a perfect barotropic fluid and a derivatively coupled scalar field with Lagrangian [-({partial_derivative}{phi}){sup 2}]{sup (1+w)/2w}. Since this Lagrangian is just a special case of k-inflation, this approach is analogous to the one employed in the study of non-Gaussianities from inflation. We use this method to derive the second-order metric during matter dominance in the comoving gauge directly as a function of the primordial inflationary perturbation {zeta}. Going to Poisson gauge, we recover the metric previously derived in the literature.
A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks
Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.
2013-06-15
A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the responsive surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses to identify the applicable methods for simulating a planar fuel cell stack.
NASA Astrophysics Data System (ADS)
Cao, Yanhua; Zhu, Jiang; Navon, I. M.; Luo, Zhendong
2007-04-01
Four-dimensional variational data assimilation (4DVAR) is a powerful tool for data assimilation in meteorology and oceanography. However, a major hurdle in use of 4DVAR for realistic general circulation models is the dimension of the control space (generally equal to the size of the model state variable and typically of order 107-108) and the high computational cost in computing the cost function and its gradient that require integration model and its adjoint model.In this paper, we propose a 4DVAR approach based on proper orthogonal decomposition (POD). POD is an efficient way to carry out reduced order modelling by identifying the few most energetic modes in a sequence of snapshots from a time-dependent system, and providing a means of obtaining a low-dimensional description of the system's dynamics. The POD-based 4DVAR not only reduces the dimension of control space, but also reduces the size of dynamical model, both in dramatic ways. The novelty of our approach also consists in the inclusion of adaptability, applied when in the process of iterative control the new control variables depart significantly from the ones on which the POD model was based upon. In addition, these approaches also allow to conveniently constructing the adjoint model.The proposed POD-based 4DVAR methods are tested and demonstrated using a reduced gravity wave ocean model in Pacific domain in the context of identical twin data assimilation experiments. A comparison with data assimilation experiments in the full model space shows that with an appropriate selection of the basis functions the optimization in the POD space is able to provide accurate results at a reduced computational cost. The POD-based 4DVAR methods have the potential to approximate the performance of full order 4DVAR with less than 1/100 computer time of the full order 4DVAR. The HFTN (Hessian-free truncated-Newton)algorithm benefits most from the order reduction (see (Int. J. Numer. Meth. Fluids, in press)) since
Yan, Zhiguo; Song, Yunxia; Park, Ju H
2017-05-01
This paper is concerned with the problems of finite-time stability and stabilization for stochastic Markov systems with mode-dependent time-delays. In order to reduce conservatism, a mode-dependent approach is utilized. Based on the derived stability conditions, state-feedback controller and observer-based controller are designed, respectively. A new N-mode algorithm is given to obtain the maximum value of time-delay. Finally, an example is used to show the merit of the proposed results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Saha, Sanjib; Carlsson, Katarina Steen; Gerdtham, Ulf-G; Eriksson, Margareta K.; Hagberg, Lars; Eliasson, Mats; Johansson, Pia
2013-01-01
Background Lifestyle interventions affect patients’ risk factors for metabolic syndrome (MeSy), a pre-stage to cardiovascular diseases, diabetes and related complications. An effective lifestyle intervention is the Swedish Björknäs intervention, a 3-year randomized controlled trial in primary care for MeSy patients. To include future disease-related cost and health consequences in a cost-effectiveness analysis, a simulation model was used to estimate the short-term (3-year) and long-term (lifelong) cost-effectiveness of the Björknäs study. Methodology/ Principal Findings A Markov micro-simulation model was used to predict the cost and quality-adjusted life years (QALYs) for MeSy-related diseases based on ten risk factors. Model inputs were levels of individual risk factors at baseline and at the third year. The model estimated short-term and long-term costs and QALYs for the intervention and control groups. The cost-effectiveness of the intervention was assessed using differences-in-differences approach to compare the changes between the groups in the health care and societal perspectives, using a 3% discount rate. A 95% confidence interval (CI), based on bootstrapping, and sensitivity analyses describe the uncertainty in the estimates. In the short-term, costs are predicted to increase over time in both groups, but less in the intervention group, resulting in an average cost saving/reduction of US$-700 (in 2012, US$1=six point five seven SEK) and US$-500, in the societal and health care perspectives. The long-term estimate also predicts increased costs, but considerably less in the intervention group: US$-7,300 (95% CI: US$-19,700 to US$-1,000) in the societal, and US$-1,500 (95% CI: US$-5,400 to US$2,650) in the health care perspective. As intervention costs were US$211 per participant, the intervention would result in cost saving. Furthermore, in the long-term an estimated 0.46 QALYs (95% CI: 0.12 to 0.69) per participant would be gained. Conclusions
Adaptive relaxation for the steady-state analysis of Markov chains
NASA Technical Reports Server (NTRS)
Horton, Graham
1994-01-01
We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
Liouville equation and Markov chains: epistemological and ontological probabilities
NASA Astrophysics Data System (ADS)
Costantini, D.; Garibaldi, U.
2006-06-01
The greatest difficulty of a probabilistic approach to the foundations of Statistical Mechanics lies in the fact that for a system ruled by classical or quantum mechanics a basic description exists, whose evolution is deterministic. For such a system any kind of irreversibility is impossible in principle. The probability used in this approach is epistemological. On the contrary for irreducible aperiodic Markov chains the invariant measure is reached with probability one whatever the initial conditions. Almost surely the uniform distributions, on which the equilibrium treatment of quantum and classical perfect gases is based, are reached when time goes by. The transition probability for binary collision, deduced by the Ehrenfest-Brillouin model, points out an irreducible aperiodic Markov chain and thus an equilibrium distribution. This means that we are describing the temporal probabilistic evolution of the system. The probability involved in this evolution is ontological.
Markov chain modeling of polymer translocation through pores
NASA Astrophysics Data System (ADS)
Mondaini, Felipe; Moriconi, L.
2011-09-01
We solve the Chapman-Kolmogorov equation and study the exact splitting probabilities of the general stochastic process which describes polymer translocation through membrane pores within the broad class of Markov chains. Transition probabilities, which satisfy a specific balance constraint, provide a refinement of the Chuang-Kantor-Kardar relaxation picture of translocation, allowing us to investigate finite size effects in the evaluation of dynamical scaling exponents. We find that (i) previous Langevin simulation results can be recovered only if corrections to the polymer mobility exponent are taken into account and (ii) the dynamical scaling exponents have a slow approach to their predicted asymptotic values as the polymer's length increases. We also address, along with strong support from additional numerical simulations, a critical discussion which points in a clear way the viability of the Markov chain approach put forward in this work.
Crossing Over…Markov Meets Mendel
Mneimneh, Saad
2012-01-01
Chromosomal crossover is a biological mechanism to combine parental traits. It is perhaps the first mechanism ever taught in any introductory biology class. The formulation of crossover, and resulting recombination, came about 100 years after Mendel's famous experiments. To a great extent, this formulation is consistent with the basic genetic findings of Mendel. More importantly, it provides a mathematical insight for his two laws (and corrects them). From a mathematical perspective, and while it retains similarities, genetic recombination guarantees diversity so that we do not rapidly converge to the same being. It is this diversity that made the study of biology possible. In particular, the problem of genetic mapping and linkage—one of the first efforts towards a computational approach to biology—relies heavily on the mathematical foundation of crossover and recombination. Nevertheless, as students we often overlook the mathematics of these phenomena. Emphasizing the mathematical aspect of Mendel's laws through crossover and recombination will prepare the students to make an early realization that biology, in addition to being experimental, IS a computational science. This can serve as a first step towards a broader curricular transformation in teaching biological sciences. I will show that a simple and modern treatment of Mendel's laws using a Markov chain will make this step possible, and it will only require basic college-level probability and calculus. My personal teaching experience confirms that students WANT to know Markov chains because they hear about them from bioinformaticists all the time. This entire exposition is based on three homework problems that I designed for a course in computational biology. A typical reader is, therefore, an instructional staff member or a student in a computational field (e.g., computer science, mathematics, statistics, computational biology, bioinformatics). However, other students may easily follow by omitting the
The sharp constant in Markov's inequality for the Laguerre weight
Sklyarov, Vyacheslav P
2009-06-30
We prove that the polynomial of degree n that deviates least from zero in the uniformly weighted metric with Laguerre weight is the extremal polynomial in Markov's inequality for the norm of the kth derivative. Moreover, the corresponding sharp constant does not exceed (8{sup k} n {exclamation_point} k {exclamation_point})/((n-k){exclamation_point} (2k){exclamation_point}). For the derivative of a fixed order this bound is asymptotically sharp as n{yields}{infinity}. Bibliography: 20 items.
A unidirectional approach for d-dimensional finite element methods for higher order on sparse grids
Bungartz, H.J.
1996-12-31
In the last years, sparse grids have turned out to be a very interesting approach for the efficient iterative numerical solution of elliptic boundary value problems. In comparison to standard (full grid) discretization schemes, the number of grid points can be reduced significantly from O(N{sup d}) to O(N(log{sub 2}(N)){sup d-1}) in the d-dimensional case, whereas the accuracy of the approximation to the finite element solution is only slightly deteriorated: For piecewise d-linear basis functions, e. g., an accuracy of the order O(N{sup - 2}(log{sub 2}(N)){sup d-1}) with respect to the L{sub 2}-norm and of the order O(N{sup -1}) with respect to the energy norm has been shown. Furthermore, regular sparse grids can be extended in a very simple and natural manner to adaptive ones, which makes the hierarchical sparse grid concept applicable to problems that require adaptive grid refinement, too. An approach is presented for the Laplacian on a uinit domain in this paper.
Next-to-leading order gravitational spin-orbit coupling in an effective field theory approach
Levi, Michele
2010-11-15
We use an effective field theory (EFT) approach to calculate the next-to-leading order (NLO) gravitational spin-orbit interaction between two spinning compact objects. The NLO spin-orbit interaction provides the most computationally complex sector of the NLO spin effects, previously derived within the EFT approach. In particular, it requires the inclusion of nonstationary cubic self-gravitational interaction, as well as the implementation of a spin supplementary condition (SSC) at higher orders. The EFT calculation is carried out in terms of the nonrelativistic gravitational field parametrization, making the calculation more efficient with no need to rely on automated computations, and illustrating the coupling hierarchy of the different gravitational field components to the spin and mass sources. Finally, we show explicitly how to relate the EFT derived spin results to the canonical results obtained with the Arnowitt-Deser-Misner (ADM) Hamiltonian formalism. This is done using noncanonical transformations, required due to the implementation of covariant SSC, as well as canonical transformations at the level of the Hamiltonian, with no need to resort to the equations of motion or the Dirac brackets.
New approach for identifying the zero-order fringe in variable wavelength interferometry
NASA Astrophysics Data System (ADS)
Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek
2016-12-01
The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects, when in the interference system the optical path difference exceeds a few wavelengths. The classical approach that consists in measuring the deflection of interference fringes fails because of strong edge effects. Broken continuity of interference fringes prevents from correct identification of the zero order fringe, which leads to significant errors. The family of these methods has been proposed originally by Professor Pluta in the 1980s but that time image processing facilities and computers were hardly available. Automated devices unfold a completely new approach to the classical measurement procedures. The Institute team has taken that new opportunity and transformed the technique into fully automated measurement devices offering commercial readiness of industry-grade quality. The method itself has been modified and new solutions and algorithms simultaneously have extended the field of application. This has concerned both construction aspects of the systems and software development in context of creating computerized instruments. The VAWI collection of instruments constitutes now the core of the Institute commercial offer. It is now practically applicable in industrial environment for measuring textile and optical fibers, strips of thin films, testing of wave plates and nonlinear affects in different materials. This paper describes new algorithms for identifying the zero order fringe, which increases the performance of the system as a whole and presents some examples of measurements of optical elements.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Optimal Control of Markov Processes with Age-Dependent Transition Rates
Ghosh, Mrinal K. Saha, Subhamay
2012-10-15
We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both discounted and average cost cases.
Using Games to Teach Markov Chains
ERIC Educational Resources Information Center
Johnson, Roger W.
2003-01-01
Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
Semi-Markov Unreliability-Range Evaluator
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1988-01-01
Reconfigurable, fault-tolerant systems modeled. Semi-Markov unreliability-range evaluator (SURE) computer program is software tool for analysis of reliability of reconfigurable, fault-tolerant systems. Based on new method for computing death-state probabilities of semi-Markov model. Computes accurate upper and lower bounds on probability of failure of system. Written in PASCAL.
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
An introduction to hidden Markov models.
Schuster-Böckler, Benjamin; Bateman, Alex
2007-06-01
This unit introduces the concept of hidden Markov models in computational biology. It describes them using simple biological examples, requiring as little mathematical knowledge as possible. The unit also presents a brief history of hidden Markov models and an overview of their current applications before concluding with a discussion of their limitations.
Using Games to Teach Markov Chains
ERIC Educational Resources Information Center
Johnson, Roger W.
2003-01-01
Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
An abstract language for specifying Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1986-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
An approximation of the Cioslowski-Mixon bond order indexes using the AlteQ approach
NASA Astrophysics Data System (ADS)
Salmina, Elena; Grishina, Maria A.; Potemkin, Vladimir A.
2013-09-01
Fast and reliable prediction of bond orders in organic systems based upon experimentally measured quantities can be performed using electron density features at bond critical points (J Am Chem Soc 105:5061-5068, 1983; J Phys Org Chem 16:133-141, 2003; Acta Cryst B 61:418-428, 2005; Acta Cryst B 63:142-150, 2007). These features are outcomes of low-temperature high-resolution X-ray diffraction experiments. However, a time-consuming procedure of gaining these quantities makes the prediction limited. In the present work we have employed an empirical approach AlteQ (J Comput Aided Mol Des 22:489-505, 2008) for evaluation of electron density properties. This approach uses a simple exponential function derived from comparison of electron density, gained from high-resolution X-ray crystallography, and distance to atomic nucleus what allows calculating density distribution in time-saving manner and gives results which are very close to experimental ones. As input data AlteQ accepts atomic coordinates of isolated molecules or molecular ensembles (for instance, protein-protein complexes or complexes of small molecules with proteins, etc.). Using AlteQ characteristics we have developed regression models predicting Cioslowski-Mixon bond order (CMBO) indexes (J Am Chem Soc 113(42):4142-4145, 1991). The models are characterized by high correlation coefficients lying in the range from 0.844 to 0.988 dependently on the type of covalent bond, thereby providing a bonding quantification that is in reasonable agreement with that obtained by orbital theory. Comparative analysis of CMBOs approximated using topological properties of AlteQ and experimental electron densities has shown that the models can be used for fast determination of bond orders directly from X-ray crystallography data and confirmed that AlteQ characteristics can replace experimental ones with satisfactory extent of accuracy.
An analytical approach to estimating the first order x-ray scatter in heterogeneous medium.
Yao, Weiguang; Leszczynski, Konrad W
2009-07-01
X-ray scatter estimation in heterogeneous medium is a challenge in improving the quality of diagnostic projection images and volumetric image reconstruction. For Compton scatter, the statistical behavior of the first order scatter can be accurately described by using the Klein-Nishina expression for Compton scattering cross section provided that the exact information of the medium including the geometry and the attenuation, which in fact is unknown, is known. The authors present an approach to approximately separate the unknowns from the Klein-Nishina formula and express the unknown part by the primary x-ray intensity at the detector. The approximation is fitted to the exact solution of the Klein-Nishina formulas by introducing one parameter, whose value is shown to be not sensitive to the linear attenuation coefficient and thickness of the scatterer. The performance of the approach is evaluated by comparing the result with those from the Klein-Nishina formula and Monte Carlo simulations. The approximation is close to the exact solution and the Monte Carlo simulation result for parallel and cone beam imaging systems with various field sizes, air gaps, and mono- and polyenergy of primary photons and for nonhomogeneous scatterer with various geometries of slabs and cylinders. For a wide range of x-ray energy including those often used in kilo- and megavoltage cone beam computed tomographies, the first order scatter fluence at the detector is mainly from Compton scatter. Thus, the approximate relation between the first order scatter and primary fluences at the detector is useful for scatter estimation in physical phantom projections.
Tacholess order-tracking approach for wind turbine gearbox fault detection
NASA Astrophysics Data System (ADS)
Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang
2017-09-01
Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
Frank, T D; Michelbrink, M; Beckmann, H; Schöllhorn, W I
2008-01-01
Differential learning is a learning concept that assists subjects to find individual optimal performance patterns for given complex motor skills. To this end, training is provided in terms of noisy training sessions that feature a large variety of between-exercises differences. In several previous experimental studies it has been shown that performance improvement due to differential learning is higher than due to traditional learning and performance improvement due to differential learning occurs even during post-training periods. In this study we develop a quantitative dynamical systems approach to differential learning. Accordingly, differential learning is regarded as a self-organized process that results in the emergence of subject- and context-dependent attractors. These attractors emerge due to noise-induced bifurcations involving order parameters in terms of learning rates. In contrast, traditional learning is regarded as an externally driven process that results in the emergence of environmentally specified attractors. Performance improvement during post-training periods is explained as an hysteresis effect. An order parameter equation for differential learning involving a fourth-order polynomial potential is discussed explicitly. New predictions concerning the relationship between traditional and differential learning are derived.
Tacholess order-tracking approach for wind turbine gearbox fault detection
NASA Astrophysics Data System (ADS)
Wang, Yi; Xie, Yong; Xu, Guanghua; Zhang, Sicong; Hou, Chenggang
2017-07-01
Monitoring of wind turbines under variable-speed operating conditions has become an important issue in recent years. The gearbox of a wind turbine is the most important transmission unit; it generally exhibits complex vibration signatures due to random variations in operating conditions. Spectral analysis is one of the main approaches in vibration signal processing. However, spectral analysis is based on a stationary assumption and thus inapplicable to the fault diagnosis of wind turbines under variable-speed operating conditions. This constraint limits the application of spectral analysis to wind turbine diagnosis in industrial applications. Although order-tracking methods have been proposed for wind turbine fault detection in recent years, current methods are only applicable to cases in which the instantaneous shaft phase is available. For wind turbines with limited structural spaces, collecting phase signals with tachometers or encoders is difficult. In this study, a tacholess order-tracking method for wind turbines is proposed to overcome the limitations of traditional techniques. The proposed method extracts the instantaneous phase from the vibration signal, resamples the signal at equiangular increments, and calculates the order spectrum for wind turbine fault identification. The effectiveness of the proposed method is experimentally validated with the vibration signals of wind turbines.
An Integral-Direct Linear-Scaling Second-Order Møller-Plesset Approach.
Nagy, Péter R; Samu, Gyula; Kállay, Mihály
2016-10-11
An integral-direct, iteration-free, linear-scaling, local second-order Møller-Plesset (MP2) approach is presented, which is also useful for spin-scaled MP2 calculations as well as for the efficient evaluation of the perturbative terms of double-hybrid density functionals. The method is based on a fragmentation approximation: the correlation contributions of the individual electron pairs are evaluated in domains constructed for the corresponding localized orbitals, and the correlation energies of distant electron pairs are computed with multipole expansions. The required electron repulsion integrals are calculated directly invoking the density fitting approximation; the storage of integrals and intermediates is avoided. The approach also utilizes natural auxiliary functions to reduce the size of the auxiliary basis of the domains and thereby the operation count and memory requirement. Our test calculations show that the approach recovers 99.9% of the canonical MP2 correlation energy and reproduces reaction energies with an average (maximum) error below 1 kJ/mol (4 kJ/mol). Our benchmark calculations demonstrate that the new method enables MP2 calculations for molecules with more than 2300 atoms and 26000 basis functions on a single processor.
A Fourth-Order Spline Collocation Approach for the Solution of a Boundary Layer Problem
NASA Astrophysics Data System (ADS)
Sayfy, Khoury, S.
2011-09-01
A finite element approach, based on cubic B-spline collocation, is presented for the numerical solution of a class of singularly perturbed two-point boundary value problems that possess a boundary layer at one or two end points. Due to the existence of a layer, the problem is handled using an adaptive spline collocation approach constructed over a non-uniform Shishkin-like meshes, defined via a carefully selected generating function. To tackle the case of nonlinearity, if it exists, an iterative scheme arising from Newton's method is employed. The rate of convergence is verified to be of fourth-order and is calculated using the double-mesh principle. The efficiency and applicability of the method are demonstrated by applying it to a number of linear and nonlinear examples. The numerical solutions are compared with both analytical and other existing numerical solutions in the literature. The numerical results confirm that this method is superior when contrasted with other accessible approaches and yields more accurate solutions.
Testing the Markov hypothesis in fluid flows
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Saggini, Frédéric
2016-05-01
Stochastic Markov processes are used very frequently to model, for example, processes in turbulence and subsurface flow and transport. Based on the weak Chapman-Kolmogorov equation and the strong Markov condition, we present methods to test the Markov hypothesis that is at the heart of these models. We demonstrate the capabilities of our methodology by testing the Markov hypothesis for fluid and inertial particles in turbulence, and fluid particles in the heterogeneous subsurface. In the context of subsurface macrodispersion, we find that depending on the heterogeneity level, Markov models work well above a certain scale of interest for media with different log-conductivity correlation structures. Moreover, we find surprising similarities in the velocity dynamics of the different media considered.
Podlubny, Igor; Skovranek, Tomas; Vinagre Jara, Blas M; Petras, Ivo; Verbitsky, Viktor; Chen, YangQuan
2013-05-13
In this paper, we further develop Podlubny's matrix approach to discretization of integrals and derivatives of non-integer order. Numerical integration and differentiation on non-equidistant grids is introduced and illustrated by several examples of numerical solution of differential equations with fractional derivatives of constant orders and with distributed-order derivatives. In this paper, for the first time, we present a variable-step-length approach that we call 'the method of large steps', because it is applied in combination with the matrix approach for each 'large step'. This new method is also illustrated by an easy-to-follow example. The presented approach allows fractional-order and distributed-order differentiation and integration of non-uniformly sampled signals, and opens the way to development of variable- and adaptive-step-length techniques for fractional- and distributed-order differential equations.
A secure arithmetic coding based on Markov model
NASA Astrophysics Data System (ADS)
Duan, Lili; Liao, Xiaofeng; Xiang, Tao
2011-06-01
We propose a modification of the standard arithmetic coding that can be applied to multimedia coding standards at entropy coding stage. In particular, we introduce a randomized arithmetic coding scheme based on order-1 Markov model that achieves encryption by scrambling the symbols' order in the model and choosing the relevant order's probability randomly, which is done with higher compression efficiency and good security. Experimental results and security analyses indicate that the algorithm can not only resist to existing attacks based on arithmetic coding, but also be immune to other cryptanalysis.
Photoassociation of a cold-atom-molecule pair. II. Second-order perturbation approach
Lepers, M.; Vexiau, R.; Bouloufa, N.; Dulieu, O.; Kokoouline, V.
2011-04-15
The electrostatic interaction between an excited atom and a diatomic ground-state molecule in an arbitrary rovibrational level at large mutual separations is investigated with a general second-order perturbation theory, in the perspective of modeling the photoassociation between cold atoms and molecules. We find that the combination of quadrupole-quadrupole and van der Waals interactions competes with the rotational energy of the dimer, limiting the range of validity of the perturbative approach to distances larger than 100 Bohr radii. Numerical results are given for the long-range interaction between Cs and Cs{sub 2}, showing that the photoassociation is probably efficient for any Cs{sub 2} rotational energy.
A formulation of a matrix sparsity approach for the quantum ordered search algorithm
NASA Astrophysics Data System (ADS)
Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran
One specific subset of quantum algorithms is Grovers Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have a log2N complexity; however, Grovers algorithm has been found to have an optimal complexity between the lower bound of ((lnN-1)/π≈0.221log2N) and the upper bound of 0.433log2N. We sought to lower the known upper bound of the OSP. With Farhi et al. MITCTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translational invariant algorithm to create quantum query algorithm restraints. With these restraints, one can find Laurent polynomials for various k — queries — and N — database sizes — thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding to their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP — overall ensuring further improvements will likely be made to reach the theorized lower bound.
A parallel approach for image segmentation by numerical minimization of a second-order functional
NASA Astrophysics Data System (ADS)
Zanella, Riccardo; Zanetti, Massimo; Ruggiero, Valeria
2016-10-01
Because of its attractive features, image segmentation has shown to be a promising tool in remote sensing. A known drawback about its implementation is computational complexity. Recently in [1] an effcient numerical method has been proposed for the minimization of a second-order variational approximation of the Blake-Zissermann functional. The method is an especially tailored version of the block-coordinate descent algorithm (BCDA). In order to enable the segmentation of large-size gridded data, such as Digital Surface Models, we combine a domain decomposition technique with BCDA and a parallel interconnection rule among blocks of variables. We aim to show that a simple tiling strategy enables us to treat large images even in a commodity multicore CPU, with no need of specific post-processing on tiles junctions. From the point of view of the performance, little computational effort is required to separate data in subdomains and the running time is mainly spent in concurrently solving the independent subproblems. Numerical results are provided to evaluate the effectiveness of the proposed parallel approach.
Self-energy effects in the Polchinski and Wick-ordered renormalization-group approaches
NASA Astrophysics Data System (ADS)
Katanin, A.
2011-12-01
I discuss functional renormalization group (fRG) schemes, which allow for non-perturbative treatment of the self-energy effects and do not rely on the one-particle irreducible functional. In particular, I consider the Polchinski or Wick-ordered scheme with amputation of full (instead of bare) Green functions, as well as more general schemes, and establish their relation to the ‘dynamical adjustment propagator’ scheme by Salmhofer (2007 Ann. Phys., Lpz. 16 171). While in the Polchinski scheme the amputation of full (instead of bare) Green functions improves treatment of the self-energy effects, the structure of the corresponding equations is not suitable to treat strong-coupling problems; it is also not evident how the mean-field solution of these problems is recovered in this scheme. For the Wick-ordered scheme, fully or partly excluding tadpole diagrams one can obtain forms of fRG hierarchy, which are suitable to treat strong-coupling problems. In particular, I emphasize the usefulness of the schemes, which are local in the cutoff parameter, and compare them to the one-particle irreducible approach.
Brouwer, Darren Henry; Cadars, Sylvian; Hotke, Kathryn; Van Huizen, Jared; Van Huizen, Nicholas
2017-03-01
Structure determination of layered materials can present challenges for conventional diffraction methods due to the fact that such materials often lack full three-dimensional periodicity since adjacent layers may not stack in an orderly and regular fashion. In such cases, NMR crystallography strategies involving a combination of solid-state NMR spectroscopy, powder X-ray diffraction, and computational chemistry methods can often reveal structural details that cannot be acquired from diffraction alone. We present here the structure determination of a surfactant-templated layered silicate material that lacks full three-dimensional crystallinity using such an NMR crystallography approach. Through a combination of powder X-ray diffraction and advanced (29)Si solid-state NMR spectroscopy, it is revealed that the structure of the silicate layer of this layered silicate material templated with cetyltrimethylammonium surfactant cations is isostructural with the silicate layer of a previously reported material referred to as ilerite, octosilicate, or RUB-18. High-field (1)H NMR spectroscopy reveals differences between the materials in terms of the ordering of silanol groups on the surfaces of the layers, as well as the contents of the inter-layer space.
NASA Astrophysics Data System (ADS)
Huré, J.-M.; Hersant, F.
2017-02-01
We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ˜1/N2. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.
Different coupled atmosphere-recharge oscillator Low Order Models for ENSO: a projection approach.
NASA Astrophysics Data System (ADS)
Bianucci, Marco; Mannella, Riccardo; Merlino, Silvia; Olivieri, Andrea
2016-04-01
El Ninõ-Southern Oscillation (ENSO) is a large scale geophysical phenomenon where, according to the celebrated recharge oscillator model (ROM), the Ocean slow variables given by the East Pacific Sea Surface Temperature (SST) and the average thermocline depth (h), interact with some fast "irrelevant" ones, representing mostly the atmosphere (the westerly wind burst and the Madden-Julian Oscillation). The fast variables are usually inserted in the model as an external stochastic forcing. In a recent work (M. Bianucci, "Analytical probability density function for the statistics of the ENSO phenomenon: asymmetry and power law tail" Geophysical Research Letters, under press) the author, using a projection approach applied to general deterministic coupled systems, gives a physically reasonable explanation for the use of stochastic models for mimicking the apparent random features of the ENSO phenomenon. Moreover, in the same paper, assuming that the interaction between the ROM and the fast atmosphere is of multiplicative type, i.e., it depends on the SST variable, an analytical expression for the equilibrium density function of the anomaly SST is obtained. This expression fits well the data from observations, reproducing the asymmetry and the power law tail of the histograms of the NINÕ3 index. Here, using the same theoretical approach, we consider and discuss different kind of interactions between the ROM and the other perturbing variables, and we take into account also non linear ROM as a low order model for ENSO. The theoretical and numerical results are then compared with data from observations.
Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order
NASA Astrophysics Data System (ADS)
Gu, Zheng-Cheng; Wen, Xiao-Gang
2009-10-01
We study the renormalization group flow of the Lagrangian for statistical and quantum systems by representing their path integral in terms of a tensor network. Using a tensor-entanglement-filtering renormalization approach that removes local entanglement and produces a coarse-grained lattice, we show that the resulting renormalization flow of the tensors in the tensor network has a nice fixed-point structure. The isolated fixed-point tensors Tinv plus the symmetry group Gsym of the tensors (i.e., the symmetry group of the Lagrangian) characterize various phases of the system. Such a characterization can describe both the symmetry breaking phases and topological phases, as illustrated by two-dimensional (2D) statistical Ising model, 2D statistical loop-gas model, and 1+1D quantum spin-1/2 and spin-1 models. In particular, using such a (Gsym,Tinv) characterization, we show that the Haldane phase for a spin-1 chain is a phase protected by the time-reversal, parity, and translation symmetries. Thus the Haldane phase is a symmetry-protected topological phase. The (Gsym,Tinv) characterization is more general than the characterizations based on the boundary spins and string order parameters. The tensor renormalization approach also allows us to study continuous phase transitions between symmetry breaking phases and/or topological phases. The scaling dimensions and the central charges for the critical points that describe those continuous phase transitions can be calculated from the fixed-point tensors at those critical points.
Gaussian approach for phase ordering in nonconserved scalar systems with long-range interactions
NASA Astrophysics Data System (ADS)
Filipe, J. A. N.; Bray, A. J.
1995-01-01
We have applied the Gaussian auxiliary field method to a nonconserved scalar system with attractive long-range interactions, falling off with distance as 1/rd+σ, where d is the spatial dimension and 0<σ<2. This study provides a test bed for the approach and shows some of the difficulties encountered in constructing a closed equation for the pair correlation function. For the relation φ=φ(m) between the order parameter φ and the auxiliary field m, the usual choice of the equilibrium interfacial profile is made. The equation obtained for the equal-time two-point correlation function is studied in the limiting cases of small and large values of the scaling variable. A Porod regime at short distance and an asymptotic power-law decay at large distance are obtained. The theory is not, however, consistent with the expected growth law and attempts to retrieve the correct growth lead to inconsistencies. These results indicate a failure of the Gaussian assumption for this system, when used in the context of the bulk dynamics. This statement holds at least within the present form of the mapping φ=φ(m), which appears to be the most natural choice, as well as the one consistent with the emergence of the Porod regime. By contrast, Ohta and Hayakawa have recenlty succeeded in implementing a Gaussian approach based on the interfacial dynamics of this system [Physica A 204, 482 (1994)]. This clearly suggests that, beyond the simplicity of short-range ``model A'' dynamics, a Gaussian approach can only capture the essential physical features if the crucial role of wall motion in domain growth is explicitly considered.
Markov reliability models for digital flight control systems
NASA Technical Reports Server (NTRS)
Mcgough, John; Reibman, Andrew; Trivedi, Kishor
1989-01-01
The reliability of digital flight control systems can often be accurately predicted using Markov chain models. The cost of numerical solution depends on a model's size and stiffness. Acyclic Markov models, a useful special case, are particularly amenable to efficient numerical solution. Even in the general case, instantaneous coverage approximation allows the reduction of some cyclic models to more readily solvable acyclic models. After considering the solution of single-phase models, the discussion is extended to phased-mission models. Phased-mission reliability models are classified based on the state restoration behavior that occurs between mission phases. As an economical approach for the solution of such models, the mean failure rate solution method is introduced. A numerical example is used to show the influence of fault-model parameters and interphase behavior on system unreliability.
Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823
Azadnia, Amir Hossein; Taheri, Shahrooz; Ghadimi, Pezhman; Saman, Muhamad Zameri Mat; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
Markov state modeling of sliding friction.
Pellegrini, F; Landes, François P; Laio, A; Prestipino, S; Tosatti, E
2016-11-01
Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
Markov state modeling of sliding friction
NASA Astrophysics Data System (ADS)
Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.
2016-11-01
Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
Clustering metagenomic sequences with interpolated Markov models
2010-01-01
Background Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output from metagenomic sequencing is a large set of reads of unknown origin, clustering reads together that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of environments interesting to many metagenomics projects. Results We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and supervised learning method Phymm called PHYSCIMM that performs better when evolutionarily close training genomes are available. Conclusions SCIMM and PHYSCIMM are highly accurate methods to cluster metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm. PMID:21044341
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women that impacts quality of life negatively. However, UI is usually viewed as likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training and numerous surgical procedures such as Burch colposuspension and Sling for UI which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI based on two measures: number of quality adjusted life years (QALY) and cost per QALY. Treeage Pro Healthcare software was employed in Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than certain utility value, for which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to study the comparative effectiveness analysis of available treatments for patients with UI, an important public health issue, widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
Entropy and long-range memory in random symbolic additive Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
NASA Astrophysics Data System (ADS)
Deo, C. S.; Srolovitz, D. J.
2002-09-01
We describe a first passage time Markov chain analysis of rare events in kinetic Monte Carlo (kMC) simulations and demonstrate how this analysis may be used to enhance kMC simulations of dislocation glide. Dislocation glide is described by the kink mechanism, which involves double kink nucleation, kink migration and kink-kink annihilation. Double kinks that nucleate on straight dislocations are unstable at small kink separations and tend to recombine immediately following nucleation. A very small fraction (<0.001) of nucleating double kinks survive to grow to a stable kink separation. The present approach replaces all of the events that lead up to the formation of a stable kink with a simple numerical calculation of the time required for stable kink formation. In this paper, we treat the double kink nucleation process as a temporally homogeneous birth-death Markov process and present a first passage time analysis of the Markov process in order to calculate the nucleation rate of a double kink with a stable kink separation. We discuss two methods to calculate the first passage time; one computes the distribution and the average of the first passage time, while the other uses a recursive relation to calculate the average first passage time. The average first passage times calculated by both approaches are shown to be in excellent agreement with direct Monte Carlo simulations for four idealized cases of double kink nucleation. Finally, we apply this approach to double kink nucleation on a screw dislocation in molybdenum and obtain the rates for formation of stable double kinks as a function of applied stress and temperature. Equivalent kMC simulations are too inefficient to be performed using commonly available computational resources.
NASA Astrophysics Data System (ADS)
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+) .
Application of Markov Graphs in Marketing
NASA Astrophysics Data System (ADS)
Bešić, C.; Sajfert, Z.; Đorđević, D.; Sajfert, V.
2007-04-01
The applications of Markov's processes theory in marketing are discussed. It was turned out that Markov's processes have wide field of applications. The advancement of marketing by the use of convolution of stationary Markov's distributions is analysed. It turned out that convolution distribution gives average net profit that is two times higher than the one obtained by usual Markov's distribution. It can be achieved if one selling chain is divided onto two parts with different ratios of output and input frequencies. The stability of marketing system was examined by the use of conforming coefficients. It was shown, by means of Jensen inequality that system remains stable if initial capital is higher than averaged losses.
Markov decision processes in natural resources management: observability and uncertainty
Williams, Byron K.
2015-01-01
The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
Markov decision processes in natural resources management: Observability and uncertainty
Williams, B.K.
2009-01-01
The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment
Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.; Grama, Ananth
2015-11-01
Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function as a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms the state-of-art alignment methods both in terms of biological and topological quality of the alignments.
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that the future state transitions do not depend only on the present state (Markov assumption) but also on the past through time since entry in the present state. Despite that these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimoniousness and computational complexity.
Semi-Markov Unreliability Range Evaluator
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Semi-Markov Unreliability Range Evaluator, SURE, computer program is software tool for analysis of reconfigurable, fault-tolerant systems. Traditional reliability analyses based on aggregates of fault-handling and fault-occurrence models. SURE provides efficient means for calculating accurate upper and lower bounds for probabilities of death states for large class of semi-Markov mathematical models, and not merely those reduced to critical-pair architectures.
Markov Boundary Discovery with Ridge Regularized Linear Models
Visweswaran, Shyam
2016-01-01
Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability of availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Zhou, De; Lin, Zhulu; Liu, Liming
2012-11-15
Land salinization and desalinization are complex processes affected by both biophysical and human-induced driving factors. Conventional approaches of land salinization assessment and simulation are either too time consuming or focus only on biophysical factors. The cellular automaton (CA)-Markov model, when coupled with spatial pattern analysis, is well suited for regional assessments and simulations of salt-affected landscapes since both biophysical and socioeconomic data can be efficiently incorporated into a geographic information system framework. Our hypothesis set forth that the CA-Markov model can serve as an alternative tool for regional assessment and simulation of land salinization or desalinization. Our results suggest that the CA-Markov model, when incorporating biophysical and human-induced factors, performs better than the model which did not account for these factors when simulating the salt-affected landscape of the Yinchuan Plain (China) in 2009. In general, the CA-Markov model is best suited for short-term simulations and the performance of the CA-Markov model is largely determined by the availability of high-quality, high-resolution socioeconomic data. The coupling of the CA-Markov model with spatial pattern analysis provides an improved understanding of spatial and temporal variations of salt-affected landscape changes and an option to test different soil management scenarios for salinity management.
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
Relativized hierarchical decomposition of Markov decision processes.
Ravindran, B
2013-01-01
Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).
Efficient Markov Network Structure Discovery Using Independence Tests
Bromberg, Facundo; Margaritis, Dimitris; Honavar, Vasant
2011-01-01
We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be NP-hard for Markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. The independence-based approach does not require the computation of the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl’s well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the performance of statistical tests to estimate them. To accomplish this efficiently GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show GSIMN can yield significant savings with respect to GSMN*, while generating a Markov network with comparable or in some cases improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, that produces all possible conditional independences resulting from repeatedly applying Pearl’s theorems on the known conditional independence tests. The results of this comparison show that GSIMN, by the sole use of the Triangle theorem, is nearly optimal in terms of the set of independences tests that it infers. PMID:22822297
Hidden Markov Models: The Best Models for Forager Movements?
Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan
2013-01-01
One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through Hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models which state the inference of behavioural modes as a classification issue, and may take better advantage of multivariate and non linear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that data with higher temporal resolution, HSMMs reach nearly 100% of accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory and we strongly recommend the use of HSMMs for such purpose. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance. PMID:24058400
Mitotic cell recognition with hidden Markov models
NASA Astrophysics Data System (ADS)
Gallardo, Greg M.; Yang, Fuxing; Ianzini, Fiorenza; Mackey, Michael; Sonka, Milan
2004-05-01
This work describes a method for detecting mitotic cells in time-lapse microscopy images of live cells. The image sequences are from the Large Scale Digital Cell Analysis System (LSDCAS) at the University of Iowa. LSDCAS is an automated microscope system capable of monitoring 1000 microscope fields over time intervals of up to one month. Manual analysis of the image sequences can be extremely time consuming. This work is part of a larger project to automate the image sequence analysis. A three-step approach is used. In the first step, potential mitotic cells are located in the image sequences. In the second step, object border segmentation is performed with the watershed algorithm. Objects in adjacent frames are grouped into object sequences for classification. In the third step, the image sequences are converted to feature vector sequences. The feature vectors contain spatial and temporal information. Hidden Markov Models (HMMs) are used to classify the feature vector sequences into dead cells, cell edges, and dividing cells. Discrete and continuous HMMs were trained on 500 sequences. The discrete HMM recognition rates were 62% for dead cells, 77% for cell edges, and 75% for dividing cells. The continuous HMM results were 68%, 88% and 77%.
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
Second-order perturbation theory: a covariant approach involving a barotropic equation of state
NASA Astrophysics Data System (ADS)
Osano, Bob
2017-06-01
We present a covariant and gauge-invariant formalism suited to the study of second-order effects associated with higher order tensor perturbations. The analytical method we have developed enables us to characterize pure second-order tensor perturbations about the FLRW model having different kinds of equations of state. Our analysis of the radiation case suggests that it may be feasible to examine the CMB polarization arising from higher order perturbations.
ERIC Educational Resources Information Center
Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
2016-01-01
We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…
ERIC Educational Resources Information Center
Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio
2016-01-01
We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…
NASA Astrophysics Data System (ADS)
Yang, Y.; Min, Y.; Jun, Y.
2012-12-01
structural components (e.g., Al-O-Si and Si-O-Si linkages) may serve as better base units than minerals (i.e., the pH dependence of Al-O-Si breakdown may be more useful than the pH dependence of Si release rate from any specific mineral). Second, Al/Si ordering is expected to show effects on the structure of interfacial layers formed during water-rock interactions, because from the mass-balance perspective, the interfacial layer is inherently related to dissolution incongruency due to the elemental reactivity differences. The fact that the incongruency is quantifiable using crystallographic parameters may suggest that the formation of the interfacial layer should at least be partially attributable to intrinsic non-stoichiometric dissolution (instead of secondary phase formation). Our approach provides a new means to connect atomic scale structural properties of a mineral to its macroscale dissolution behaviors.
Xue, Dingyü; Li, Tingxue
2017-04-27
The parameter optimization method for multivariable systems is extended to the controller design problems for multiple input multiple output (MIMO) square fractional-order plants. The algorithm can be applied to search for the optimal parameters of integer-order controllers for fractional-order plants with or without time delays. Two examples are given to present the controller design procedures for MIMO fractional-order systems. Simulation studies show that the integer-order controllers designed are robust to plant gain variations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events and CAME constructs and appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.
Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.
de Pasquale, F; Del Gratta, C; Romani, G L
2008-08-01
In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.
Handling target obscuration through Markov chain observations
NASA Astrophysics Data System (ADS)
Kouritzin, Michael A.; Wu, Biao
2008-04-01
Target Obscuration, including foliage or building obscuration of ground targets and landscape or horizon obscuration of airborne targets, plagues many real world filtering problems. In particular, ground moving target identification Doppler radar, mounted on a surveillance aircraft or unattended airborne vehicle, is used to detect motion consistent with targets of interest. However, these targets try to obscure themselves (at least partially) by, for example, traveling along the edge of a forest or around buildings. This has the effect of creating random blockages in the Doppler radar image that move dynamically and somewhat randomly through this image. Herein, we address tracking problems with target obscuration by building memory into the observations, eschewing the usual corrupted, distorted partial measurement assumptions of filtering in favor of dynamic Markov chain assumptions. In particular, we assume the observations are a Markov chain whose transition probabilities depend upon the signal. The state of the observation Markov chain attempts to depict the current obscuration and the Markov chain dynamics are used to handle the evolution of the partially obscured radar image. Modifications of the classical filtering equations that allow observation memory (in the form of a Markov chain) are given. We use particle filters to estimate the position of the moving targets. Moreover, positive proof-of-concept simulations are included.
A definition of metastability for Markov processes with detailed balance
NASA Astrophysics Data System (ADS)
Leyvraz, F.; Larralde, H.; Sanders, D. P.
2006-03-01
A definition of metastable states applicable to arbitrary finite state Markov processes satisfying detailed balance is discussed. In particular, we identify a crucial condition that distinguishes genuine metastable states from other types of slowly decaying modes and which leads to properties similar to those postulated in the restricted ensemble approach [1]. The intuitive physical meaning of this condition is simply that the total equilibrium probability of finding the system in the metastable state is negligible. As a concrete application of our formalism we present preliminary results on a 2D kinetic Ising model.
Numerical solutions for patterns statistics on Markov chains.
Nuel, Gregory
2006-01-01
We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Demin, Sergey; Emelyanova, Natalya; Gafarov, Fail; Hänggi, Peter
2003-03-01
In this work we develop a new method of diagnosing the nervous system diseases and a new approach in studying human gait dynamics with the help of the theory of discrete non-Markov random processes (Phys. Rev. E 62 (5) (2000) 6178, Phys. Rev. E 64 (2001) 066132, Phys. Rev. E 65 (2002) 046107, Physica A 303 (2002) 427). The stratification of the phase clouds and the statistical non-Markov effects in the time series of the dynamics of human gait are considered. We carried out the comparative analysis of the data of four age groups of healthy people: children (from 3 to 10 year olds), teenagers (from 11 to 14 year olds), young people (from 21 up to 29 year olds), elderly persons (from 71 to 77 year olds) and Parkinson patients. The full data set are analyzed with the help of the phase portraits of the four dynamic variables, the power spectra of the initial time correlation function and the memory functions of junior orders, the three first points in the spectra of the statistical non-Markov parameter. The received results allow to define the predisposition of the probationers to deflections in the central nervous system caused by Parkinson's disease. We have found out distinct differences between the five submitted groups. On this basis we offer a new method of diagnostics and forecasting Parkinson's disease.
Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach
Gin, D.L.; Fischer, W.M.; Gray, D.H.; Smith, R.C.
1998-12-15
A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material. 13 figs.
Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.
2000-01-01
Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.
Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields
NASA Technical Reports Server (NTRS)
Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.
2000-01-01
Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.
Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach
Gin, Douglas L.; Fischer, Walter M.; Gray, David H.; Smith, Ryan C.
1998-01-01
A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material.
New General Approach for Normally Ordering Coordinate-Momentum Operator Functions
NASA Astrophysics Data System (ADS)
Xu, Shi-Min; Xu, Xing-Lei; Li, Hong-Qi; Fan, Hong-Yi
2016-12-01
By virtue of integration technique within ordered product of operators and Dirac's representation theory we find a new general formula for normally ordering coordinate-momentum operator functions, that is f(ghat {{Q}}+hhat {P})= :exp [style {g2+h2 over 4}style {{partial 2} over {partial (ghat {{Q}}+hhat {P})2}}]f(ghat {{Q}}+hhat {P}):, where hat {Q} and hat {P} are the coordinate operator and momentum operator respectively, the symbol :: denotes normal ordering. Using this formula we can derive a series of new relations about Hermite polynomial and Laguerre polynomial, as well as some new differential relations.
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
Policy Transfer via Markov Logic Networks
NASA Astrophysics Data System (ADS)
Torrey, Lisa; Shavlik, Jude
We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
A novel approach toward fuzzy generalized bi-ideals in ordered semigroups.
Khan, Faiz Muhammad; Sarmin, Nor Haniza; Khan, Hidayat Ullah
2014-01-01
In several advanced fields like control engineering, computer science, fuzzy automata, finite state machine, and error correcting codes, the use of fuzzified algebraic structures especially ordered semigroups plays a central role. In this paper, we introduced a new and advanced generalization of fuzzy generalized bi-ideals of ordered semigroups. These new concepts are supported by suitable examples. These new notions are the generalizations of ordinary fuzzy generalized bi-ideals of ordered semigroups. Several fundamental theorems of ordered semigroups are investigated by the properties of these newly defined fuzzy generalized bi-ideals. Further, using level sets, ordinary fuzzy generalized bi-ideals are linked with these newly defined ideals which is the most significant part of this paper.
New approach for anti-normally and normally ordering bosonic-operator functions in quantum optics
NASA Astrophysics Data System (ADS)
Xu, Shi-Min; Zhang, Yun-Hai; Xu, Xing-Lei; Li, Hong-Qi; Wang, Ji-Suo
2016-12-01
In this paper, we provide a new kind of operator formula for anti-normally and normally ordering bosonic-operator functions in quantum optics, which can help us arrange a bosonic-operator function f(λQ̂ + νP̂) in its anti-normal and normal ordering conveniently. Furthermore, mutual transformation formulas between anti-normal ordering and normal ordering, which have good universality, are derived too. Based on these operator formulas, some new differential relations and some useful mathematical integral formulas are easily derived without really performing these integrations. Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2015AM025) and the Natural Science Foundation of Heze University, China (Grant No. XY14PY02).
A Novel Approach toward Fuzzy Generalized Bi-Ideals in Ordered Semigroups
Khan, Faiz Muhammad; Sarmin, Nor Haniza; Khan, Hidayat Ullah
2014-01-01
In several advanced fields like control engineering, computer science, fuzzy automata, finite state machine, and error correcting codes, the use of fuzzified algebraic structures especially ordered semigroups plays a central role. In this paper, we introduced a new and advanced generalization of fuzzy generalized bi-ideals of ordered semigroups. These new concepts are supported by suitable examples. These new notions are the generalizations of ordinary fuzzy generalized bi-ideals of ordered semigroups. Several fundamental theorems of ordered semigroups are investigated by the properties of these newly defined fuzzy generalized bi-ideals. Further, using level sets, ordinary fuzzy generalized bi-ideals are linked with these newly defined ideals which is the most significant part of this paper. PMID:24883375
Acheson, Daniel J.; MacDonald, Maryellen C.
2010-01-01
Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information. PMID:19210053
On Markov Earth Mover’s Distance
Wei, Jie
2015-01-01
In statistics, pattern recognition and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions and sequences. In statistics this is referred to as goodness-of-fit problem. Two leading goodness of fit methods are chi-square and Kolmogorov–Smirnov distances. The strictly localized nature of these two measures hinders their practical utilities in patterns and signals where the sample size is usually small. In view of this problem Rubner and colleagues developed the earth mover’s distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which find a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD to a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, like hat-EMD, the earth is only moved locally as dictated by the degree d of neighborhood system. Nodes that cannot be matched locally is handled by dummy source and destination nodes. By use of this localized network structure, a greedy algorithm that is linear to the degree d and number of nodes is then developed to evaluate the MEMD. Empirical studies on the use of MEMD on deterministic and statistical synthetic sequences and SIFT-based image retrieval suggested encouraging performances. PMID:25983362
NASA Astrophysics Data System (ADS)
Li, Yuan; Lv, Hui; Jiao, Dongxiu
2017-03-01
In this study, an adaptive neural network synchronization (NNS) approach, capable of guaranteeing prescribed performance (PP), is designed for non-identical fractional-order chaotic systems (FOCSs). For PP synchronization, we mean that the synchronization error converges to an arbitrary small region of the origin with convergence rate greater than some function given in advance. Neural networks are utilized to estimate unknown nonlinear functions in the closed-loop system. Based on the integer-order Lyapunov stability theorem, a fractional-order adaptive NNS controller is designed, and the PP can be guaranteed. Finally, simulation results are presented to confirm our results.
Closed-form solution for loop transfer recovery via reduced-order observers
NASA Technical Reports Server (NTRS)
Bacon, Barton J.
1989-01-01
A well-known property of the reduced-order observer is exploited to obtain the controller solution of the loop transfer recovery problem. In that problem, the controller is sought that generates some desired loop shape at the plant's input or output channels. Past approaches to this problem have typically yielded controllers generating loop shapes that only converge pointwise to the desired loop shape. In the proposed approach, however, the solution (at the input) is obtained directly when the plant's first Markov parameter is full rank. In the more general case when the plant's first Markov parameter is not full rank, the solution is obtained in an analogous manner by appending a special set of input and output signals to the original set. A dual form of the reduced-order observer is shown to yield the LTR solution at the output channel.
Closed-form solution for loop transfer recovery via reduced-order observers
NASA Technical Reports Server (NTRS)
Bacon, Barton J.
1989-01-01
A well-known property of the reduced-order observer is exploited to obtain the controller solution of the loop transfer recovery problem. In that problem, the controller is sought that generates some desired loop shape at the plant's input or output channels. Past approaches to this problem have typically yielded controllers generating loop shapes that only converge pointwise to the desired loop shape. In the proposed approach, however, the solution (at the input) is obtained directly when the plant's first Markov parameter is full rank. In the more general case when the plant's first Markov parameter is not full rank, the solution is obtained in an analogous manner by appending a special set of input and output signals to the original set. A dual form of the reduced-order observer is shown to yield the LTR solution at the output channel.
Parallel Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ren, Ruichao; Orkoulas, G.
2007-06-01
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Entropy production fluctuations of finite Markov chains
NASA Astrophysics Data System (ADS)
Jiang, Da-Quan; Qian, Min; Zhang, Fu-Xi
2003-09-01
For almost every trajectory segment over a finite time span of a finite Markov chain with any given initial distribution, the logarithm of the ratio of its probability to that of its time-reversal converges exponentially to the entropy production rate of the Markov chain. The large deviation rate function has a symmetry of Gallavotti-Cohen type, which is called the fluctuation theorem. Moreover, similar symmetries also hold for the rate functions of the joint distributions of general observables and the logarithmic probability ratio.
Protein family classification using sparse Markov transducers.
Eskin, E; Grundy, W N; Singer, Y
2000-01-01
In this paper we present a method for classifying proteins into families using sparse Markov transducers (SMTs). Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wild-cards in the conditioning sequences. Because substitutions of amino acids are common in protein families, incorporating wildcards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. We also present efficient data structures to improve the memory usage of the models. We evaluate SMTs by building protein family classifiers using the Pfam database and compare our results to previously published results.
Koay, Cheng Guan; Hurley, Samuel A.; Meyerand, M. Elizabeth
2011-01-01
Purpose: Diffusion MRI measurements are typically acquired sequentially with unit gradient directions that are distributed uniformly on the unit sphere. The ordering of the gradient directions has significant effect on the quality of dMRI-derived quantities. Even though several methods have been proposed to generate optimal orderings of gradient directions, these methods are not widely used in clinical studies because of the two major problems. The first problem is that the existing methods for generating highly uniform and antipodally symmetric gradient directions are inefficient. The second problem is that the existing methods for generating optimal orderings of gradient directions are also highly inefficient. In this work, the authors propose two extremely efficient and deterministic methods to solve these two problems. Methods: The method for generating nearly uniform point set on the unit sphere (with antipodal symmetry) is based upon the notion that the spacing between two consecutive points on the same latitude should be equal to the spacing between two consecutive latitudes. The method for generating optimal ordering of diffusion gradient directions is based on the idea that each subset of incremental sample size, which is derived from the prescribed and full set of gradient directions, must be as uniform as possible in terms of the modified electrostatic energy designed for antipodally symmetric point set. Results: The proposed method outperformed the state-of-the-art method in terms of computational efficiency by about six orders of magnitude. Conclusions: Two extremely efficient and deterministic methods have been developed for solving the problem of optimal ordering of diffusion gradient directions. The proposed strategy is also applicable to optimal view-ordering in three-dimensional radial MRI. PMID:21928652
ERIC Educational Resources Information Center
Tisdell, Christopher C.
2017-01-01
Knowing an equation has a unique solution is important from both a modelling and theoretical point of view. For over 70 years, the approach to learning and teaching "well posedness" of initial value problems (IVPs) for second- and higher-order ordinary differential equations has involved transforming the problem and its analysis to a…
A multilevel approach to the relationship between birth order and intelligence.
Wichman, Aaron L; Rodgers, Joseph Lee; MacCallum, Robert C
2006-01-01
Many studies show relationships between birth order and intelligence but use cross-sectional designs or manifest other threats to internal validity. Multilevel analyses with a control variable show that when these threats are removed, two major results emerge: (a) birth order has no significant influence on children's intelligence and (b) earlier reported birth order effects on intelligence are attributable to factors that vary between, not within, families. Analyses on 7- to 8 - and 13- to 14-year-old children from the National Longitudinal Survey of Youth support these conclusions. When hierarchical data structures, age variance of children, and within-family versus between-family variance sources are taken into account, previous research is seen in a new light.
Approaching magnetic ordering in graphene materials by FeCl3 intercalation.
Bointon, Thomas Hardisty; Khrapach, Ivan; Yakimova, Rositza; Shytov, Andrey V; Craciun, Monica F; Russo, Saverio
2014-01-01
We show the successful intercalation of large area (1 cm(2)) epitaxial few-layer graphene grown on 4H-SiC with FeCl3. Upon intercalation the resistivity of this system drops from an average value of ∼200 Ω/sq to ∼16 Ω/sq at room temperature. The magneto-conductance shows a weak localization feature with a temperature dependence typical of graphene Dirac fermions demonstrating the decoupling into parallel hole gases of each carbon layer composing the FeCl3 intercalated structure. The phase coherence length (∼1.2 μm at 280 mK) decreases rapidly only for temperatures higher than the 2D magnetic ordering in the intercalant layer while it tends to saturate for temperatures lower than the antiferromagnetic ordering between the planes of FeCl3 molecules providing the first evidence for magnetic ordering in the extreme two-dimensional limit of graphene.
A First and Second Order Moment Approach to Probabilistic Control Synthesis
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology based on the estimation of the first two order moments of the random variables and processes that describe the controlled response. Synthesis is performed by solving an multi-objective optimization problem where stability and performance requirements in time- and frequency domains are integrated. The use of the first two order moments allows for the efficient estimation of the cost function thus for a faster synthesis algorithm. While reliability requirements are taken into account by using bounds to failure probabilities, requirements related to undesirable variability are implemented by quantifying the concentration of the random outcome about a deterministic target. The Hammersley Sequence Sampling and the First- and Second-Moment- Second-Order approximations are used to estimate the moments, whose accuracy and associated computational complexity are compared numerically. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
Robust controller designs for second-order dynamic system: A virtual passive approach
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1990-01-01
A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
Approach to first-order exact solutions of the Ablowitz-Ladik equation.
Ankiewicz, Adrian; Akhmediev, Nail; Lederer, Falk
2011-05-01
We derive exact solutions of the Ablowitz-Ladik (A-L) equation using a special ansatz that linearly relates the real and imaginary parts of the complex function. This ansatz allows us to derive a family of first-order solutions of the A-L equation with two independent parameters. This novel technique shows that every exact solution of the A-L equation has a direct analog among first-order solutions of the nonlinear Schrödinger equation (NLSE).
Markov property of Gaussian states of canonical commutation relation algebras
NASA Astrophysics Data System (ADS)
Petz, Dénes; Pitrik, József
2009-11-01
The Markov property of Gaussian states of canonical commutation relation algebras is studied. The detailed description is given by the representing block matrix. The proof is short and allows infinite dimension. The relation to classical Gaussian Markov triplets is also described. The minimizer of relative entropy with respect to a Gaussian Markov state has the Markov property. The appendix contains formulas for the relative entropy.
NASA Astrophysics Data System (ADS)
Wu, Xin; Mei, Lijie; Huang, Guoqing; Liu, Sanqiu
2015-01-01
In general, there are differences between Lagrangian and Hamiltonian approaches at the same post-Newtonian (PN) order in a coordinate system under a coordinate gauge. They are from truncation of higher-order PN terms. They do not affect qualitative and quantitative results of the two approaches for a weak gravitational system such as the Solar System. Nevertheless, they may make the two approaches have somewhat or completely different dynamical qualitative features of integrability and nonintegrability (or order and chaos) for a strong gravitational field. Even if the two approaches have the same qualitative features, they have different quantitative results when the distances among compact objects are appropriately small. For a relativistic circular restricted three-body problem with the 1PN contribution from the circular motion of the primaries, although the two 1PN Lagrangian and Hamiltonian approaches are nonintegrable, their dynamics are somewhat nonequivalent for a small quantity of separations between the primaries when the initial conditions and other parameters are given. Particularly for comparable mass compact binaries with two arbitrary spins and spin effects restricted to the leading-order spin-orbit interaction, as an important example of extremely strong gravitational fields, the 2PN Arnowitt-Deser-Misner Lagrangian formulation is always nonintegrable and can be chaotic under some appropriate conditions because its equivalent higher-order PN canonical Hamiltonian includes many spin-spin couplings resulting in the absence of a fifth integral in a ten-dimensional phase space and is not integrable. However, the 2PN Arnowitt-Deser-Misner Hamiltonian is integrable and nonchaotic due to the presence of five constants of motion in the ten-dimensional phase space.
Prediction of User's Web-Browsing Behavior: Application of Markov Model.
Awad, M A; Khalil, I
2012-08-01
Web prediction is a classification problem in which we attempt to predict the next set of Web pages that a user may visit based on the knowledge of the previously visited pages. Predicting user's behavior while serving the Internet can be applied effectively in various critical applications. Such application has traditional tradeoffs between modeling complexity and prediction accuracy. In this paper, we analyze and study Markov model and all- Kth Markov model in Web prediction. We propose a new modified Markov model to alleviate the issue of scalability in the number of paths. In addition, we present a new two-tier prediction framework that creates an example classifier EC, based on the training examples and the generated classifiers. We show that such framework can improve the prediction time without compromising prediction accuracy. We have used standard benchmark data sets to analyze, compare, and demonstrate the effectiveness of our techniques using variations of Markov models and association rule mining. Our experiments show the effectiveness of our modified Markov model in reducing the number of paths without compromising accuracy. Additionally, the results support our analysis conclusions that accuracy improves with higher orders of all- Kth model.
Does Higher-Order Thinking Impinge on Learner-Centric Digital Approach?
ERIC Educational Resources Information Center
Mathew, Bincy; Raja, B. William Dharma
2015-01-01
Humans are social beings and the social cognition focuses on how one form impressions of other people, interpret the meaning of other people's behaviour and how people's behaviour is affected by our attitudes. The school provides complex social situations and in order to thrive, students must possess social cognition, the process of thinking about…
The interacting gaps model: reconciling theoretical and numerical approaches to limit-order models
NASA Astrophysics Data System (ADS)
Muchnik, Lev; Slanina, Frantisek; Solomon, Sorin
2003-12-01
We consider the emergence of power-law tails in the returns distribution of limit-order driven markets. We explain a previously observed clash between the theoretical and numerical studies of such models. We introduce a solvable model that interpolates between the previous studies and agrees with each of them in the relevant limit.
Does Higher-Order Thinking Impinge on Learner-Centric Digital Approach?
ERIC Educational Resources Information Center
Mathew, Bincy; Raja, B. William Dharma
2015-01-01
Humans are social beings and the social cognition focuses on how one form impressions of other people, interpret the meaning of other people's behaviour and how people's behaviour is affected by our attitudes. The school provides complex social situations and in order to thrive, students must possess social cognition, the process of thinking about…
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewslci recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.
2008-01-01
A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.
2008-01-01
A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…
Strategic Competence as a Fourth-Order Factor Model: A Structural Equation Modeling Approach
ERIC Educational Resources Information Center
Phakiti, Aek
2008-01-01
This article reports on an empirical study that tests a fourth-order factor model of strategic competence through the use of structural equation modeling (SEM). The study examines the hierarchical relationship of strategic competence to (a) strategic knowledge of cognitive and metacognitive strategy use in general (i.e., trait) and (b) strategic…
NASA Astrophysics Data System (ADS)
Hasunuma, Takumi; Kaneko, Tatsuya; Miyakoshi, Shohei; Ohta, Yukinori
2016-07-01
The variational cluster approximation is used to study the ground-state properties and single-particle spectra of the three-component fermionic Hubbard model defined on the two-dimensional square lattice at half filling. First, we show that either a paired Mott state or color-selective Mott state is realized in the paramagnetic system, depending on the anisotropy in the interaction strengths, except around the SU(3) symmetric point, where a paramagnetic metallic state is maintained. Then, by introducing Weiss fields to observe spontaneous symmetry breakings, we show that either a color-density-wave state or color-selective antiferromagnetic state is realized depending on the interaction anisotropy and that the first-order phase transition between these two states occurs at the SU(3) point. We moreover show that these staggered orders originate from the gain in potential energy (or Slater mechanism) near the SU(3) point but originate from the gain in kinetic energy (or Mott mechanism) when the interaction anisotropy is strong. The staggered orders near the SU(3) point disappear when the next-nearest-neighbor hopping parameters are introduced, indicating that these orders are fragile, protected only by the Fermi surface nesting.
ERIC Educational Resources Information Center
Acheson, Daniel J.; MacDonald, Maryellen C.
2009-01-01
Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by…
A bi-ordering approach to linking gene expression with clinical annotations in gastric cancer.
Shi, Fan; Leckie, Christopher; MacIntyre, Geoff; Haviv, Izhak; Boussioutas, Alex; Kowalczyk, Adam
2010-09-23
In the study of cancer genomics, gene expression microarrays, which measure thousands of genes in a single assay, provide abundant information for the investigation of interesting genes or biological pathways. However, in order to analyze the large number of noisy measurements in microarrays, effective and efficient bioinformatics techniques are needed to identify the associations between genes and relevant phenotypes. Moreover, systematic tests are needed to validate the statistical and biological significance of those discoveries. In this paper, we develop a robust and efficient method for exploratory analysis of microarray data, which produces a number of different orderings (rankings) of both genes and samples (reflecting correlation among those genes and samples). The core algorithm is closely related to biclustering, and so we first compare its performance with several existing biclustering algorithms on two real datasets - gastric cancer and lymphoma datasets. We then show on the gastric cancer data that the sample orderings generated by our method are highly statistically significant with respect to the histological classification of samples by using the Jonckheere trend test, while the gene modules are biologically significant with respect to biological processes (from the Gene Ontology). In particular, some of the gene modules associated with biclusters are closely linked to gastric cancer tumorigenesis reported in previous literature, while others are potentially novel discoveries. In conclusion, we have developed an effective and efficient method, Bi-Ordering Analysis, to detect informative patterns in gene expression microarrays by ranking genes and samples. In addition, a number of evaluation metrics were applied to assess both the statistical and biological significance of the resulting bi-orderings. The methodology was validated on gastric cancer and lymphoma datasets.
A bi-ordering approach to linking gene expression with clinical annotations in gastric cancer
2010-01-01
Background In the study of cancer genomics, gene expression microarrays, which measure thousands of genes in a single assay, provide abundant information for the investigation of interesting genes or biological pathways. However, in order to analyze the large number of noisy measurements in microarrays, effective and efficient bioinformatics techniques are needed to identify the associations between genes and relevant phenotypes. Moreover, systematic tests are needed to validate the statistical and biological significance of those discoveries. Results In this paper, we develop a robust and efficient method for exploratory analysis of microarray data, which produces a number of different orderings (rankings) of both genes and samples (reflecting correlation among those genes and samples). The core algorithm is closely related to biclustering, and so we first compare its performance with several existing biclustering algorithms on two real datasets - gastric cancer and lymphoma datasets. We then show on the gastric cancer data that the sample orderings generated by our method are highly statistically significant with respect to the histological classification of samples by using the Jonckheere trend test, while the gene modules are biologically significant with respect to biological processes (from the Gene Ontology). In particular, some of the gene modules associated with biclusters are closely linked to gastric cancer tumorigenesis reported in previous literature, while others are potentially novel discoveries. Conclusion In conclusion, we have developed an effective and efficient method, Bi-Ordering Analysis, to detect informative patterns in gene expression microarrays by ranking genes and samples. In addition, a number of evaluation metrics were applied to assess both the statistical and biological significance of the resulting bi-orderings. The methodology was validated on gastric cancer and lymphoma datasets. PMID:20860844
Estimation with Right-Censored Observations Under A Semi-Markov Model
Hu, X. Joan
2013-01-01
The semi-Markov process often provides a better framework than the classical Markov process for the analysis of events with multiple states. The purpose of this paper is twofold. First, we show that in the presence of right censoring, when the right end-point of the support of the censoring time is strictly less than the right end-point of the support of the semi-Markov kernel, the transition probability of the semi-Markov process is nonidentifiable, and the estimators proposed in the literature are inconsistent in general. We derive the set of all attainable values for the transition probability based on the censored data, and we propose a nonparametric inference procedure for the transition probability using this set. Second, the conventional approach to constructing confidence bands is not applicable for the semi-Markov kernel and the sojourn time distribution. We propose new perturbation resampling methods to construct these confidence bands. Different weights and transformations are explored in the construction. We use simulation to examine our proposals and illustrate them with hospitalization data from a recent cancer survivor study. PMID:23874060
MARKOV Model Application to Proliferation Risk Reduction of an Advanced Nuclear System
Bari,R.A.
2008-07-13
The Generation IV International Forum (GIF) emphasizes proliferation resistance and physical protection (PR&PP) as a main goal for future nuclear energy systems. The GIF PR&PP Working Group has developed a methodology for the evaluation of these systems. As an application of the methodology, Markov model has been developed for the evaluation of proliferation resistance and is demonstrated for a hypothetical Example Sodium Fast Reactor (ESFR) system. This paper presents the case of diversion by the facility owner/operator to obtain material that could be used in a nuclear weapon. The Markov model is applied to evaluate material diversion strategies. The following features of the Markov model are presented here: (1) An effective detection rate has been introduced to account for the implementation of multiple safeguards approaches at a given strategic point; (2) Technical failure to divert material is modeled as intrinsic barriers related to the design of the facility or the properties of the material in the facility; and (3) Concealment to defeat or degrade the performance of safeguards is recognized in the Markov model. Three proliferation risk measures are calculated directly by the Markov model: the detection probability, technical failure probability, and proliferation time. The material type is indicated by an index that is based on the quality of material diverted. Sensitivity cases have been done to demonstrate the effects of different modeling features on the measures of proliferation resistance.
2012-01-01
Initiative Network (SBInet) Supply Chain approach in the areas of lead times between repairs, spares inventory, and the identification of failure trends...The availability rate of the platform(s) needed to be improved because the current supply chain process enacted by the government did not work in a
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a
Obesity status transitions across the elementary years: Use of Markov chain modeling
USDA-ARS?s Scientific Manuscript database
Overweight and obesity status transition probabilities using first-order Markov transition models applied to elementary school children were assessed. Complete longitudinal data across eleven assessments were available from 1,494 elementary school children (from 7,599 students in 41 out of 45 school...
Nonparametric model validations for hidden Markov models with applications in financial econometrics
Zhao, Zhibiao
2011-01-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601
Dissipativity-Based Reliable Control for Fuzzy Markov Jump Systems With Actuator Faults.
Tao, Jie; Lu, Renquan; Shi, Peng; Su, Hongye; Wu, Zheng-Guang
2017-09-01
This paper is concerned with the problem of reliable dissipative control for Takagi-Sugeno fuzzy systems with Markov jumping parameters. Considering the influence of actuator faults, a sufficient condition is developed to ensure that the resultant closed-loop system is stochastically stable and strictly ( Q, S,R )-dissipative based on a relaxed approach in which mode-dependent and fuzzy-basis-dependent Lyapunov functions are employed. Then a reliable dissipative control for fuzzy Markov jump systems is designed, with sufficient condition proposed for the existence of guaranteed stability and dissipativity controller. The effectiveness and potential of the obtained design method is verified by two simulation examples.
Zhao, Zhibiao
2011-06-01
We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
A robust hidden semi-Markov model with application to aCGH data processing.
Ding, Jiarui; Shah, Sohrab
2013-01-01
Hidden semi-Markov models are effective at modelling sequences with succession of homogenous zones by choosing appropriate state duration distributions. To compensate for model mis-specification and provide protection against outliers, we design a robust hidden semi-Markov model with Student's t mixture models as the emission distributions. The proposed approach is used to model array based comparative genomic hybridization data. Experiments conducted on the benchmark data from the Coriell cell lines, and glioblastoma multiforme data illustrate the reliability of the technique.
NASA Astrophysics Data System (ADS)
Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava
2012-09-01
A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (Integer Order) Proportional Integral Derivative (PID) controllers especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via Linear Quadratic Regulator (LQR) method is extended for open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller have been approximated in this paper vis-à-vis a LQR tuned conventional integer order PID controller, to achieve equivalent integer order PID control system. This approach eases the implementation of analog/digital realization of a FOPID controller with its integer order counterpart along with the advantages of fractional order controller preserved. It is shown here in the paper that decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards greater damping regions which gives a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed as "M-curve". This phenomena is used to design a two-stage tuning algorithm which reduces the existing PID controller's effort in a significant manner compared to that with a single stage LQR based pole placement method at a desired closed loop damping and frequency.
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine, will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system s components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Zd. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain Mmon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain Mnn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest neighbor pair x < y in order with bias 1/2 ≤ pxy ≤ 1 and out of order with bias 1 - pxy. The Markov chain Mmon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between Mnn and Mmon by using simple combinatorial bijections, and we prove that Mnn is
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
Finite Markov Chains and Random Discrete Structures
1994-07-26
arrays with fixed margins 4. Persi Diaconis and Susan Holmes, Three Examples of Monte- Carlo Markov Chains: at the Interface between Statistical Computing...solutions for a math- ematical model of thermomechanical phase transitions in shape memory materials with Landau- Ginzburg free energy 1168 Angelo Favini
Semi-Markov Unreliability Range Evaluator (SURE)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1989-01-01
Analysis tool for reconfigurable, fault-tolerant systems, SURE provides efficient way to calculate accurate upper and lower bounds for death state probabilities for large class of semi-Markov models. Calculated bounds close enough for use in reliability studies of ultrareliable computer systems. Written in PASCAL for interactive execution and runs on DEC VAX computer under VMS.
Markov process analysis of atom probe data
NASA Astrophysics Data System (ADS)
Wang, Qi; T, J. Kinkus; Ren, Dagang
1990-08-01
A geometry model of field evaporation process is set up; with this model the field evaporation process can be described as Markov process. Its application to the earliest stage of phase transition is studied. For comparison, Camus' system Fe-Cr 45 at.% is calculated agin, and the same result is extracted from our method and intimated in our experimental data.
Bias in Markov models of disease.
Faissol, Daniel M; Griffin, Paul M; Swann, Julie L
2009-08-01
We examine bias in Markov models of diseases, including both chronic and infectious diseases. We consider two common types of Markov disease models: ones where disease progression changes by severity of disease, and ones where progression of disease changes in time or by age. We find sufficient conditions for bias to exist in models with aggregated transition probabilities when compared to models with state/time dependent transition probabilities. We also find that when aggregating data to compute transition probabilities, bias increases with the degree of data aggregation. We illustrate by examining bias in Markov models of Hepatitis C, Alzheimer's disease, and lung cancer using medical data and find that the bias is significant depending on the method used to aggregate the data. A key implication is that by not incorporating state/time dependent transition probabilities, studies that use Markov models of diseases may be significantly overestimating or underestimating disease progression. This could potentially result in incorrect recommendations from cost-effectiveness studies and incorrect disease burden forecasts.
Multiscale Representations of Markov Random Fields
1992-09-08
modeling a wide variety of biological, chelmical, electrical, mechanical and economic phenomena, [10]. Moreover, the Markov structure makes the models...Transactions on Informlation Theory, 18:232-240, March 1972. [65] J. WOODS AND C. RADEWAN, "Kalman Filtering in Two Dimensions," IEEE Trans- actions on
Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.
2006-07-01
Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
Markov jump linear systems-based position estimation for lower limb exoskeletons.
Nogueira, Samuel L; Siqueira, Adriano A G; Inoue, Roberto S; Terra, Marco H
2014-01-22
In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimate approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link position estimation (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties.
Markov Jump Linear Systems-Based Position Estimation for Lower Limb Exoskeletons
Nogueira, Samuel L.; Siqueira, Adriano A. G.; Inoue, Roberto S.; Terra, Marco H.
2014-01-01
In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimate approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in other link position estimation (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties. PMID:24451469
Fight deck human-automation mode confusion detection using a generalized fuzzy hidden Markov model
NASA Astrophysics Data System (ADS)
Lyu, Hao Lyu
Due to the need for aviation safety, convenience, and efficiency, the autopilot has been introduced into the cockpit. The fast development of the autopilot has brought great benefits to the aviation industry. On the human side, the flight deck has been designed to be a complex, tightly-coupled, and spatially distributed system. The problem of dysfunctional interaction between the pilot and the automation (human-automation interaction issue) has become more and more visible. Thus, detection of a mismatch between the pilot's expectation and automation's behavior in a timely manner is required. In order to solve this challenging problem, separate modeling of the pilot and the automation is necessary. In this thesis, an intent-based framework is introduced to detect the human-automation interaction issue. Under this framework, the pilot's expectation of the aircraft is modeled by pilot intent while the behavior of the automation system is modeled by automation intent. The mode confusion is detected when the automation intent differs from the pilot intent. The pilot intent is inferred by comparing the target value set by the pilot with the aircraft's current state. Meanwhile, the automation intent is inferred through the Generalized Fuzzy Hidden Markov Model (GFHMM), which is an extension of the classical Hidden Markov Model. The stochastic characteristic of the ``hidden'' intents is considered by introducing fuzzy logic. Different from the previous approaches of inferring automation intent, GFHMM does not require a probabilistic model for certain flight modes as prior knowledge. The parameters of GFHMM (initial fuzzy density of the intent, fuzzy transmission density, and fuzzy emission density) are determined through the flight data by using a machine learning technique, the Fuzzy C-Means clustering algorithm (FCM). Lastly, both the pilot's and automation's intent inference algorithms and the mode confusion detection method are validated through flight data.
Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution
Djordjevic, Ivan B.
2015-01-01
Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution; to mention few. In our recent paper published in Life, we have derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and it is not able to properly model the propagation of mutation errors in time, the process of aging, and evolution of genetic information through generations. To solve for these problems, we propose novel quantum mechanical models to accurately describe the process of creation spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) Markovian classical model, (ii) Markovian-like quantum model, and (iii) hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually
Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.
Djordjevic, Ivan B
2015-08-24
Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron-transfer in proteins, and evolution; to mention few. In our recent paper published in Life, we have derived the operator-sum representation of a biological channel based on codon basekets, and determined the quantum channel model suitable for study of the quantum biological channel capacity. However, this model is essentially memoryless and it is not able to properly model the propagation of mutation errors in time, the process of aging, and evolution of genetic information through generations. To solve for these problems, we propose novel quantum mechanical models to accurately describe the process of creation spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) Markovian classical model, (ii) Markovian-like quantum model, and (iii) hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum Master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model, when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually
Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.
2006-01-01
The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…
Joseph Buongiorno
2001-01-01
Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...
Birth order effects on the separation process in young adults: an evolutionary and dynamic approach.
Ziv, Ido; Hermel, Orly
2011-01-01
The present study analyzes the differential contribution of a familial or social focus in imaginative ideation (the personal fable and imagined audience mental constructs) to the separation-individuation process of firstborn, middleborn, and lastborn children. A total of 160 young adults were divided into 3 groups by birth order. Participants' separation-individuation process was evaluated by the Psychological Separation Inventory, and results were cross-validated by the Pathology of Separation-Individuation Inventory. The Imaginative Ideation Inventory tested the relative dominance of the familial and social environments in participants' mental constructs. The findings showed that middleborn children had attained more advanced separation and were lower in family-focused ideation and higher in nonfamilial social ideation. However, the familial and not the social ideation explained the variance in the separation process in all the groups. The findings offer new insights into the effects of birth order on separation and individuation in adolescents and young adults.
An overset mesh approach for 3D mixed element high-order discretizations
NASA Astrophysics Data System (ADS)
Brazell, Michael J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.
2016-10-01
A parallel high-order Discontinuous Galerkin (DG) method is used to solve the compressible Navier-Stokes equations in an overset mesh framework. The DG solver has many capabilities including: hp-adaption, curved cells, support for hybrid, mixed-element meshes, and moving meshes. Combining these capabilities with overset grids allows the DG solver to be used in problems with bodies in relative motion and in a near-body off-body solver strategy. The overset implementation is constructed to preserve the design accuracy of the baseline DG discretization. Multiple simulations are carried out to validate the accuracy and performance of the overset DG solver. These simulations demonstrate the capability of the high-order DG solver to handle complex geometry and large scale parallel simulations in an overset framework.
New approach to the first-order phase transition of Lennard-Jones fluids.
Muguruma, Chizuru; Okamoto, Yuko; Mikami, Masuhiro
2004-04-22
The multicanonical Monte Carlo method is applied to a bulk Lennard-Jones fluid system to investigate the liquid-solid phase transition. We take the example of a system of 108 argon particles. The multicanonical weight factor we determined turned out to be reliable for the energy range between -7.0 and -4.0 kJ/mol, which corresponds to the temperature range between 60 and 250 K. The expectation values of the thermodynamic quantities obtained from the multicanonical production run by the reweighting techniques exhibit the characteristics of first-order phase transitions between liquid and solid states around 150 K. The present study reveals that the multicanonical algorithm is particularly suitable for analyzing the transition state of the first-order phase transition in detail.
Universal order parameters and quantum phase transitions: a finite-size approach.
Shi, Qian-Qian; Zhou, Huan-Qiang; Batchelor, Murray T
2015-01-08
We propose a method to construct universal order parameters for quantum phase transitions in many-body lattice systems. The method exploits the H-orthogonality of a few near-degenerate lowest states of the Hamiltonian describing a given finite-size system, which makes it possible to perform finite-size scaling and take full advantage of currently available numerical algorithms. An explicit connection is established between the fidelity per site between two H-orthogonal states and the energy gap between the ground state and low-lying excited states in the finite-size system. The physical information encoded in this gap arising from finite-size fluctuations clarifies the origin of the universal order parameter. We demonstrate the procedure for the one-dimensional quantum formulation of the q-state Potts model, for q = 2, 3, 4 and 5, as prototypical examples, using finite-size data obtained from the density matrix renormalization group algorithm.
Multi-omics approach for estimating metabolic networks using low-order partial correlations.
Kayano, Mitsunori; Imoto, Seiya; Yamaguchi, Rui; Miyano, Satoru
2013-08-01
Two typical purposes of metabolome analysis are to estimate metabolic pathways and to understand the regulatory systems underlying the metabolism. A powerful source of information for these analyses is a set of multi-omics data for RNA, proteins, and metabolites. However, integrated methods that analyze multi-omics data simultaneously and unravel the systems behind metabolisms have not been well established. We developed a statistical method based on low-order partial correlations with a robust correlation coefficient for estimating metabolic networks from metabolome, proteome, and transcriptome data. Our method is defined by the maximum of low-order, particularly first-order, partial correlations (MF-PCor) in order to assign a correct edge with the highest correlation and to detect the factors that strongly affect the correlation coefficient. First, through numerical experiments with real and synthetic data, we showed that the use of protein and transcript data of enzymes improved the accuracy of the estimated metabolic networks in MF-PCor. In these experiments, the effectiveness of the proposed method was also demonstrated by comparison with a correlation network (Cor) and a Gaussian graphical model (GGM). Our theoretical investigation confirmed that the performance of MF-PCor could be superior to that of the competing methods. In addition, in the real data analysis, we investigated the role of metabolites, enzymes, and enzyme genes that were identified as important factors in the network established by MF-PCor. We then found that some of them corresponded to specific reactions between metabolites mediated by catalytic enzymes that were difficult to be identified by analysis based on metabolite data alone.
Make-to-order manufacturing - new approach to management of manufacturing processes
NASA Astrophysics Data System (ADS)
Saniuk, A.; Waszkowski, R.
2016-08-01
Strategic management must now be closely linked to the management at the operational level, because only in such a situation the company can be flexible and can quickly respond to emerging opportunities and pursue ever-changing strategic objectives. In these conditions industrial enterprises seek constantly new methods, tools and solutions which help to achieve competitive advantage. They are beginning to pay more attention to cost management, economic effectiveness and performance of business processes. In the article characteristics of make-to-order systems (MTO) and needs associated with managing such systems is identified based on the literature analysis. The main aim of this article is to present the results of research related to the development of a new solution dedicated to small and medium enterprises manufacture products solely on the basis of production orders (make-to- order systems). A set of indicators to enable continuous monitoring and control of key strategic areas this type of company is proposed. A presented solution includes the main assumptions of the following concepts: the Performance Management (PM), the Balanced Scorecard (BSC) and a combination of strategic management with the implementation of operational management. The main benefits of proposed solution are to increase effectiveness of MTO manufacturing company management.
Dynamic order reduction of thin-film deposition kinetics models: A reaction factorization approach
Adomaitis, Raymond A.
2016-01-15
A set of numerical tools for the analysis and dynamic dimension reduction of chemical vapor and atomic layer deposition (ALD) surface reaction models is developed in this work. The approach is based on a two-step process where in the first, the chemical species surface balance dynamic equations are factored to effectively decouple the (nonlinear) reaction rates, a process that eliminates redundant dynamic modes and that identifies conserved quantities. If successful, the second phase is implemented to factor out redundant dynamic modes when species relatively minor in concentration are omitted; if unsuccessful, the technique points to potential model structural problems. An alumina ALD process is used for an example consisting of 19 reactions and 23 surface and gas-phase species. Using the approach developed, the model is reduced by nineteen modes to a four-dimensional dynamic system without any knowledge of the reaction rate values. Results are interpreted in the context of potential model validation studies.
Searching for convergence in phylogenetic Markov chain Monte Carlo.
Beiko, Robert G; Keith, Jonathan M; Harlow, Timothy J; Ragan, Mark A
2006-08-01
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.
Medical imaging feasibility in body fluids using Markov chains
NASA Astrophysics Data System (ADS)
Kavehrad, M.; Armstrong, A. D.
2017-02-01
A relatively wide field-of-view and high resolution imaging is necessary for navigating the scope within the body, inspecting tissue, diagnosing disease, and guiding surgical interventions. As the large number of modes available in the multimode fibers (MMF) provides higher resolution, MMFs could replace the millimeters-thick bundles of fibers and lenses currently used in endoscopes. However, attributes of body fluids and obscurants such as blood, impose perennial limitations on resolution and reliability of optical imaging inside human body. To design and evaluate optimum imaging techniques that operate under realistic body fluids conditions, a good understanding of the channel (medium) behavior is necessary. In most prior works, Monte-Carlo Ray Tracing (MCRT) algorithm has been used to analyze the channel behavior. This task is quite numerically intensive. The focus of this paper is on investigating the possibility of simplifying this task by a direct extraction of state transition matrices associated with standard Markov modeling from the MCRT computer simulations programs. We show that by tracing a photon's trajectory in the body fluids via a Markov chain model, the angular distribution can be calculated by simple matrix multiplications. We also demonstrate that the new approach produces result that are close to those obtained by MCRT and other known methods. Furthermore, considering the fact that angular, spatial, and temporal distributions of energy are inter-related, mixing time of Monte- Carlo Markov Chain (MCMC) for different types of liquid concentrations is calculated based on Eigen-analysis of the state transition matrix and possibility of imaging in scattering media are investigated. To this end, we have started to characterize the body fluids that reduce the resolution of imaging [1].
Hidden Markov models for evolution and comparative genomics analysis.
Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A
2013-01-01
The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the discrete states evolution is generally used for the reconstruction of ancestral states. We modify this model to account for a case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; it is common for bioinformatics field. The main idea is formulation of the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on the simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. prediction score) appears to be more accurate then the reconstruction from the discrete states assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem
NASA Astrophysics Data System (ADS)
Costa, Oswaldo L. V.; Fragoso, Marcelo D.
2007-07-01
In this paper we devise a separation principle for the H2 optimal control problem of continuous-time Markov jump linear systems with partial observations and the Markov process taking values in an infinite countable set . We consider that only an output and the jump parameters are available to the controller. It is desired to design a dynamic Markov jump controller such that the closed loop system is stochastically stable and minimizes the H2-norm of the system. As in the case with no jumps, we show that an optimal controller can be obtained from two sets of infinite coupled algebraic Riccati equations, one associated with the optimal control problem when the state variable is available, and the other one associated with the optimal filtering problem. An important feature of our approach, not previously found in the literature, is to introduce an adjoint operator of the continuous-time Markov jump linear system to derive our results.
Fedorovich, E.
1995-09-01
The paper presents an extended theoretical background for applied modeling of the atmospheric convective boundary layer within the so-called zero-order jump approach, which implies vertical homogeneity of meteorological fields in the bulk of convective boundary layer (CBL) and zero-order discontinuities of variables at the interfaces of the layer. The zero-order jump model equations for the most typical cases of CBL are derived. The models of nonsteady, horizontally homogeneous CBL with and without shear, extensively studied in the past with the aid of zero-order jump models, are shown to be particular cases of the general zero-order jump theoretical framework. The integral budgets of momentum and heat are considered for different types of dry CBL. The profiles of vertical turbulent fluxes are presented and analyzed. The general version of the equation of CBL depth growth rate (entrainment rate equation) is obtained by the integration of the turbulence kinetic energy balance equation, invoking basic assumptions of the zero-order parameterizations of the CBL vertical structure. The problems of parameterizing the turbulence vertical structure and closure of the entrainment rate equation for specific cases of CBL are discussed. A parameterization scheme for the horizontal turbulent exchange in zero-order jump models of CBL is proposed. The developed theory is generalized for the case of CBL over irregular terrain. 28 refs., 2 figs.
A Kramers-Moyal Approach to the Analysis of Third-Order Noise with Applications in Option Valuation
Popescu, Dan M.; Lipan, Ovidiu
2015-01-01
We propose the use of the Kramers-Moyal expansion in the analysis of third-order noise. In particular, we show how the approach can be applied in the theoretical study of option valuation. Despite Pawula’s theorem, which states that a truncated model may exhibit poor statistical properties, we show that for a third-order Kramers-Moyal truncation model of an option’s and its underlier’s price, important properties emerge: (i) the option price can be written in a closed analytical form that involves the Airy function, (ii) the price is a positive function for positive skewness in the distribution, (iii) for negative skewness, the price becomes negative only for price values that are close to zero. Moreover, using third-order noise in option valuation reveals additional properties: (iv) the inconsistencies between two popular option pricing approaches (using a “delta-hedged” portfolio and using an option replicating portfolio) that are otherwise equivalent up to the second moment, (v) the ability to develop a measure R of how accurately an option can be replicated by a mixture of the underlying stocks and cash, (vi) further limitations of second-order models revealed by introducing third-order noise. PMID:25625856
A Type-Theoretic Approach to Higher-Order Modules with Sharing
1993-10-01
be easily restricted to " second -class" modules found in ML-like languages. 7? .:.!. . . - . . . U MInC QUALITY IS ZTJ3 r L a!1.,j1 Oris t HI aSe idl 1...If run-time selection is not used, modules behave exactly as they would in a more familiar "’ second -class’ module system such as is found in SML...on Girard’s F, [14] in much the same way that many systems are based on the second - order lambda calculus (F ). That is to say, our system can be
Dynamic Context-Aware Event Recognition Based on Markov Logic Networks
Liu, Fagui; Deng, Dacheng; Li, Ping
2017-01-01
Event recognition in smart spaces is an important and challenging task. Most existing approaches for event recognition purely employ either logical methods that do not handle uncertainty, or probabilistic methods that can hardly manage the representation of structured information. To overcome these limitations, especially in the situation where the uncertainty of sensing data is dynamically changing over the time, we propose a multi-level information fusion model for sensing data and contextual information, and also present a corresponding method to handle uncertainty for event recognition based on Markov logic networks (MLNs) which combine the expressivity of first order logic (FOL) and the uncertainty disposal of probabilistic graphical models (PGMs). Then we put forward an algorithm for updating formula weights in MLNs to deal with data dynamics. Experiments on two datasets from different scenarios are conducted to evaluate the proposed approach. The results show that our approach (i) provides an effective way to recognize events by using the fusion of uncertain data and contextual information based on MLNs and (ii) outperforms the original MLNs-based method in dealing with dynamic data. PMID:28257113
Hidden Markov modeling for single channel kinetics with filtering and correlated noise.
Qin, F; Auerbach, A; Sachs, F
2000-01-01
Hidden Markov modeling (HMM) can be applied to extract single channel kinetics at signal-to-noise ratios that are too low for conventional analysis. There are two general HMM approaches: traditional Baum's reestimation and direct optimization. The optimization approach has the advantage that it optimizes the rate constants directly. This allows setting constraints on the rate constants, fitting multiple data sets across different experimental conditions, and handling nonstationary channels where the starting probability of the channel depends on the unknown kinetics. We present here an extension of this approach that addresses the additional issues of low-pass filtering and correlated noise. The filtering is modeled using a finite impulse response (FIR) filter applied to the underlying signal, and the noise correlation is accounted for using an autoregressive (AR) process. In addition to correlated background noise, the algorithm allows for excess open channel noise that can be white or correlated. To maximize the efficiency of the algorithm, we derive the analytical derivatives of the likelihood function with respect to all unknown model parameters. The search of the likelihood space is performed using a variable metric method. Extension of the algorithm to data containing multiple channels is described. Examples are presented that demonstrate the applicability and effectiveness of the algorithm. Practical issues such as the selection of appropriate noise AR orders are also discussed through examples. PMID:11023898
A first-order time-domain Green's function approach to supersonic unsteady flow
NASA Technical Reports Server (NTRS)
Freedman, M. I.; Tseng, K.
1985-01-01
A time-domain Green's Function Method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range wherein the linear potential flow assumption is valid. The Green's function method is employed in order to convert the potential-flow differential equation into an integral one. This integral equation is then discretized, in space through standard finite-element technique, and in time through finite-difference, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. Consistent with the spatial linear (first-order) finite-element approximations, the potential and co-normalwash are assumed to vary linearly in time. The long range goal of our research is to develop a comprehensive theory for unsteady supersonic potential aerodynamics which is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.
NASA Astrophysics Data System (ADS)
Pokam Nguewawe, Chancelor; Fewo, Serge I.; Yemélé, David
2017-01-01
The effects of higher-order (HO) terms on the properties of the compact bright (CB) pulse described by the dispersionless nonlocal nonlinear Schrödinger (DNNLS) equation are investigated. These effects include third-order dispersion (TOD), the Raman term, and the time derivative of the pulse envelope. By means of the collective variable method, the dynamical behavior of the pulse amplitude, width, frequency, velocity, phase, and chirp during propagation is pointed out. The results indicate that the CB pulse experiences a self-frequency shift and self-steepening, respectively, in the presence of an isolated Raman term and the time derivative of the pulse envelope and acquires a velocity as the result of the TOD effect. In addition, TOD may also induce the breathing mode inside the variation of the pulse parameters when the width of the input pulse is slightly less than that of the unperturbed CB pulse. The combination of these terms, indispensable for describing ultrashort pulses, reproduces all these phenomena in the CB pulse behavior. Further, other properties are observed, namely, the pulse decay, the breathing mode even when the unperturbed CB pulse is taken as the input signal, and the attenuated pulse. These results are in good agreement with the results of the direct numerical simulations of the DNNLS equation with HO terms.
Combinatorial approach to generalized Bell and Stirling numbers and boson normal ordering problem
Mendez, M.A.; Blasiak, P.; Penson, K.A.
2005-08-01
We consider the numbers arising in the problem of normal ordering of expressions in boson creation a{sup {dagger}} and annihilation a operators ([a,a{sup {dagger}}]=1). We treat a general form of a boson string (a{sup {dagger}}){sup r{sub n}}a{sup s{sub n}}...(a{sup {dagger}}){sup r{sub 2}}a{sup s{sub 2}}(a{sup {dagger}}){sup r{sub 1}}a{sup s{sub 1}} which is shown to be associated with generalizations of Stirling and Bell numbers. The recurrence relations and closed-form expressions (Dobinski-type formulas) are obtained for these quantities by both algebraic and combinatorial methods. By extensive use of methods of combinatorial analysis we prove the equivalence of the aforementioned problem to the enumeration of special families of graphs. This link provides a combinatorial interpretation of the numbers arising in this normal ordering problem.
Constraint-preserving boundary conditions in the 3+1 first-order approach
Bona, C.; Bona-Casas, C.
2010-09-15
A set of energy-momentum constraint-preserving boundary conditions is proposed for the first-order Z4 case. The stability of a simple numerical implementation is tested in the linear regime (robust stability test), both with the standard corner and vertex treatment and with a modified finite-differences stencil for boundary points which avoids corners and vertices even in Cartesian-like grids. Moreover, the proposed boundary conditions are tested in a strong-field scenario, the Gowdy waves metric, showing the expected rate of convergence. The accumulated amount of energy-momentum constraint violations is similar or even smaller than the one generated by either periodic or reflection conditions, which are exact in the Gowdy waves case. As a side theoretical result, a new symmetrizer is explicitly given, which extends the parametric domain of symmetric hyperbolicity for the Z4 formalism. The application of these results to first-order Baumgarte-Shapiro-Shibata-Nakamura-like formalisms is also considered.
LDA+DMFT approach to ordering phenomena and the structural stability of correlated materials
NASA Astrophysics Data System (ADS)
Kuneš, J.; Leonov, I.; Augustinský, P.; Křápek, V.; Kollar, M.; Vollhardt, D.
2017-07-01
Materials with correlated electrons often respond very strongly to external or internal influences, leading to instabilities and states of matter with broken symmetry. This behavior can be studied theoretically either by evaluating the linear response characteristics, or by simulating the ordered phases of the materials under investigation. We developed the necessary tools within the dynamical mean-field theory (DMFT) to search for electronic instabilities in materials close to spin-state crossovers and to analyze the properties of the corresponding ordered states. This investigation, motivated by the physics of LaCoO3, led to a discovery of condensation of spinful excitons in the two-orbital Hubbard model with a surprisingly rich phase diagram. The results are reviewed in the first part of the article. Electronic correlations can also be the driving force behind structural transformations of materials. To be able to investigate correlation-induced phase instabilities we developed and implemented a formalism for the computation of total energies and forces within a fully charge self-consistent combination of density functional theory and DMFT. Applications of this scheme to the study of structural instabilities of selected correlated electron materials such as Fe and FeSe are reviewed in the second part of the paper.
Promotion of higher order of cognition in undergraduate medical students using case-based approach.
Dubey, Suparna; Dubey, Ashok Kumar
2017-01-01
The curriculum of pathology is conventionally "taught" in a series of didactic lectures, which promotes learning by rote. In this study, case-based learning (CBL) was introduced to assess its effect on higher order cognition and problem-solving skills in undergraduate medical students. The prescribed syllabus of hepatobiliary system was delivered to the undergraduate medical students of the fourth semester by conventional didactic lectures. A pretest, which contained questions designed to test both analysis and recall, was administered, followed by CBL sessions, in the presence of a facilitator, encouraging active discussion among students. Students were then assessed using a similar posttest. The perceptions of the students and the faculty were gathered by means of feedback questionnaires. The scores obtained by the students in the pre- and post-test were compared by paired t-test. Eighty-one students participated in CBL sessions, with 95.06% expressing a desire for more such sessions, preferably in all the topics. The faculty members also felt that CBL would be beneficial for the students but opined that it should be restricted to some topics. CBL was found to cause a highly significant (P < 0.0001) improvement in the students' higher levels of cognition, whereas the lower orders of cognition remained unaffected (P = 0.2048). CBL promotes active learning and helps in the development of critical thinking and analysis in undergraduate medical students. Although it is resource-intensive, an attempt should be made to incorporate it along with lectures in clinically important topics.
Second-order corrections to neutrino two-flavor oscillation parameters in the wave packet approach
NASA Astrophysics Data System (ADS)
Bernardini, A. E.; Guzzo, M. M.; Torres, F. R.
2006-11-01
We report about an analytic study involving the intermediate wave packet formalism for quantifying the physically relevant information which appears in the neutrino two-flavor conversion formula and helping us to obtain more precise limits and ranges for neutrino flavor oscillation. By following the sequence of analytic approximations where we assume a strictly peaked momentum distribution and consider the second-order corrections in a power series expansion of the energy, we point out a residual time-dependent phase which, coupled with the spreading/slippage effects, can subtly modify the neutrino-oscillation parameters and limits. Such second-order effects are usually ignored in the relativistic wave packet treatment, but they present an evident dependence on the propagation regime so that some small modifications to the oscillation pattern, even in the ultra-relativistic limit, can be quantified. These modifications are implemented in the confrontation with the neutrino-oscillation parameter range (mass-squared difference Δm2 and the mixing angle θ) where we assume the same wave packet parameters previously noticed in the literature in a kind of toy model for some reactor experiments. Generically speaking, our analysis parallels the recent experimental purposes which are concerned with higher precision parameter measurements. To summarize, we show that the effectiveness of a more accurate determination of Δm2 and θ depends on the wave packet width a and on the averaged propagating energy flux E¯ which still correspond to open variables for some classes of experiments.
Constraint-preserving boundary conditions in the 3+1 first-order approach
NASA Astrophysics Data System (ADS)
Bona, C.; Bona-Casas, C.
2010-09-01
A set of energy-momentum constraint-preserving boundary conditions is proposed for the first-order Z4 case. The stability of a simple numerical implementation is tested in the linear regime (robust stability test), both with the standard corner and vertex treatment and with a modified finite-differences stencil for boundary points which avoids corners and vertices even in Cartesian-like grids. Moreover, the proposed boundary conditions are tested in a strong-field scenario, the Gowdy waves metric, showing the expected rate of convergence. The accumulated amount of energy-momentum constraint violations is similar or even smaller than the one generated by either periodic or reflection conditions, which are exact in the Gowdy waves case. As a side theoretical result, a new symmetrizer is explicitly given, which extends the parametric domain of symmetric hyperbolicity for the Z4 formalism. The application of these results to first-order Baumgarte-Shapiro-Shibata-Nakamura-like formalisms is also considered.
NASA Technical Reports Server (NTRS)
Pasha, M. A.; Dazzo, J. J.; Silverthorn, J. T.
1982-01-01
An investigation of approach and landing longitudinal flying qualities, based on data generated using a variable stability NT-33 aircraft combined with significant control system dynamics is described. An optimum pilot lead time for pitch tracking, flight path angle tracking, and combined pitch and flight path angle tracking tasks is determined from a closed loop simulation using integral squared error (ISE) as a performance measure. Pilot gain and lead time were varied in the closed loop simulation of the pilot and aircraft to obtain the best performance for different control system configurations. The results lead to the selection of an optimum lead time using ISE as a performance criterion. Using this value of optimum lead time, a correlation is then found between pilot rating and performance with changes in the control system and in the aircraft dynamics. It is also shown that pilot rating is closely related to pilot workload which, in turn, is related to the amount of lead which the pilot must generate to obtain satisfactory response. The results also indicate that the pilot may use pitch angle tracking for the approach task and then add flight path angle tracking for the flare and touchdown.
The algebra of the general Markov model on phylogenetic trees and networks.
Sumner, J G; Holland, B R; Jarvis, P D
2012-04-01
It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.
General approach for studying first-order phase transitions at low temperatures.
Fiore, C E; da Luz, M G E
2011-12-02
By combining different ideas, a general and efficient protocol to deal with discontinuous phase transitions at low temperatures is proposed. For small T's, it is possible to derive a generic analytic expression for appropriate order parameters, whose coefficients are obtained from simple simulations. Once in such regimes simulations by standard algorithms are not reliable; an enhanced tempering method, the parallel tempering-accurate for small and intermediate system sizes with rather low computational cost-is used. Finally, from finite size analysis, one can obtain the thermodynamic limit. The procedure is illustrated for four distinct models, demonstrating its power, e.g., to locate coexistence lines and the phase density at the coexistence. © 2011 American Physical Society
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms which are unable to reveal OPSMs entirely in NP-complete problem. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequence (ACS) between every two row sequences, and therefore all deep OPSMs will not be missed. Then, an improved data structure for prefix tree was used to store and traverse ACS, and Apriori principle was employed to efficiently mine the frequent sequential pattern. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.
Measuring the Edwards-Anderson order parameter of the Bose glass: A quantum gas microscope approach
NASA Astrophysics Data System (ADS)
Thomson, S. J.; Walker, L. S.; Harte, T. L.; Bruce, G. D.
2016-11-01
With the advent of spatially resolved fluorescence imaging in quantum gas microscopes, it is now possible to directly image glassy phases and probe the local effects of disorder in a highly controllable setup. Here we present numerical calculations using a spatially resolved local mean-field theory, show that it captures the essential physics of the disordered system, and use it to simulate the density distributions seen in single-shot fluorescence microscopy. From these simulated images we extract local properties of the phases which are measurable by a quantum gas microscope and show that unambiguous detection of the Bose glass is possible. In particular, we show that experimental determination of the Edwards-Anderson order parameter is possible in a strongly correlated quantum system using existing experiments. We also suggest modifications to the experiments which will allow further properties of the Bose glass to be measured.
Alves-Foss, J.; Levitt, K.
1991-01-01
In this paper we present a generalization of McCullough's restrictiveness model as the basis for proving security properties about distributed system designs. We mechanize this generalization and an event-based model of computer systems in the HOL (Higher Order Logic) system to prove the composability of the model and several other properties about the model. We then develop a set of generalized classes of system components and show for which families of user views they satisfied the model. Using these classes we develop a collection of general system components that are instantiations of one of these classes and show that the instantiations also satisfied the security property. We then conclude with a sample distributed secure system, based on the Rushby and Randell distributed system design and designed using our collection of components, and show how our mechanized verification system can be used to verify such designs. 16 refs., 20 figs.
Promotion of higher order of cognition in undergraduate medical students using case-based approach
Dubey, Suparna; Dubey, Ashok Kumar
2017-01-01
BACKGROUND: The curriculum of pathology is conventionally “taught” in a series of didactic lectures, which promotes learning by rote. In this study, case-based learning (CBL) was introduced to assess its effect on higher order cognition and problem-solving skills in undergraduate medical students. SUBJECTS AND METHODS: The prescribed syllabus of hepatobiliary system was delivered to the undergraduate medical students of the fourth semester by conventional didactic lectures. A pretest, which contained questions designed to test both analysis and recall, was administered, followed by CBL sessions, in the presence of a facilitator, encouraging active discussion among students. Students were then assessed using a similar posttest. The perceptions of the students and the faculty were gathered by means of feedback questionnaires. The scores obtained by the students in the pre- and post-test were compared by paired t-test. RESULTS: Eighty-one students participated in CBL sessions, with 95.06% expressing a desire for more such sessions, preferably in all the topics. The faculty members also felt that CBL would be beneficial for the students but opined that it should be restricted to some topics. CBL was found to cause a highly significant (P < 0.0001) improvement in the students’ higher levels of cognition, whereas the lower orders of cognition remained unaffected (P = 0.2048). CONCLUSIONS: CBL promotes active learning and helps in the development of critical thinking and analysis in undergraduate medical students. Although it is resource-intensive, an attempt should be made to incorporate it along with lectures in clinically important topics. PMID:28852665
Learning a Markov Logic network for supervised gene regulatory network inference.
Brouard, Céline; Vrain, Christel; Dubois, Julie; Castel, David; Debily, Marie-Anne; d'Alché-Buc, Florence
2013-09-12
Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN) that combine features of probabilistic graphical models with the expressivity of first-order logic rules. We propose to learn a Markov Logic network, e.g. a set of weighted rules that conclude on the predicate "regulates", starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black-box model such as a
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.
1979-01-01
The widespread modal analysis of flexible spacecraft and recognition of the poor a priori parameterization possible of the modal descriptions of individual structures have prompted the consideration of adaptive modal control strategies for distributed parameter systems. The current major approaches to computationally efficient adaptive digital control useful in these endeavors are explained in an original, lucid manner using modal second order structure dynamics for algorithm explication. Difficulties in extending these lumped-parameter techniques to distributed-parameter system expansion control are cited.
Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks
NASA Astrophysics Data System (ADS)
Johnson, Joseph
2016-03-01
We have designed and build an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities verses properties of these generalized numbers (called ``metanumbers'') support a transformation of each table into a network among the entities and another network among their properties where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a ``supernet'' of all numerical information supporting new initiatives in AI.
Applying diffusion-based Markov chain Monte Carlo
Paul, Rajib; Berliner, L. Mark
2017-01-01
We examine the performance of a strategy for Markov chain Monte Carlo (MCMC) developed by simulating a discrete approximation to a stochastic differential equation (SDE). We refer to the approach as diffusion MCMC. A variety of motivations for the approach are reviewed in the context of Bayesian analysis. In particular, implementation of diffusion MCMC is very simple to set-up, even in the presence of nonlinear models and non-conjugate priors. Also, it requires comparatively little problem-specific tuning. We implement the algorithm and assess its performance for both a test case and a glaciological application. Our results demonstrate that in some settings, diffusion MCMC is a faster alternative to a general Metropolis-Hastings algorithm. PMID:28301529
Modeling Driver Behavior near Intersections in Hidden Markov Model
Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei
2016-01-01
Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers’ behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents. PMID:28009838
Saliency region detection based on Markov absorption probabilities.
Sun, Jingang; Lu, Huchuan; Liu, Xiuping
2015-05-01
In this paper, we present a novel bottom-up salient object detection approach by exploiting the relationship between the saliency detection and the Markov absorption probability. First, we calculate a preliminary saliency map by the Markov absorption probability on a weighted graph via partial image borders as background prior. Unlike most of the existing background prior-based methods which treated all image boundaries as background, we only use the left and top sides as background for simplicity. The saliency of each element is defined as the sum of the corresponding absorption probability by several left and top virtual boundary nodes, which are most similar to it. Second, a better result is obtained by ranking the relevance of the image elements with foreground cues extracted from the preliminary saliency map, which can effectively emphasize the objects against the background, whose computation is processed similarly as that in the first stage and yet substantially different from the former one. At last, three optimization techniques--content-based diffusion mechanism, superpixelwise depression function, and guided filter--are utilized to further modify the saliency map generalized at the second stage, which is proved to be effective and complementary to each other. Both qualitative and quantitative evaluations on four publicly available benchmark data sets demonstrate the robustness and efficiency of the proposed method against 17 state-of-the-art methods.
Modeling Driver Behavior near Intersections in Hidden Markov Model.
Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei
2016-12-21
Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers' behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents.
Hidden Markov latent variable models with multivariate longitudinal data.
Song, Xinyuan; Xia, Yemao; Zhu, Hongtu
2017-03-01
Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use.
Markov models of molecular kinetics: generation and validation.
Prinz, Jan-Hendrik; Wu, Hao; Sarich, Marco; Keller, Bettina; Senne, Martin; Held, Martin; Chodera, John D; Schütte, Christof; Noé, Frank
2011-05-07
Markov state models of molecular kinetics (MSMs), in which the long-time statistical dynamics of a molecule is approximated by a Markov chain on a discrete partition of configuration space, have seen widespread use in recent years. This approach has many appealing characteristics compared to straightforward molecular dynamics simulation and analysis, including the potential to mitigate the sampling problem by extracting long-time kinetic information from short trajectories and the ability to straightforwardly calculate expectation values and statistical uncertainties of various stationary and dynamical molecular observables. In this paper, we summarize the current state of the art in generation and validation of MSMs and give some important new results. We describe an upper bound for the approximation error made by modeling molecular dynamics with a MSM and we show that this error can be made arbitrarily small with surprisingly little effort. In contrast to previous practice, it becomes clear that the best MSM is not obtained by the most metastable discretization, but the MSM can be much improved if non-metastable states are introduced near the transition states. Moreover, we show that it is not necessary to resolve all slow processes by the state space partitioning, but individual dynamical processes of interest can be resolved separately. We also present an efficient estimator for reversible transition matrices and a robust test to validate that a MSM reproduces the kinetics of the molecular dynamics data.
Abdelbary, Ahmed; El-Gazayerly, Omaima N; El-Gendy, Nashwa A; Ali, Adel A
2010-09-01
Trimetazidine dihydrochloride is an effective anti-anginal agent; however, it is freely soluble in water and suffers from a relatively short half-life. To solve this encumbrance, it is a prospective candidate for fabricating trimetazidine extended-release formulations. Trimetazidine extended-release floating tablets were prepared using different hydrophilic matrix forming polymers including HPMC 4000 cps, carbopol 971P, polycarbophil, and guar gum. The tablets were fabricated by dry coating technique. In vitro evaluation of the prepared tablets was performed by the determination of the hardness, friability, content uniformity, and weight variation. The floating lag time and floating duration were also evaluated. Release profile of the prepared tablets was performed and analyzed. Furthermore, a stability study of the floating tablets was carried out at three different temperatures over 12 weeks. Finally, in vivo bioavailability study was done on human volunteers. All tablet formulas achieved < 0.5 min of floating lag time, more than 12 h of floating duration, and extended t (1/2). The drug release in all formulas followed zero-order kinetics. T4 and T8 tablets contained the least polymer concentration and complied with the dissolution requirements for controlled-release dosage forms. These two formulas were selected for further stability studies. T8 exhibited longer expiration date and was chosen for in vivo studies. T8 floating tablets showed an improvement in the drug bioavailability compared to immediate-release tablets (Vastrel® 20 mg).
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms which are unable to reveal OPSMs entirely in NP-complete problem. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequence (ACS) between every two row sequences, and therefore all deep OPSMs will not be missed. Then, an improved data structure for prefix tree was used to store and traverse ACS, and Apriori principle was employed to efficiently mine the frequent sequential pattern. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method. PMID:26161131