Science.gov

Sample records for markov ordering approach

  1. Exact significance test for Markov order

    NASA Astrophysics Data System (ADS)

    Pethel, S. D.; Hahs, D. W.

    2014-02-01

    We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.
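
    The flavor of a surrogate-data significance test can be sketched in a few lines. The sketch below tests only the zeroth-order (i.i.d.) null by shuffling the observed sequence, which preserves symbol counts exactly; the paper's per-shot uniform sampling algorithm for general nth-order constraints is more involved, and the test statistic and parameters here are illustrative assumptions.

```python
import random

def self_transitions(seq):
    """Test statistic: number of i -> i repeats, sensitive to memory."""
    return sum(a == b for a, b in zip(seq, seq[1:]))

def surrogate_pvalue(seq, n_surrogates=99, seed=0):
    """Shuffling preserves symbol counts exactly, so each surrogate is a
    uniform draw from the sequences matching the order-0 properties of seq."""
    rng = random.Random(seed)
    observed = self_transitions(seq)
    hits = 1                      # the observed statistic counts once
    pool = list(seq)
    for _ in range(n_surrogates):
        rng.shuffle(pool)
        hits += self_transitions(pool) >= observed
    return hits / (n_surrogates + 1)

# A strongly persistent two-state chain should reject the i.i.d. null.
rng = random.Random(1)
x = [0]
for _ in range(999):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])
p = surrogate_pvalue(x)
print(p)
```

    Because the statistic's null distribution is built from the surrogates themselves, the resulting p-value is exact for any sample size, which is the property the paper exploits.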

  2. Test to determine the Markov order of a time series.

    PubMed

    Racca, E; Laio, F; Poggi, D; Ridolfi, L

    2007-01-01

    The Markov order of a time series is an important measure of the "memory" of a process, and its knowledge is fundamental for the correct simulation of the characteristics of the process. For this reason, several techniques have been proposed in the past for its estimation. However, most of these methods are rather complex and can often be applied only in the case of Markov chains. Here we propose a simple and robust test to evaluate the Markov order of a time series. Only the first-order moment of the conditional probability density function characterizing the process is used to evaluate the memory of the process itself. This measure is called the "expected value Markov (EVM) order." We show that there is good agreement between the EVM order and the known Markov order of some synthetic time series.

  3. Building Higher-Order Markov Chain Models with EXCEL

    ERIC Educational Resources Information Center

    Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.

    2004-01-01

    Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
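
    As a plain-Python illustration (not the note's parsimonious parameterization or its EXCEL implementation), a higher-order Markov chain for a categorical sequence can be estimated by simple frequency counts over contexts of the last two states:

```python
from collections import Counter, defaultdict

def fit_order2(seq):
    """Estimate P(next | two preceding states) by relative frequencies."""
    counts = defaultdict(Counter)
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[(a, b)][c] += 1
    return {ctx: {s: n / sum(cnt.values()) for s, n in cnt.items()}
            for ctx, cnt in counts.items()}

seq = list("ABABABCABABABCABABABC")
model = fit_order2(seq)
print(model[('A', 'B')])  # e.g. P(A | A,B) = 2/3, P(C | A,B) = 1/3
```

    The drawback this brute-force tabulation makes obvious is the exponential growth of contexts with the order, which is exactly what the paper's more economical model formulation addresses.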

  4. State space orderings for Gauss-Seidel in Markov chains revisited

    SciTech Connect

    Dayar, T.

    1996-12-31

    Symmetric state space orderings of a Markov chain may be used to reduce the magnitude of the subdominant eigenvalue of the (Gauss-Seidel) iteration matrix. Orderings that maximize the elemental mass or the number of nonzero elements in the dominant term of the Gauss-Seidel splitting (that is, the term approximating the coefficient matrix) do not necessarily converge faster. An ordering of a Markov chain that satisfies Property-R is semi-convergent. On the other hand, there are semi-convergent symmetric state space orderings that do not satisfy Property-R. For a given ordering, a simple approach for checking Property-R is shown. An algorithm that orders the states of a Markov chain so as to increase the likelihood of satisfying Property-R is presented. The computational complexity of the ordering algorithm is less than that of a single Gauss-Seidel iteration (for sparse matrices). In doing all this, the aim is to gain an insight for faster converging orderings. Results from a variety of applications improve the confidence in the algorithm.
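
    To make the setting concrete, here is a minimal sketch (with an illustrative 3-state chain, not an example from the paper) of Gauss-Seidel sweeps for the stationary vector π = πP; the `order` argument is the kind of state space ordering whose effect on convergence the paper studies:

```python
import numpy as np

def gauss_seidel_stationary(P, order, sweeps=500):
    """Gauss-Seidel sweeps for pi = pi P, visiting states in `order`."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(sweeps):
        for i in order:
            incoming = sum(pi[j] * P[j, i] for j in range(n) if j != i)
            pi[i] = incoming / (1.0 - P[i, i])
        pi /= pi.sum()                    # renormalize after each sweep
    return pi

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
pi = gauss_seidel_stationary(P, order=[0, 1, 2])
print(pi, np.allclose(pi @ P, pi))
```

    Permuting `order` corresponds to a symmetric permutation of the chain; as the abstract notes, which permutations converge (and how fast) is the nontrivial question.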

  5. Kinetics and thermodynamics of first-order Markov chain copolymerization

    NASA Astrophysics Data System (ADS)

    Gaspard, P.; Andrieux, D.

    2014-07-01

    We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.

  6. First and second order semi-Markov chains for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Prattico, F.; Petroni, F.; D'Amico, G.

    2012-04-01

    Markov chain with different number of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. Semi-Markov processes (SMPs) are a wide class of stochastic processes which generalize both Markov chains and renewal processes at the same time. Their main advantage is the ability to use any type of waiting-time distribution for modeling the time to a transition from one state to another. This greater flexibility has a price to pay: the model parameters, which are more numerous, require more data to estimate. Data availability is not an issue in wind speed studies; therefore, semi-Markov models can be used in a statistically efficient way. In this work we present three different semi-Markov chain models. The first is a first-order SMP where the transition probabilities between two speed states (at times Tn and Tn-1) depend on the initial state (the state at Tn-1), the final state (the state at Tn), and the waiting time (given by t=Tn-Tn-1). The second is a second-order SMP where the transition probabilities also depend on the state the wind speed was in before the initial state (the state at Tn-2). The last is also a second-order SMP where the transition probabilities depend on the three states at Tn-2, Tn-1 and Tn and on the waiting times t_1=Tn-1-Tn-2 and t_2=Tn-Tn-1. The three models are used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time-lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated through a simple Markov chain. [1] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling
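
    The first-order semi-Markov mechanism can be sketched as an embedded Markov chain plus a state-dependent holding-time distribution. The states, transition probabilities, and Weibull parameters below are hypothetical, chosen only to show the structure:

```python
import random

def simulate_semi_markov(P, shape, scale, steps, seed=0):
    """First-order semi-Markov chain: Markov transitions between wind-speed
    states plus a state-dependent Weibull holding (waiting) time."""
    rng = random.Random(seed)
    states = list(P)
    s, series = states[0], []
    while len(series) < steps:
        hold = max(1, round(rng.weibullvariate(scale[s], shape[s])))
        series.extend([s] * hold)          # remain in s for the holding time
        s = rng.choices(states, weights=[P[s][t] for t in states])[0]
    return series[:steps]

# Hypothetical three-state setup (calm / moderate / windy); the embedded
# chain has zero self-transitions, as in a semi-Markov formulation.
P = {'calm':     {'calm': 0.0, 'moderate': 0.8, 'windy': 0.2},
     'moderate': {'calm': 0.5, 'moderate': 0.0, 'windy': 0.5},
     'windy':    {'calm': 0.3, 'moderate': 0.7, 'windy': 0.0}}
shape = {'calm': 1.5, 'moderate': 1.2, 'windy': 0.9}
scale = {'calm': 4.0, 'moderate': 3.0, 'windy': 2.0}
series = simulate_semi_markov(P, shape, scale, steps=500)
print(series[:8])
```

    Swapping the Weibull for any other waiting-time distribution is the flexibility the abstract emphasizes; a plain Markov chain corresponds to the special case of geometric holding times.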

  7. Post processing with first- and second-order hidden Markov models

    NASA Astrophysics Data System (ADS)

    Taghva, Kazem; Poudel, Srijana; Malreddy, Spandana

    2013-01-01

    In this paper, we present the implementation and evaluation of first order and second order Hidden Markov Models to identify and correct OCR errors in the post processing of books. Our experiments show that the first order model approximately corrects 10% of the errors with 100% precision, while the second order model corrects a higher percentage of errors with much lower precision.

  8. Post processing of optically recognized text via second order hidden Markov model

    NASA Astrophysics Data System (ADS)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system on Optical Character Recognition(OCR) generated text. Second Order Hidden Markov Model (HMM) approach is used to detect and correct the OCR related errors. The reason for choosing the 2nd order HMM is to keep track of the bigrams so that the model can represent the system more accurately. Based on experiments with training data of 159,733 characters and testing of 5,688 characters, the model was able to correct 43.38 % of the errors with a precision of 75.34 %. However, the precision value indicates that the model introduced some new errors, decreasing the correction percentage to 26.4%.

  9. Deciding when to intervene: a Markov decision process approach.

    PubMed

    Magni, P; Quaglini, S; Marchetti, M; Barosi, G

    2000-12-01

    The aim of this paper is to point out the difference between static and dynamic approaches to choosing the optimal time for intervention. The paper demonstrates that classical approaches, such as decision trees and influence diagrams, hardly cope with dynamic problems: they cannot simulate all the real-world strategies and consequently can only calculate suboptimal solutions. A dynamic formalism based on Markov decision processes (MDPs) is then proposed and applied to a medical problem: prophylactic surgery in mild hereditary spherocytosis. The paper compares the proposed approach with a static approach on the same medical problem. The policy provided by the dynamic approach achieved significant gain over the static policy by delaying the intervention time in some categories of patients. The calculations are carried out with DT-Planner, a graphical decision aid specifically built for dealing with dynamic decision processes.
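
    The dynamic formulation can be sketched with a toy value iteration in which "wait" lets the disease progress stochastically while "intervene" is terminal. All states, transition probabilities, and rewards below are invented for illustration and are not taken from the spherocytosis model:

```python
# Toy MDP for intervention timing, solved by value iteration.
states = ['mild', 'moderate', 'severe']
gamma = 0.95
wait_P = {'mild':     {'mild': 0.9, 'moderate': 0.1,  'severe': 0.0},
          'moderate': {'mild': 0.0, 'moderate': 0.85, 'severe': 0.15},
          'severe':   {'mild': 0.0, 'moderate': 0.0,  'severe': 1.0}}
wait_reward = {'mild': 1.0, 'moderate': 0.6, 'severe': 0.1}
intervene_reward = {'mild': 14.0, 'moderate': 14.0, 'severe': 12.0}

V = {s: 0.0 for s in states}
for _ in range(500):                      # value iteration to convergence
    V = {s: max(intervene_reward[s],
                wait_reward[s] + gamma * sum(wait_P[s][t] * V[t] for t in states))
         for s in states}
policy = {s: ('intervene'
              if intervene_reward[s] >= wait_reward[s]
              + gamma * sum(wait_P[s][t] * V[t] for t in states)
              else 'wait')
          for s in states}
print(policy)
```

    The optimal policy is state-dependent (wait while mild, intervene once the disease progresses), which is exactly the kind of delayed-intervention strategy a static decision tree cannot represent.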

  10. Heisenberg picture approach to the stability of quantum Markov systems

    SciTech Connect

    Pan, Yu; Miao, Zibo E-mail: zibo.miao@anu.edu.au; Amini, Hadis; Gough, John; Ugrinovskii, Valery; James, Matthew R.

    2014-06-15

    Quantum Markovian systems, modeled as unitary dilations in the quantum stochastic calculus of Hudson and Parthasarathy, have become standard in current quantum technological applications. This paper investigates the stability theory of such systems. Lyapunov-type conditions in the Heisenberg picture are derived in order to stabilize the evolution of system operators as well as the underlying dynamics of the quantum states. In particular, using the quantum Markov semigroup associated with this quantum stochastic differential equation, we derive sufficient conditions for the existence and stability of a unique and faithful invariant quantum state. Furthermore, this paper proves the quantum invariance principle, which extends the LaSalle invariance principle to quantum systems in the Heisenberg picture. These results are formulated in terms of algebraic constraints suitable for engineering quantum systems that are used in coherent feedback networks.

  11. Detecting memory and structure in human navigation patterns using Markov chain models of varying order.

    PubMed

    Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus

    2014-01-01

    One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
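
    One standard way to weigh model fit against the complexity of higher-order chains, in the spirit of the inference methods surveyed here, is the Bayesian Information Criterion. The sketch below (an illustration with a synthetic binary chain, not the paper's navigation data or its full array of methods) fits Markov chains of increasing order by maximum likelihood and penalizes their parameter counts:

```python
import math, random
from collections import Counter

def log_likelihood(seq, order):
    """Log-likelihood of an order-k Markov chain with MLE transition probs."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        ctx_counts[ctx] += 1
        trans_counts[ctx + (seq[i],)] += 1
    return sum(n * math.log(n / ctx_counts[key[:-1]])
               for key, n in trans_counts.items())

def bic_order(seq, n_symbols, max_order=3):
    """Pick the Markov order minimizing BIC = k_params*ln(n) - 2*logL."""
    scores = {k: (n_symbols ** k) * (n_symbols - 1) * math.log(len(seq))
                 - 2 * log_likelihood(seq, k)
              for k in range(max_order + 1)}
    return min(scores, key=scores.get)

# Synthetic first-order binary chain: BIC should recover order 1.
rng = random.Random(42)
P = {0: [0.8, 0.2], 1: [0.3, 0.7]}
seq = [0]
for _ in range(4999):
    seq.append(rng.choices([0, 1], weights=P[seq[-1]])[0])
k_hat = bic_order(seq, n_symbols=2)
print(k_hat)
```

    The `(n_symbols ** k) * (n_symbols - 1)` penalty grows exponentially with the order, which mirrors the paper's finding that the complexity of higher-order models grows faster than their utility.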

  12. Statistical identification with hidden Markov models of large order splitting strategies in an equity market

    NASA Astrophysics Data System (ADS)

    Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.

    2010-07-01

    Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the time, volume and number of transaction size distributions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.

  13. A Bayesian Hidden Markov Model-based approach for anomaly detection in electronic systems

    NASA Astrophysics Data System (ADS)

    Dorj, E.; Chen, C.; Pecht, M.

    Early detection of anomalies in any system or component prevents impending failures and enhances performance and availability. The complex architecture of electronics, the interdependency of component functionalities, and the miniaturization of most electronic systems make it difficult to detect and analyze anomalous behaviors. A Hidden Markov Model-based classification technique determines unobservable hidden behaviors of complex and remotely inaccessible electronic systems using observable signals. This paper presents a data-driven approach for anomaly detection in electronic systems based on a Bayesian Hidden Markov Model classification technique. The posterior parameters of the Hidden Markov Models are estimated using the conjugate prior method. An application of the developed Bayesian Hidden Markov Model-based anomaly detection approach is presented for detecting anomalous behavior in Insulated Gate Bipolar Transistors using experimental data. The detection results illustrate that the developed anomaly detection approach can help detect anomalous behaviors in electronic systems, which can help prevent system downtime and catastrophic failures.

  14. Modeling anomalous radar propagation using first-order two-state Markov chains

    NASA Astrophysics Data System (ADS)

    Haddad, B.; Adane, A.; Mesnard, F.; Sauvageot, H.

    In this paper, it is shown that radar echoes due to anomalous propagations (AP) can be modeled using Markov chains. For this purpose, images obtained in southwestern France by means of an S-band meteorological radar recorded every 5 min in 1996 were considered. The daily mean surfaces of AP appearing in these images are sorted into two states and their variations are then represented by a binary random variable. The Markov transition matrix, the 1-day-lag autocorrelation coefficient as well as the long-term probability of having each of both states are calculated on a monthly basis. The same kind of modeling was also applied to the rainfall observed in the radar dataset under study. The first-order two-state Markov chains are then found to fit the daily variations of either AP or rainfall areas very well. For each month of the year, the surfaces filled by both types of echo follow similar stochastic distributions, but their autocorrelation coefficient is different. Hence, it is suggested that this coefficient is a discriminant factor which could be used, among other criteria, to improve the identification of AP in radar images.
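
    A first-order two-state chain of the kind used here is fully described by its transition matrix, from which the stationary probabilities and the lag-1 autocorrelation coefficient follow in closed form. A minimal sketch, with a hypothetical daily binary indicator rather than the radar data:

```python
import numpy as np

def fit_two_state(x):
    """MLE transition matrix, stationary probabilities, and lag-1
    autocorrelation for a first-order two-state Markov chain."""
    P = np.zeros((2, 2))
    for a, b in zip(x[:-1], x[1:]):
        P[a, b] += 1                  # count observed transitions
    P /= P.sum(axis=1, keepdims=True)
    p01, p10 = P[0, 1], P[1, 0]
    stationary = np.array([p10, p01]) / (p01 + p10)
    rho1 = 1.0 - p01 - p10            # lag-1 autocorrelation of a 2-state chain
    return P, stationary, rho1

# Hypothetical daily indicator (1 = AP echoes present on that day).
series = [0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]
P, pi, rho1 = fit_two_state(series)
print(P, pi, rho1)
```

    The quantity `rho1` is the 1-day-lag autocorrelation coefficient the abstract singles out as the discriminant between AP and rainfall areas: two series can share the same stationary probabilities yet differ in `rho1`.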

  15. A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes

    NASA Technical Reports Server (NTRS)

    Carpenter, Russell; Lee, Taesul

    2008-01-01

    Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
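
    The stability argument can be seen directly in the covariance recursions. For a discrete first-order Gauss-Markov process x_{k+1} = a x_k + w_k with a = exp(-dt/tau), the variance recursion is bounded, whereas the random walk (a = 1) grows without limit. The time constant and noise values below are illustrative assumptions, not mission parameters:

```python
import math

# Covariance propagation: first-order Gauss-Markov vs random walk.
dt, tau, q = 1.0, 100.0, 1e-4     # hypothetical step, time constant, noise
a = math.exp(-dt / tau)
P_gm, P_rw = 0.0, 0.0
for _ in range(100000):
    P_gm = a * a * P_gm + q       # Gauss-Markov: converges to q / (1 - a^2)
    P_rw = P_rw + q               # random walk: grows linearly, may overflow
steady = q / (1 - a * a)
print(P_gm, steady, P_rw)
```

    Over short horizons (a few steps) the two recursions are nearly indistinguishable, which is why a Gauss-Markov model can approximate the existing random walk clock models while remaining numerically safe over long outages.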

  16. A Hidden Markov Approach to Modeling Interevent Earthquake Times

    NASA Astrophysics Data System (ADS)

    Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.

    2003-12-01

    A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
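
    Using the numbers reported above (two states with exponential mean waiting times of roughly 5 and 95 days, occupied about one fourth and three fourths of the time), the implied steady-state interevent distribution is a two-component exponential mixture:

```python
import numpy as np

means = np.array([5.0, 95.0])      # exponential means (days) from the 2-state fit
weights = np.array([0.25, 0.75])   # approximate steady-state occupancies

def mixture_pdf(t):
    """Steady-state interevent-time density implied by the fitted model."""
    return float(weights @ (np.exp(-t / means) / means))

mean_wait = float(weights @ means)
print(mean_wait)   # overall mean interevent time, in days
```

    The overall mean of 0.25*5 + 0.75*95 = 72.5 days illustrates how the mixture departs from a single-rate Poisson model: short interevent times are far more frequent than a single exponential with that mean would predict.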

  17. A Fuzzy Markov approach for assessing groundwater pollution potential for landfill siting.

    PubMed

    Chen, Wei-Yea; Kao, Jehng-Jung

    2002-04-01

    This study presents a Fuzzy Markov groundwater pollution potential assessment approach to facilitate landfill siting analysis. Landfill siting is constrained by various regulations and is complicated by the uncertainty of groundwater related factors. The conventional static rating method cannot properly depict the potential impact of pollution on a groundwater table because the groundwater table level fluctuates. A Markov chain model is a dynamic model that can be viewed as a hybrid of probability and matrix models. The probability matrix of the Markov chain model is determined based on the groundwater table elevation time series. The probability reflects the likelihood of the groundwater table changing between levels. A fuzzy set method is applied to estimate the degree of pollution potential, and a case study demonstrates the applicability of the proposed approach. The short- and long-term pollution potential information provided by the proposed approach is expected to enhance landfill siting decisions.

  18. A second-order Markov process for modeling diffusive motion through spatial discretization.

    PubMed

    Sant, Marco; Papadopoulos, George K; Theodorou, Doros N

    2008-01-14

    A new "mesoscopic" stochastic model has been developed to describe the diffusive behavior of a system of particles at equilibrium. The model is based on discretizing space into slabs by drawing equispaced parallel planes along a coordinate direction. A central role is played by the probability that a particle exits a slab via the face opposite to the one through which it entered (transmission probability), as opposed to exiting via the same face through which it entered (reflection probability). A simple second-order Markov process invoking this probability is developed, leading to an expression for the self-diffusivity, applicable for large slab widths, consistent with a continuous formulation of diffusional motion. This model is validated via molecular dynamics simulations in a bulk system of soft spheres across a wide range of densities.

  19. A Markov random field approach for microstructure synthesis

    NASA Astrophysics Data System (ADS)

    Kumar, A.; Nguyen, L.; DeGraef, M.; Sundararaghavan, V.

    2016-03-01

    We test the notion that many microstructures have an underlying stationary probability distribution. The stationary probability distribution is ubiquitous: we know that different windows taken from a polycrystalline microstructure are generally ‘statistically similar’. To enable computation of such a probability distribution, microstructures are represented in the form of undirected probabilistic graphs called Markov Random Fields (MRFs). In the model, pixels take up integer or vector states and interact with multiple neighbors over a window. Using this lattice structure, algorithms are developed to sample the conditional probability density for the state of each pixel given the known states of its neighboring pixels. The sampling is performed using reference experimental images. 2D microstructures are artificially synthesized using the sampled probabilities. Statistical features such as grain size distribution and autocorrelation functions closely match with those of the experimental images. The mechanical properties of the synthesized microstructures were computed using the finite element method and were also found to match the experimental values.

  20. Regime Switching Modeling of Substance Use: Time-Varying and Second-Order Markov Models and Individual Probability Plots

    PubMed Central

    Neale, Michael C.; Clark, Shaunna L.; Dolan, Conor V.; Hunter, Michael D.

    2015-01-01

    A linear latent growth curve mixture model with regime switching is extended in two ways. Previously, the matrix of first-order Markov switching probabilities was specified to be time-invariant, regardless of the pair of occasions being considered. The first extension, time-varying transitions, specifies different Markov transition matrices between each pair of occasions. The second extension is second-order time-invariant Markov transition probabilities, such that the probability of switching depends on the states at the two previous occasions. The models are implemented using the R package OpenMx, which facilitates data handling, parallel computation, and further model development. It also enables the extraction and display of relative likelihoods for every individual in the sample. The models are illustrated with previously published data on alcohol use observed on four occasions as part of the National Longitudinal Survey of Youth, and demonstrate improved fit to the data. PMID:26924921

  1. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on the initial condition, a Genetic Algorithm was developed. This algorithm is characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this choice is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.

  2. Compound extremes in a changing climate - a Markov chain approach

    NASA Astrophysics Data System (ADS)

    Sedlmeier, Katrin; Mieruch, Sebastian; Schädler, Gerd; Kottmeier, Christoph

    2016-11-01

    Studies using climate models and observed trends indicate that extreme weather has changed and may continue to change in the future. The potential impact of extreme events such as heat waves or droughts depends not only on their number of occurrences but also on "how these extremes occur", i.e., the interplay and succession of the events. These quantities are quite unexplored, for past changes as well as for future changes and call for sophisticated methods of analysis. To address this issue, we use Markov chains for the analysis of the dynamics and succession of multivariate or compound extreme events. We apply the method to observational data (1951-2010) and an ensemble of regional climate simulations for central Europe (1971-2000, 2021-2050) for two types of compound extremes, heavy precipitation and cold in winter and hot and dry days in summer. We identify three regions in Europe, which turned out to be likely susceptible to a future change in the succession of heavy precipitation and cold in winter, including a region in southwestern France, northern Germany and in Russia around Moscow. A change in the succession of hot and dry days in summer can be expected for regions in Spain and Bulgaria. The susceptibility to a dynamic change of hot and dry extremes in the Russian region will probably decrease.

  3. Markov-chain approach to the distribution of ancestors in species of biparental reproduction

    NASA Astrophysics Data System (ADS)

    Caruso, M.; Jarne, C.

    2014-08-01

    We studied how to obtain a distribution for the number of ancestors in species of sexual reproduction. Present models concentrate on estimating the distribution of repetitions of ancestors in genealogical trees. It has been shown that it is not possible to reconstruct the genealogical history of each species along all its generations by means of a geometric progression. This analysis demonstrates that it is possible to rebuild the tree of progenitors by modeling the problem with a Markov chain. For each generation, the maximum number of possible ancestors is different, which poses serious problems for the resolution. We found a solution through a dilation of the sample space, although the distribution defined there takes smaller values with respect to the initial problem. In order to correct the distribution for each generation, we introduced invariance under a gauge (local) group of dilations. These ideas can be used to study the interaction of several processes and provide a new approach to the problem of the common ancestor. In the same direction, this model also provides some elements that can be used to improve models of animal reproduction.

  4. A Markov Chain Approach to Probabilistic Swarm Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Bayard, David S.

    2012-01-01

    This paper introduces a probabilistic guidance approach for the coordination of swarms of autonomous agents. The main idea is to drive the swarm to a prescribed density distribution in a prescribed region of the configuration space. In its simplest form, the probabilistic approach is completely decentralized and does not require communication or collaboration between agents. Agents make statistically independent probabilistic decisions based solely on their own state, and these decisions ultimately guide the swarm to the desired density distribution in the configuration space. In addition to being completely decentralized, the probabilistic guidance approach has a novel autonomous self-repair property: once the desired swarm density distribution is attained, the agents automatically repair any damage to the distribution without collaborating and without any knowledge about the damage.
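
    The core mechanism can be sketched as density propagation under a Markov matrix whose stationary distribution is the prescribed swarm density. The sketch uses a standard Metropolis-Hastings construction over four bins; the bin count, proposal, and target density are illustrative, not values from the paper:

```python
import numpy as np

def metropolis_chain(pi, proposal):
    """Column-stochastic transition matrix with stationary distribution pi,
    built with Metropolis-Hastings acceptance from a symmetric proposal."""
    n = len(pi)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[j, i] = proposal[j, i] * min(1.0, pi[j] / pi[i])
        M[i, i] = 1.0 - M[:, i].sum()     # rejected moves stay in bin i
    return M

pi = np.array([0.1, 0.2, 0.3, 0.4])       # desired swarm density over 4 bins
M = metropolis_chain(pi, np.full((4, 4), 0.25))
x = np.array([1.0, 0.0, 0.0, 0.0])        # all agents start in bin 0
for _ in range(200):
    x = M @ x                             # each agent transitions independently
print(np.round(x, 6))                     # converges to the prescribed density
```

    Because each agent applies the same transition probabilities independently, removing or perturbing agents only changes the current density `x`, and repeated application of `M` pulls the density back toward `pi` — the self-repair property described above.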

  5. A Graph-Algorithmic Approach for the Study of Metastability in Markov Chains

    NASA Astrophysics Data System (ADS)

    Gan, Tingyue; Cameron, Maria

    2017-01-01

    Large continuous-time Markov chains with exponentially small transition rates arise in modeling complex systems in physics, chemistry, and biology. We propose a constructive graph-algorithmic approach to determine the sequence of critical timescales at which the qualitative behavior of a given Markov chain changes, and give an effective description of the dynamics on each of them. This approach is valid for both time-reversible and time-irreversible Markov processes, with or without symmetry. Central to this approach are two graph algorithms, Algorithm 1 and Algorithm 2, for obtaining the sequences of the critical timescales and the hierarchies of Typical Transition Graphs or T-graphs indicating the most likely transitions in the system without and with symmetry, respectively. The sequence of critical timescales includes the subsequence of the reciprocals of the real parts of eigenvalues. Under a certain assumption, we prove sharp asymptotic estimates for eigenvalues (including pre-factors) and show how one can extract them from the output of Algorithm 1. We discuss the relationship between Algorithms 1 and 2 and explain how one needs to interpret the output of Algorithm 1 if it is applied in the case with symmetry instead of Algorithm 2. Finally, we analyze an example motivated by R. D. Astumian's model of the dynamics of kinesin, a molecular motor, by means of Algorithm 2.

  6. Medical Inpatient Journey Modeling and Clustering: A Bayesian Hidden Markov Model Based Approach

    PubMed Central

    Huang, Zhengxing; Dong, Wei; Wang, Fei; Duan, Huilong

    2015-01-01

    Modeling and clustering medical inpatient journeys is useful to healthcare organizations for a number of reasons, including reorganizing inpatient journeys into a form that is more convenient to understand and browse. In this study, we present a probabilistic model-based approach to model and cluster medical inpatient journeys. Specifically, we exploit a Bayesian Hidden Markov Model based approach to transform medical inpatient journeys into a probabilistic space, which can be seen as a richer representation of the inpatient journeys to be clustered. Then, using hierarchical clustering on the matrix of similarities, inpatient journeys can be clustered into different categories with respect to their clinical and temporal characteristics. We evaluated the proposed approach on a real clinical data set pertaining to the unstable angina treatment process. The experimental results reveal that our method can identify and model latent treatment topics underlying personalized inpatient journeys and yields impressive clustering quality. PMID:26958200
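The clustering step can be sketched as follows, with a small hypothetical precomputed similarity matrix standing in for the Bayesian-HMM-derived similarities between journeys:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise similarities between 6 inpatient journeys
# (in the paper these come from a Bayesian HMM representation).
S = np.array([
    [1.0, 0.9, 0.8, 0.2, 0.1, 0.2],
    [0.9, 1.0, 0.8, 0.2, 0.2, 0.1],
    [0.8, 0.8, 1.0, 0.1, 0.2, 0.2],
    [0.2, 0.2, 0.1, 1.0, 0.9, 0.8],
    [0.1, 0.2, 0.2, 0.9, 1.0, 0.9],
    [0.2, 0.1, 0.2, 0.8, 0.9, 1.0],
])

# Convert similarity to distance and run average-linkage clustering.
D = 1.0 - S
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # journeys 0-2 and 3-5 fall into two clusters
```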

  7. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described, and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, whereas the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model could reduce the dynamic errors by 20% more than the random walk model.
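A minimal sketch of the fusion idea, assuming a scalar first-order Gauss-Markov rate model and hypothetical noise figures (not the paper's hardware parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 6            # number of gyroscopes in the array
dt = 0.01
tau = 1.0        # correlation time of the first-order Markov rate model (assumed)
a = np.exp(-dt / tau)
q = 0.1**2 * (1 - a**2)   # process noise keeping steady-state variance 0.1^2
r = 0.5**2                # per-gyro measurement noise variance (assumed)

# Simulate a slowly varying true rate and N noisy gyro outputs.
steps = 2000
x_true = np.zeros(steps)
for k in range(1, steps):
    x_true[k] = a * x_true[k - 1] + np.sqrt(q) * rng.standard_normal()
z = x_true[:, None] + np.sqrt(r) * rng.standard_normal((steps, N))

# Scalar Kalman filter with first-order Markov dynamics; updating with the
# average of N identical gyros is equivalent to N independent updates.
x_hat, P = 0.0, 1.0
est = np.empty(steps)
for k in range(steps):
    # predict
    x_hat, P = a * x_hat, a * a * P + q
    # update: averaged measurement has variance r / N
    S = P + r / N
    K = P / S
    x_hat = x_hat + K * (z[k].mean() - x_hat)
    P = (1 - K) * P
    est[k] = x_hat

rmse_single = np.sqrt(np.mean((z[:, 0] - x_true) ** 2))
rmse_fused = np.sqrt(np.mean((est - x_true) ** 2))
print(rmse_fused < rmse_single)   # fused estimate is more accurate
```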

  8. On the reliability of NMR relaxation data analyses: a Markov Chain Monte Carlo approach.

    PubMed

    Abergel, Daniel; Volpato, Andrea; Coutant, Eloi P; Polimeno, Antonino

    2014-09-01

    The analysis of NMR relaxation data is revisited along the lines of a Bayesian approach. Using a Markov Chain Monte Carlo strategy of data fitting, we investigate conditions under which relaxation data can be effectively interpreted in terms of internal dynamics. The limitations to the extraction of kinetic parameters that characterize internal dynamics are analyzed, and we show that extracting characteristic time scales shorter than a few tens of ps is very unlikely. However, using MCMC methods, reliable estimates of the marginal probability distributions and estimators (average, standard deviations, etc.) can still be obtained for subsets of the model parameters. Thus, unlike more conventional strategies of data analysis, the method avoids a model selection process. In addition, it indicates what information may be extracted from the data, but also what cannot.

  9. A computationally efficient approach for hidden-Markov model-augmented fingerprint-based positioning

    NASA Astrophysics Data System (ADS)

    Roth, John; Tummala, Murali; McEachen, John

    2016-09-01

    This paper presents a computationally efficient approach for mobile subscriber position estimation in wireless networks. A method of data scaling assisted by timing adjust is introduced in fingerprint-based location estimation under a framework which allows for minimising computational cost. The proposed method maintains a comparable level of accuracy to the traditional case where no data scaling is used and is evaluated in a simulated environment under varying channel conditions. The proposed scheme is studied when it is augmented by a hidden-Markov model to match the internal parameters to the prevailing channel conditions, thus minimising computational cost while maximising accuracy. Furthermore, the timing adjust quantity, available in modern wireless signalling messages, is shown to further reduce computational cost and increase accuracy when available. The results may be seen as a significant step towards integrating advanced position-based modelling with power-sensitive mobile devices.

  10. Segmentation of angiodysplasia lesions in WCE images using a MAP approach with Markov Random Fields.

    PubMed

    Vieira, Pedro M; Goncalves, Bruno; Goncalves, Carla R; Lima, Carlos S

    2016-08-01

    This paper deals with the segmentation of angiodysplasias in wireless capsule endoscopy images. These lesions are the cause of almost 10% of all gastrointestinal bleeding episodes, and their detection using the available software shows low sensitivity. This work proposes automatic selection of a ROI using an image segmentation module based on the MAP approach, where an accelerated version of the EM algorithm is used to iteratively estimate the model parameters. Spatial context is modeled in the prior probability density function using Markov Random Fields. The color space used was CIELab, especially the a* component, which highlights this type of lesion most strongly. The proposed method is the first to address this specific type of lesion; when compared to other state-of-the-art segmentation methods, it nearly doubles their performance.
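The EM parameter-estimation step can be sketched on synthetic intensities with a two-class Gaussian mixture; the MRF spatial prior and the accelerated EM variant are omitted, and all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "a*-channel" pixel intensities: background vs. lesion classes.
pix = np.concatenate([rng.normal(10, 2, 3000), rng.normal(30, 3, 1000)])

# EM for a two-component Gaussian mixture (spatial MRF prior omitted here).
w, mu, sd = np.array([0.5, 0.5]), np.array([5.0, 40.0]), np.array([5.0, 5.0])
for _ in range(100):
    # E-step: class responsibilities for each pixel
    lik = w * np.exp(-0.5 * ((pix[:, None] - mu) / sd) ** 2) / sd
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means, and standard deviations
    w = resp.mean(axis=0)
    mu = (resp * pix[:, None]).sum(axis=0) / resp.sum(axis=0)
    sd = np.sqrt((resp * (pix[:, None] - mu) ** 2).sum(axis=0) / resp.sum(axis=0))

labels = resp.argmax(axis=1)   # MAP class per pixel under the fitted mixture
print(np.round(mu, 1))         # close to the true class means (10, 30)
```

In the paper, the class posterior would additionally be weighted by an MRF prior over neighboring pixel labels before taking the MAP decision.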

  11. Robust Filtering for Nonlinear Nonhomogeneous Markov Jump Systems by Fuzzy Approximation Approach.

    PubMed

    Yin, Yanyan; Shi, Peng; Liu, Fei; Teo, Kok Lay; Lim, Cheng-Chew

    2015-09-01

    This paper addresses the problem of robust fuzzy L2-L∞ filtering for a class of uncertain nonlinear discrete-time Markov jump systems (MJSs) with nonhomogeneous jump processes. The Takagi-Sugeno fuzzy model is employed to represent such a nonlinear nonhomogeneous MJS with norm-bounded parameter uncertainties. In order to reduce conservatism, a polytope Lyapunov function which evolves as a convex function is employed; then, under the designed mode-dependent and variation-dependent fuzzy filter which includes the membership functions, a sufficient condition is presented to ensure that the filtering error dynamic system is stochastically stable and that it has a prescribed L2-L∞ performance index. Two simulated examples are given to demonstrate the effectiveness and advantages of the proposed techniques.

  12. Modeling Dyadic Processes Using Hidden Markov Models: A Time Series Approach to Mother-Infant Interactions during Infant Immunization

    ERIC Educational Resources Information Center

    Stifter, Cynthia A.; Rovine, Michael

    2015-01-01

    The focus of the present longitudinal study, to examine mother-infant interaction during the administration of immunizations at 2 and 6 months of age, used hidden Markov modelling, a time series approach that produces latent states to describe how mothers and infants work together to bring the infant to a soothed state. Results revealed a…

  13. An information theoretic approach for generating an aircraft avoidance Markov Decision Process

    NASA Astrophysics Data System (ADS)

    Weinert, Andrew J.

    Developing a collision avoidance system that can meet the safety standards required of commercial aviation is challenging. A dynamic programming approach to collision avoidance has been developed to optimize and generate logics that are robust to the complex dynamics of the national airspace. The current approach represents the aircraft avoidance problem as a Markov Decision Process and independently optimizes horizontal and vertical maneuver avoidance logics. This separation is a result of the memory requirements of each logic: simply combining the logics would result in a significantly larger representation, and the "curse of dimensionality" makes it computationally infeasible to optimize this larger representation. However, existing and future collision avoidance systems have mostly defined the decision process by hand. In response, a simulation-based framework was built to better understand how each potential state quantifies the aircraft avoidance problem with regard to safety and operational components. The framework leverages recent advances in signal processing and databases, while enabling the highest fidelity analysis of Monte Carlo aircraft encounter simulations to date. This framework enabled the calculation of how well each state of the decision process quantifies the collision risk, along with the associated memory requirements. Using this analysis, a collision avoidance logic that leverages both horizontal and vertical actions was built and optimized using this simulation-based approach.
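The underlying MDP optimization can be illustrated with a toy value-iteration sketch; the states, actions, and costs below are hypothetical and far simpler than any actual avoidance logic:

```python
import numpy as np

# Toy aircraft-avoidance MDP (hypothetical): states are vertical
# separations in {0..4} hundreds of feet; actions climb/hold/descend.
n_states, actions = 5, (-1, 0, 1)
gamma = 0.95

def step(s, a):
    """Deterministic toy dynamics: separation changes by the action."""
    s2 = min(n_states - 1, max(0, s + a))
    reward = -100.0 if s2 == 0 else -abs(a)   # collision penalty, maneuver cost
    return s2, reward

# Value iteration over the full state space.
V = np.zeros(n_states)
for _ in range(500):
    Q = np.array([[step(s, a)[1] + gamma * V[step(s, a)[0]]
                   for a in actions] for s in range(n_states)])
    V = Q.max(axis=1)

policy = [actions[i] for i in Q.argmax(axis=1)]
print(policy)   # climbs away from the collision state, holds when safe
```

A joint horizontal-and-vertical logic multiplies the state space, which is exactly the memory blow-up the abstract describes.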

  14. Use of a Transition Probability/Markov Approach to Improve Geostatistical Simulation of Facies Architecture

    SciTech Connect

    Carle, S.F.

    2000-11-01

    Facies may account for the largest permeability contrasts within the reservoir model at the scale relevant to production. Conditional simulation of the spatial distribution of facies is one of the most important components of building a reservoir model. Geostatistical techniques are widely used to produce realistic and geologically plausible realizations of facies architecture. However, there are two stumbling blocks to the traditional indicator variogram-based approaches: (1) intensive data sets are needed to develop models of spatial variability by empirical curve-fitting to sample indicator (cross-) variograms and to implement ''post-processing'' simulation algorithms; and (2) the prevalent ''sequential indicator simulation'' (SIS) methods do not accurately produce patterns of spatial variability for systems with three or more facies (Seifert and Jensen, 1999). This paper demonstrates an alternative transition probability/Markov approach that emphasizes: (1) Conceptual understanding of the parameters of the spatial variability model, so that geologic insight can support and enhance model development when data are sparse. (2) Mathematical rigor, so that the ''coregionalization'' model (including the spatial cross-correlations) obeys probability law. (3) Consideration of spatial cross-correlation, so that juxtapositional tendencies--how frequently one facies tends to occur adjacent to another facies--are honored.
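The basic object of the approach, a transition probability matrix estimated from a vertical facies sequence, can be sketched as follows (the facies log is hypothetical):

```python
import numpy as np

# Hypothetical vertical facies log (0=sand, 1=silt, 2=clay) from a well.
log = np.array([0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0, 0, 2, 2, 1, 0])

# Empirical one-step vertical transition probability matrix.
n = log.max() + 1
counts = np.zeros((n, n))
for a, b in zip(log[:-1], log[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)

print(np.round(T, 2))   # row i: P(next facies = j | current facies = i)
```

The off-diagonal entries directly encode the juxtapositional tendencies the abstract mentions: how frequently one facies occurs adjacent to another.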

  15. Probabilistic Approach to Computational Algorithms for Finding Stationary Distributions of Markov Chains.

    DTIC Science & Technology

    1986-10-01

    these theorems to find steady-state solutions of Markov chains are analysed. The results obtained in this way are then applied to quasi birth-death processes. Keywords: computations; algorithms; equilibrium equations.

  16. Gold price effect on stock market: A Markov switching vector error correction approach

    NASA Astrophysics Data System (ADS)

    Wai, Phoong Seuk; Ismail, Mohd Tahir; Kun, Sek Siok

    2014-06-01

    Gold is a popular precious metal whose demand is driven not only by practical use but also by its role as an investment commodity. Since the stock market reflects a country's growth, the effect of the gold price on stock market behavior is of interest in this study. Markov Switching Vector Error Correction Models are applied to analyze the relationship between gold price and stock market changes, since real financial data always exhibit regime switching, jumps, or missing data through time. Moreover, there are numerous specifications of Markov Switching Vector Error Correction Models, and this paper compares the intercept-adjusted Markov Switching Vector Error Correction Model with the intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model to determine which best captures the transitions of the time series. Results show that the gold price has a positive relationship with the Malaysian, Thai, and Indonesian stock markets, and that a two-regime intercept-adjusted heteroskedasticity Markov Switching Vector Error Correction Model provides a more significant and reliable result than the intercept-adjusted Markov Switching Vector Error Correction Model.

  17. A Markov decision process approach to temporal modulation of dose fractions in radiation therapy planning.

    PubMed

    Kim, M; Ghate, A; Phillips, M H

    2009-07-21

    The current state of the art in cancer treatment by radiation optimizes beam intensity spatially such that tumors receive high-dose radiation whereas damage to nearby healthy tissues is minimized. It is common practice to deliver the radiation over several weeks, where the daily dose is a small constant fraction of the total planned. Such a 'fractionation schedule' is based on traditional models of radiobiological response where normal tissue cells possess the ability to repair sublethal damage done by radiation. This capability is significantly less prominent in tumors. Recent advances in quantitative functional imaging and biological markers are providing new opportunities to measure patient response to radiation over the treatment course. This opens the door for designing fractionation schedules that take into account the patient's cumulative response to radiation up to a particular treatment day in determining the fraction on that day. We propose a novel approach that, for the first time, mathematically explores the benefits of such fractionation schemes. This is achieved by building a stylized Markov decision process (MDP) model, which incorporates some key features of the problem through intuitive choices of state and action spaces, as well as transition probability and reward functions. The structure of optimal policies for this MDP model is explored through several simple numerical examples.

  18. Strategic level proton therapy patient admission planning: a Markov decision process modeling approach.

    PubMed

    Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase

    2016-01-25

    A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals and penalize the deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights in determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).

  19. A Markov Chain Monte Carlo Approach to Estimate AIDS after HIV Infection.

    PubMed

    Apenteng, Ofosuhene O; Ismail, Noor Azina

    2015-01-01

    The spread of human immunodeficiency virus (HIV) infection and the resulting acquired immune deficiency syndrome (AIDS) is a major health concern in many parts of the world, and mathematical models are commonly applied to understand the spread of the HIV epidemic. To understand the spread of HIV and AIDS cases and their parameters in a given population, it is necessary to develop a theoretical framework that takes into account realistic factors. The current study used this framework to assess the interaction between individuals who developed AIDS after HIV infection and individuals who did not develop AIDS after HIV infection (pre-AIDS). We first investigated how probabilistic parameters affect the model in terms of the HIV and AIDS population over a period of time. We observed that there is a critical threshold parameter, R0, which determines the behavior of the model. If R0 ≤ 1, there is a unique disease-free equilibrium; if R0 < 1, the disease dies out; and if R0 > 1, the disease-free equilibrium is unstable. We also show how a Markov chain Monte Carlo (MCMC) approach could be used as a supplement to forecast the numbers of reported HIV and AIDS cases. An approach using a Monte Carlo analysis is illustrated to understand the impact of model-based predictions in light of uncertain parameters on the spread of HIV. Finally, to examine this framework and demonstrate how it works, a case study was performed of reported HIV and AIDS cases from an annual data set in Malaysia, and then we compared how these approaches complement each other. We conclude that HIV disease in Malaysia shows epidemic behavior, especially in the context of understanding and predicting emerging cases of HIV and AIDS.
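A minimal MCMC sketch in the same spirit, using a random-walk Metropolis-Hastings sampler on synthetic Poisson case counts rather than the paper's actual HIV/AIDS model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic yearly case counts from a Poisson model with true rate 20
# (a stand-in for reported case data; not the paper's model).
data = rng.poisson(20, size=30)

def log_post(lam):
    """Log-posterior for the Poisson rate with a flat prior on lam > 0."""
    return -np.inf if lam <= 0 else np.sum(data * np.log(lam) - lam)

# Random-walk Metropolis-Hastings sampler.
chain, lam = [], 5.0
lp = log_post(lam)
for _ in range(20000):
    prop = lam + rng.normal(0, 1.0)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        lam, lp = prop, lp_prop
    chain.append(lam)

posterior = np.array(chain[5000:])   # discard burn-in
print(round(posterior.mean(), 1))    # near the true rate of 20
```

The same machinery, applied to the epidemic model's transmission parameters, yields the posterior distributions used to forecast reported cases.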

  20. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2016-10-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach to the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe tradeoffs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.

  1. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    NASA Astrophysics Data System (ADS)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
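The Gibbs-sampling idea can be illustrated on a two-parameter toy target, a bivariate normal, where each parameter is redrawn from its full conditional given the other; this stands in for the receiver-function model parameters, which are sampled one at a time in the same fashion:

```python
import numpy as np

rng = np.random.default_rng(4)

# Gibbs sampling from a standard bivariate normal with correlation rho:
# each coordinate is drawn from its conditional given the other.
rho = 0.8
samples = np.empty((20000, 2))
x, y = 0.0, 0.0
for k in range(len(samples)):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # y | x
    samples[k] = x, y

corr = np.corrcoef(samples[1000:].T)[0, 1]
print(round(corr, 2))   # recovers rho ~ 0.8
```

A strong correlation between two parameters in the chain is exactly the kind of trade-off the synthetic tests in the abstract diagnose.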

  2. Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach

    ERIC Educational Resources Information Center

    Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro

    2005-01-01

    Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…

  3. Dynamic response of mechanical systems to impulse process stochastic excitations: Markov approach

    NASA Astrophysics Data System (ADS)

    Iwankiewicz, R.

    2016-05-01

    Methods for determination of the response of mechanical dynamic systems to Poisson and non-Poisson impulse process stochastic excitations are presented. Stochastic differential and integro-differential equations of motion are introduced. For systems driven by a Poisson impulse process, the tools of the theory of non-diffusive Markov processes are used. These are: the generalized Itô differential rule, which allows one to derive the differential equations for response moments, and the forward integro-differential Chapman-Kolmogorov equation, from which the equation governing the probability density of the response is obtained. The relation of Poisson impulse process problems to the theory of diffusive Markov processes is given. For systems driven by a class of non-Poisson (Erlang renewal) impulse processes, an exact conversion of the original non-Markov problem into a Markov one is based on the appended Markov chain corresponding to the introduced auxiliary pure jump stochastic process. The derivation of the set of integro-differential equations for the response probability density and also a moment equations technique are based on the forward integro-differential Chapman-Kolmogorov equation. An illustrative numerical example is also included.

  4. Techniques for modeling the reliability of fault-tolerant systems with the Markov state-space approach

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1995-01-01

    This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.

  5. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    PubMed

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.
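For the linear (deconvolution) case mentioned above, a MAP input estimate under a Gaussian prior reduces to ridge least squares; the impulse response and all numbers below are hypothetical, not the eflornithine model from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear one-compartment system: the observation is the unknown input
# convolved with an exponential impulse response (hypothetical values).
t = np.arange(0, 10, 0.1)
h = np.exp(-0.8 * t) * 0.1            # impulse response x time step
u_true = np.exp(-0.5 * (t - 3) ** 2)  # unknown absorption-rate input

# Convolution as a lower-triangular matrix, plus measurement noise.
A = np.array([[h[i - j] if i >= j else 0.0 for j in range(len(t))]
              for i in range(len(t))])
y = A @ u_true + rng.normal(0, 0.005, len(t))

# MAP estimate under a Gaussian prior = ridge-regularized least squares.
lam = 1e-3
u_hat = np.linalg.solve(A.T @ A + lam * np.eye(len(t)), A.T @ y)

err = np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true)
print(round(err, 3))   # small relative reconstruction error
```

In the nonlinear case treated in the paper, the quadratic objective is replaced by the full posterior, optimized with optimal-control tools or sampled with MCMC.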

  6. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    ERIC Educational Resources Information Center

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  7. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach

    SciTech Connect

    Crommelin, D.T. . E-mail: crommelin@cims.nyu.edu; Vanden-Eijnden, E. . E-mail: eve2@cims.nyu.edu

    2006-09-20

    Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.

  8. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Crommelin, D. T.; Vanden-Eijnden, E.

    2006-09-01

    Construction of stochastic models that describe the effective dynamics of observables of interest is a useful instrument in various fields of application, such as physics, climate science, and finance. We present a new technique for the construction of such models. From the timeseries of an observable, we construct a discrete-in-time Markov chain and calculate the eigenspectrum of its transition probability (or stochastic) matrix. As a next step we aim to find the generator of a continuous-time Markov chain whose eigenspectrum resembles the observed eigenspectrum as closely as possible, using an appropriate norm. The generator is found by solving a minimization problem: the norm is chosen such that the objective function is quadratic and convex, so that the minimization problem can be solved using quadratic programming techniques. The technique is illustrated on various toy problems as well as on datasets stemming from simulations of molecular dynamics and of atmospheric flows.
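A minimal sketch of the setting: estimate the transition matrix of a discrete-in-time chain from a simulated timeseries, then recover a generator. For brevity this uses a matrix logarithm in place of the authors' quadratic-programming fit, which is only valid when logm of the estimated matrix is itself a generator:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)

# True generator of a 3-state continuous-time Markov chain (hypothetical).
G_true = np.array([[-1.0, 0.7, 0.3],
                   [0.4, -0.9, 0.5],
                   [0.2, 0.6, -0.8]])
dt = 0.1
P = expm(G_true * dt)   # exact transition matrix at lag dt

# Simulate a long discrete-in-time series and count transitions.
steps, s = 100000, 0
counts = np.zeros((3, 3))
for _ in range(steps):
    s2 = rng.choice(3, p=P[s])
    counts[s, s2] += 1
    s = s2
P_hat = counts / counts.sum(axis=1, keepdims=True)

# Recover a generator from the estimated transition matrix.
G_hat = logm(P_hat).real / dt
print(np.round(G_hat, 2))   # close to G_true
```

The quadratic-programming formulation in the paper instead fits the generator's eigenspectrum to the observed one, which stays well-posed even when the matrix logarithm fails to be a valid generator.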

  9. Hidden Markov models and other machine learning approaches in computational molecular biology

    SciTech Connect

    Baldi, P.

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed. They are: Hidden Markov models; artificial Neural Networks; Belief Networks; and Stochastic Grammars. When dealing with DNA and protein primary sequences, Hidden Markov models are among the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of Hidden Markov Models, and how to apply them to problems in molecular biology.
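As a small illustration of Hidden Markov Models applied to DNA, here is a Viterbi decoder for a hypothetical two-state (AT-rich vs. GC-rich) model; the transition and emission probabilities are illustrative, not taken from the tutorial:

```python
import numpy as np

# Two-state HMM over DNA: AT-rich vs. GC-rich regions (assumed parameters).
states = ("AT-rich", "GC-rich")
start = np.log([0.5, 0.5])
trans = np.log([[0.9, 0.1], [0.1, 0.9]])
emit = {"AT-rich": np.log([0.35, 0.15, 0.15, 0.35]),   # A, C, G, T
        "GC-rich": np.log([0.15, 0.35, 0.35, 0.15])}
E = np.array([emit[s] for s in states])
sym = {c: i for i, c in enumerate("ACGT")}

def viterbi(seq):
    """Most likely state path by dynamic programming in log space."""
    obs = [sym[c] for c in seq]
    V = start + E[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = V[:, None] + trans          # scores[i, j]: from state i to j
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0) + E[:, o]
    path = [int(V.argmax())]                 # trace back the best path
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return [states[i] for i in reversed(path)]

path = viterbi("ATATATGCGCGCGCATAT")
print(path[0], path[8], path[-1])   # AT-rich GC-rich AT-rich
```

The same dynamic-programming recursion, generalized to insert and delete states, underlies HMM-based sequence alignment and database search.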

  10. Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach

    NASA Technical Reports Server (NTRS)

    Charng, T.

    1982-01-01

    Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. According to a joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure for the expected long-run total discounted costs is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
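A minimal value-iteration sketch of such a maintenance/replacement decision process, with hypothetical costs and degradation probabilities (the report optimizes long-run average cost; a discounted criterion is used here for simplicity):

```python
import numpy as np

# Toy equipment-replacement MDP (hypothetical numbers): condition states
# 0 (new) .. 3 (failed); each period the machine may degrade one state.
# Actions: keep (operating cost grows with wear) or replace (fixed cost).
n, gamma = 4, 0.95
op_cost = np.array([1.0, 2.0, 5.0, 20.0])
replace_cost = 6.0
p_degrade = 0.4   # chance of moving one state worse each period

V = np.zeros(n)
for _ in range(1000):
    V_new = np.empty(n)
    policy = []
    for s in range(n):
        worse = min(s + 1, n - 1)
        keep = op_cost[s] + gamma * ((1 - p_degrade) * V[s] + p_degrade * V[worse])
        rep = replace_cost + op_cost[0] + gamma * ((1 - p_degrade) * V[0] + p_degrade * V[1])
        V_new[s] = min(keep, rep)
        policy.append("keep" if keep <= rep else "replace")
    V = V_new

print(policy)   # keep while nearly new, replace once badly worn
```

The resulting policy is a control limit: keep the equipment while its condition is good and replace it once wear passes a threshold.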

  11. Higher-order phase shift reconstruction approach

    SciTech Connect

    Cong Wenxiang; Wang Ge

    2010-10-15

Purpose: Biological soft tissues encountered in clinical and preclinical imaging mainly consist of atoms of light elements with low atomic numbers, and their elemental composition is nearly uniform with little density variation. Hence, x-ray attenuation contrast is relatively poor and cannot achieve satisfactory sensitivity and specificity. In contrast, x-ray phase contrast provides a new mechanism for soft-tissue imaging. The x-ray phase shift of soft tissues is about a thousand times greater than the x-ray absorption over the diagnostic x-ray energy range, yielding a higher signal-to-noise ratio than the attenuation-contrast counterpart. Thus, phase-contrast imaging is a promising technique to reveal detailed structural variation in soft tissues, offering high contrast resolution between healthy and malignant tissues. Here the authors develop a novel phase retrieval method to reconstruct the phase image on the object plane from intensity measurements. The reconstructed phase image is a projection of the phase shift induced by the object and serves as input for reconstructing the 3D refractive index distribution inside the object using a tomographic reconstruction algorithm. Such x-ray refractive index images can reveal structural features in soft tissues, with excellent resolution for differentiating healthy and malignant tissues. Methods: A novel phase retrieval approach is proposed to reconstruct an x-ray phase image of an object based on the paraxial Fresnel-Kirchhoff diffraction theory. A primary advantage of the authors' approach is its higher-order accuracy over conventional linear approximation models, relaxing the current restriction of slow phase variation. The nonlinear terms in the autocorrelation equation of the Fresnel diffraction pattern are eliminated using intensity images measured at different distances in the Fresnel diffraction region, simplifying the phase reconstruction to a linear inverse problem. Numerical experiments are performed.

  12. Scene estimation from speckled synthetic aperture radar imagery: Markov-random-field approach.

    PubMed

    Lankoande, Ousseini; Hayat, Majeed M; Santhanam, Balu

    2006-06-01

A novel Markov-random-field model for speckled synthetic aperture radar (SAR) imagery is derived according to the physical, spatial statistical properties of speckle noise in coherent imaging. A convex Gibbs energy function for speckled images is derived and utilized to perform speckle-compensating image estimation. The image estimate is formed by computing the conditional expectation of the noisy image at each pixel given its neighbors, which is in turn expressed in terms of the derived Gibbs energy function. The efficacy of the proposed technique, in terms of reducing speckle noise while preserving spatial resolution, is studied using both real and simulated SAR imagery. On a number of commonly used metrics, the performance of the proposed technique is shown to surpass that of existing speckle-noise-filtering methods such as the Gamma MAP, the modified Lee, and the enhanced Frost.

  13. A Markov random field approach for modeling spatio-temporal evolution of microstructures

    NASA Astrophysics Data System (ADS)

    Acar, Pinar; Sundararaghavan, Veera

    2016-10-01

    The following problem is addressed: ‘Can one synthesize microstructure evolution over a large area given experimental movies measured over smaller regions?’ Our input is a movie of microstructure evolution over a small sample window. A Markov random field (MRF) algorithm is developed that uses this data to estimate the evolution of microstructure over a larger region. Unlike the standard microstructure reconstruction problem based on stationary images, the present algorithm is also able to reconstruct time-evolving phenomena such as grain growth. Such an algorithm would decrease the cost of full-scale microstructure measurements by coupling mathematical estimation with targeted small-scale spatiotemporal measurements. The grain size, shape and orientation distribution statistics of synthesized polycrystalline microstructures at different times are compared with the original movie to verify the method.

  14. The Influence of Hydroxylation on Maintaining CpG Methylation Patterns: A Hidden Markov Model Approach

    PubMed Central

    Ficz, Gabriella; Wolf, Verena; Walter, Jörn

    2016-01-01

    DNA methylation and demethylation are opposing processes that when in balance create stable patterns of epigenetic memory. The control of DNA methylation pattern formation by replication dependent and independent demethylation processes has been suggested to be influenced by Tet mediated oxidation of 5mC. Several alternative mechanisms have been proposed suggesting that 5hmC influences either replication dependent maintenance of DNA methylation or replication independent processes of active demethylation. Using high resolution hairpin oxidative bisulfite sequencing data, we precisely determine the amount of 5mC and 5hmC and model the contribution of 5hmC to processes of demethylation in mouse ESCs. We develop an extended hidden Markov model capable of accurately describing the regional contribution of 5hmC to demethylation dynamics. Our analysis shows that 5hmC has a strong impact on replication dependent demethylation, mainly by impairing methylation maintenance. PMID:27224554

  15. Continuous time Markov chain approaches for analyzing transtheoretical models of health behavioral change: A case study and comparison of model estimations.

    PubMed

    Ma, Junsheng; Chan, Wenyaw; Tilley, Barbara C

    2016-04-04

Continuous time Markov chain models are frequently employed in medical research to study disease progression but are rarely applied to the transtheoretical model, a psychosocial model widely used in studies of health-related outcomes. The transtheoretical model often includes more than three states and conceptually allows for all possible instantaneous transitions (referred to as a general continuous time Markov chain). This complicates the likelihood function because it involves calculating a matrix exponential that may not be simplified for general continuous time Markov chain models. We undertook a Bayesian approach wherein we numerically evaluated the likelihood using ordinary differential equation solvers available from the GNU Scientific Library. We compared our Bayesian approach with the maximum likelihood method implemented with the R package msm. Our simulation study showed that the Bayesian approach provided more accurate point and interval estimates than the maximum likelihood method, especially in complex continuous time Markov chain models with five states. When applied to data from a four-state transtheoretical model collected from a nutrition intervention study in the Next Step trial, we observed results consistent with the results of the simulation study. Specifically, the two approaches provided comparable point estimates and standard errors for most parameters, but the maximum likelihood method offered substantially smaller standard errors for some parameters. Comparable estimates of the standard errors are obtainable from the package msm, which works only when the model estimation algorithm converges.
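The matrix exponential at the heart of the likelihood described above can be sketched as follows (hypothetical three-state generator matrix; the record's own implementation evaluates the likelihood with ODE solvers rather than a dense matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state CTMC generator Q: off-diagonal entries are transition
# intensities, each row sums to zero. Transition probabilities over an
# interval t are P(t) = exp(Qt).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])

def transition_matrix(t):
    """Return P(t) = exp(Qt) for elapsed time t."""
    return expm(Q * t)

P = transition_matrix(2.0)
print(P)  # each row is a probability distribution over the 3 states
```

The rows of `P(t)` always sum to one and `P(0)` is the identity, which are the standard sanity checks for a generator matrix.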

  16. Link between unemployment and crime in the US: a Markov-Switching approach.

    PubMed

    Fallahi, Firouz; Rodríguez, Gabriel

    2014-05-01

This study has two goals. The first is to use Markov-Switching models to identify and analyze the cycles in the unemployment rate and in four different types of property-related criminal activity in the US. The second is to apply the nonparametric concordance index of Harding and Pagan (2006) to determine the correlation between the cycles of the unemployment rate and of property crimes. Findings show that there is a positive but insignificant relationship between the unemployment rate and burglary, larceny, and robbery. However, the unemployment rate has a significant and negative (i.e., counter-cyclical) relationship with motor-vehicle theft; therefore, more motor-vehicle thefts occur during economic expansions than during contractions. Next, we divide the sample into three subsamples to examine the consistency of the findings. The results show that the co-movements between the unemployment rate and property crimes during recession periods are much weaker than during normal periods of the US economy.

  17. Multi-Resolution Markov-Chain-Monte-Carlo Approach for System Identification with an Application to Finite-Element Models

    SciTech Connect

    Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2005-02-07

Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on the one hand and from observed data on the other is challenging. An example of such an inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach yields not only a single best-guess solution but a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The proposed MCMC procedure explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam, with MCMC exploration carried out at three different resolutions.

  18. Biomedical system based on the Discrete Hidden Markov Model using the Rocchio-Genetic approach for the classification of internal carotid artery Doppler signals.

    PubMed

    Uğuz, Harun; Güraksın, Gür Emre; Ergün, Uçman; Saraçoğlu, Rıdvan

    2011-07-01

When the maximum likelihood (ML) approach is used to calculate the Discrete Hidden Markov Model (DHMM) parameters, the DHMM parameters of each class are calculated using only the training samples of that class (positive training samples). The training samples not belonging to that class (negative training samples) are not used in the calculation of the DHMM model parameters. To remedy this deficiency, a Rocchio-algorithm-based approach is suggested that involves the training samples of all classes in the calculation process. To determine the most appropriate parameter values for adjusting the relative effect of the positive and negative training samples, a Genetic algorithm is used as an optimization technique. The proposed method is used to classify the internal carotid artery Doppler signals recorded from 136 patients as well as from 55 healthy people. Our proposed method reached 97.38% classification accuracy with the fivefold cross-validation (CV) technique. The classification results showed that the proposed method was effective for the classification of internal carotid artery Doppler signals.

  19. Permutation approach to finite-alphabet stationary stochastic processes based on the duality between values and orderings

    NASA Astrophysics Data System (ADS)

    Haruna, T.; Nakajima, K.

    2013-06-01

The duality between values and orderings is a powerful tool to discuss relationships between various information-theoretic measures and their permutation analogues for discrete-time finite-alphabet stationary stochastic processes (SSPs). Applying it to output processes of hidden Markov models with ergodic internal processes, we showed in our previous work that the excess entropy and the transfer entropy rate coincide with their permutation analogues. In this paper, we discuss two permutation characterizations of these two measures for general ergodic SSPs that do not necessarily have the Markov property assumed in our previous work. In the first approach, we show that the excess entropy and the transfer entropy rate of an ergodic SSP can be obtained as the limits of their permutation analogues for the N-th order approximations by hidden Markov models. In the second approach, we employ the modified permutation partition of the set of words, which considers equalities of symbols in addition to permutations of words. We show that the excess entropy and the transfer entropy rate of an ergodic SSP are equal to their modified permutation analogues, respectively.

  20. Markov Chains and Chemical Processes

    ERIC Educational Resources Information Center

    Miller, P. J.

    1972-01-01

    Views as important the relating of abstract ideas of modern mathematics now being taught in the schools to situations encountered in the sciences. Describes use of matrices and Markov chains to study first-order processes. (Author/DF)
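The matrix treatment of first-order processes mentioned in this record can be sketched with a reversible reaction A &lt;-&gt; B (hypothetical per-step transition probabilities; illustration only, not from the article):

```python
import numpy as np

# One time step of a first-order process A <-> B as a Markov chain:
# multiply the composition vector by a transition matrix.
T = np.array([[0.9, 0.1],   # A -> A, A -> B per step
              [0.2, 0.8]])  # B -> A, B -> B per step

x = np.array([1.0, 0.0])    # start as pure A
for _ in range(200):
    x = x @ T               # repeated steps approach equilibrium

print(x)  # approaches the stationary composition [2/3, 1/3]
```

The stationary composition satisfies x = xT; here detailed balance gives x_A * 0.1 = x_B * 0.2, i.e. twice as much A as B at equilibrium.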

  1. Serial Order: A Parallel Distributed Processing Approach.

    ERIC Educational Resources Information Center

    Jordan, Michael I.

    Human behavior shows a variety of serially ordered action sequences. This paper presents a theory of serial order which describes how sequences of actions might be learned and performed. In this theory, parallel interactions across time (coarticulation) and parallel interactions across space (dual-task interference) are viewed as two aspects of a…

  2. Performance of Markov chain-Monte Carlo approaches for mapping genes in oligogenic models with an unknown number of loci.

    PubMed

    Lee, J K; Thomas, D C

    2000-11-01

Markov chain-Monte Carlo (MCMC) techniques for multipoint mapping of quantitative trait loci have been developed on nuclear-family and extended-pedigree data. These methods are based on repeated sampling-peeling and gene dropping of genotype vectors and random sampling of each of the model parameters from their full conditional distributions, given phenotypes, markers, and other model parameters. We further refine such approaches by improving the efficiency of the marker haplotype-updating algorithm and by adopting a new proposal for adding loci. Incorporating these refinements, we have performed an extensive simulation study on simulated nuclear-family data, varying the number of trait loci, family size, displacement, and other segregation parameters. Our simulation studies show that our MCMC algorithm identifies the locations of the true trait loci and estimates their segregation parameters well, provided that the total number of sibship pairs in the pedigree data is reasonably large, the heritability of each individual trait locus is not too low, and the loci are not too close together. Our MCMC algorithm was shown to be significantly more efficient than LOKI (Heath 1997) in our simulation study using nuclear-family data.

  3. A hidden Markov model approach to analyze longitudinal ternary outcomes when some observed states are possibly misclassified.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Luo, Sheng; Yeh, Hung-Wen; Doody, Rachelle

    2016-04-30

Understanding the dynamic disease process is vital in early detection, diagnosis, and measuring progression. Continuous-time Markov chain (CTMC) methods have been used to estimate state-change intensities, but challenges arise when stages are potentially misclassified. We present an analytical likelihood approach where the hidden state is modeled as a three-state CTMC, allowing for some observed states to be possibly misclassified. Covariate effects on the hidden process and misclassification probabilities of the hidden state are estimated without information from a 'gold standard' for comparison. Parameter estimates are obtained using a modified expectation-maximization (EM) algorithm, and the identifiability of CTMC estimation is addressed. Simulation studies and an application studying Alzheimer's disease caregiver stress levels are presented. The method was highly sensitive to detecting true misclassification and did not falsely identify error in the absence of misclassification. In conclusion, we have developed a robust longitudinal method for analyzing categorical outcome data when the classification of disease severity stage is uncertain and the purpose is to study the process's transition behavior without a gold standard.

  4. Comparison of reversible-jump Markov-chain-Monte-Carlo learning approach with other methods for missing enzyme identification.

    PubMed

    Geng, Bo; Zhou, Xiaobo; Zhu, Jinmin; Hung, Y S; Wong, Stephen T C

    2008-04-01

Computational identification of missing enzymes plays a significant role in the accurate and complete reconstruction of metabolic networks for both newly sequenced and well-studied organisms. For a metabolic reaction, given a set of candidate enzymes identified according to certain biological evidence, a powerful mathematical model is required to predict the actual enzyme(s) catalyzing the reaction. In this study, several plausible predictive methods are considered for the classification problem in missing enzyme identification, and comparisons are performed with the aim of identifying a method with better performance than the Bayesian model used in previous work. In particular, a regression model consisting of a linear term and a nonlinear term is proposed for the problem, in which the reversible-jump Markov-chain-Monte-Carlo (MCMC) learning technique of Andrieu, de Freitas, and Doucet [Robust full Bayesian learning for radial basis networks. Neural Computation 2001;13:2359-407] is adopted to estimate the model order and the parameters. We evaluated the models using known reactions in the bacteria Escherichia coli, Mycobacterium tuberculosis, Vibrio cholerae, and Caulobacter crescentus, as well as one eukaryotic organism, Saccharomyces cerevisiae. Although support vector regression also exhibits comparable performance in this application, it was demonstrated that the proposed model achieves favorable prediction performance, particularly sensitivity, compared with the Bayesian method.

  5. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  6. ADM-CLE Approach for Detecting Slow Variables in Continuous Time Markov Chains and Dynamic Data

    DTIC Science & Technology

    2015-04-01

A method for detecting intrinsic slow variables in high-dimensional stochastic chemical reaction networks is developed and analyzed. It combines anisotropic diffusion maps (ADM) with approximations based on the chemical Langevin equation (CLE). The resulting approach, called ADM-CLE, has the potential of being more efficient than the ADM method for a large class of chemical reaction systems, because it replaces the computationally most expensive step of

  7. Simulating oligomerization at experimental concentrations and long timescales: A Markov state model approach

    NASA Astrophysics Data System (ADS)

    Kelley, Nicholas W.; Vishal, V.; Krafft, Grant A.; Pande, Vijay S.

    2008-12-01

Here, we present a novel computational approach for describing the formation of oligomeric assemblies at experimental concentrations and timescales. We propose an extension to the Markovian state model approach in which low-concentration oligomeric states are included analytically. This allows simulation on long timescales (on the order of seconds) and at arbitrarily low concentrations (e.g., the micromolar concentrations found in experiments), while still using an all-atom model for protein and solvent. As a proof of concept, we apply this methodology to the oligomerization of an Aβ peptide fragment (Aβ21-43). Aβ oligomers are now widely recognized as the primary neurotoxic structures leading to Alzheimer's disease. Our computational methods predict that Aβ trimers form at micromolar concentrations in 10 ms, while tetramers form 1000 times more slowly. Moreover, the simulation results predict specific intermonomer contacts present in the oligomer ensemble as well as putative structures for small-molecular-weight oligomers. Based on our simulations and statistical models, we propose a novel mutation to stabilize the trimeric form of Aβ in an experimentally verifiable manner.

  8. Markov blanket-based approach for learning multi-dimensional Bayesian network classifiers: an application to predict the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39).

    PubMed

Borchani, Hanen; Bielza, Concha; Martínez-Martín, Pablo; Larrañaga, Pedro

    2012-12-01

    Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the prediction problem of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, Yeast data set, as well as on a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables.

  9. Abstraction Augmented Markov Models.

    PubMed

    Caragea, Cornelia; Silvescu, Adrian; Caragea, Doina; Honavar, Vasant

    2010-12-13

High-accuracy sequence classification often requires the use of higher-order Markov models (MMs). However, the number of MM parameters increases exponentially with the range of direct dependencies between sequence elements, thereby increasing the risk of overfitting when the data set is limited in size. We present abstraction augmented Markov models (AAMMs) that effectively reduce the number of numeric parameters of kth-order MMs by successively grouping strings of length k (i.e., k-grams) into abstraction hierarchies. We evaluate AAMMs on three protein subcellular localization prediction tasks. The results of our experiments show that abstraction makes it possible to construct predictive models that use a significantly smaller number of features (by one to three orders of magnitude) compared to MMs. AAMMs are competitive with and, in some cases, significantly outperform MMs. Moreover, the results show that AAMMs often perform significantly better than variable-order Markov models, such as decomposed context tree weighting, prediction by partial match, and probabilistic suffix trees.
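The exponential parameter growth that motivates AAMMs can be made concrete with a plain kth-order Markov model fit by k-gram counting (an illustration with a hypothetical sequence, not the AAMM code itself): the number of contexts is |alphabet|^k, so each extra order multiplies the parameter count by the alphabet size.

```python
from collections import defaultdict

def fit_markov(seq, k):
    """Estimate a kth-order Markov model: P(next symbol | previous k symbols),
    by counting (k+1)-grams in the training sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - k):
        counts[seq[i:i + k]][seq[i + k]] += 1
    return {ctx: {s: n / sum(nxt.values()) for s, n in nxt.items()}
            for ctx, nxt in counts.items()}

model = fit_markov("ABABABABAA", k=2)
print(model["AB"])  # conditional distribution of the next symbol after "AB"
```

For a 20-letter protein alphabet, a 3rd-order model already has 20**3 = 8000 contexts, which is the growth AAMMs tame by grouping k-grams into abstraction hierarchies.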

  10. A Markov Chain Monte Carlo Inversion Approach For Inverting InSAR Data With Application To Subsurface CO2 Injection

    NASA Astrophysics Data System (ADS)

    Ramirez, A. L.; Foxall, W.

    2011-12-01

Surface displacements caused by reservoir pressure perturbations resulting from CO2 injection can often be measured by geodetic methods such as InSAR, tilt, and GPS. We have developed a Markov Chain Monte Carlo (MCMC) approach to invert surface displacements measured by InSAR to map the pressure distribution associated with CO2 injection at the In Salah Krechba field, Algeria. The MCMC inversion entails sampling the solution space by proposing a series of trial 3D pressure-plume models. In the case of In Salah, the range of allowable models is constrained by prior information provided by well and geophysical data for the reservoir and possible fluid pathways in the overburden, and by injection pressures and volumes. Each trial pressure distribution is run through a (mathematical) forward model to calculate a set of synthetic surface deformation data. The likelihood that a particular proposal represents the true source is determined from the fit of the calculated data to the InSAR measurements, and proposals having higher likelihoods are passed to the posterior distribution. This procedure is repeated over typically ~10^4-10^5 trials until the posterior distribution converges to a stable solution. The solution to each stochastic inversion takes the form of a Bayesian posterior probability density function (pdf) over the range of alternative models that are consistent with the measured data and prior information. Therefore, the solution provides not only the highest-likelihood model but also a realistic estimate of the solution uncertainty. Our In Salah work considered three flow-model alternatives: 1) the first model assumed that the CO2 saturation and fluid pressure changes were confined to the reservoir; 2) the second model allowed the perturbations to occur also in a damage zone inferred in the lower caprock from 3D seismic surveys; and 3) the third model allowed fluid pressure changes anywhere within the reservoir and overburden. Alternative (2) yielded optimal
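The propose/score/accept loop of the MCMC inversion described above can be sketched with a toy one-parameter Metropolis sampler (Gaussian likelihood, synthetic data, flat prior; purely illustrative, not the In Salah forward model):

```python
import random, math

# Toy "forward model": the data are noisy observations of a single scalar
# parameter theta (stand-in for a pressure value). Gaussian noise, sigma = 1.
def log_likelihood(theta, data, sigma=1.0):
    return -sum((d - theta) ** 2 for d in data) / (2 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    """Metropolis sampler: propose, score against data, accept or reject."""
    random.seed(seed)
    theta = 0.0
    samples = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)          # random-walk proposal
        log_ratio = log_likelihood(prop, data) - log_likelihood(theta, data)
        if math.log(random.random()) < log_ratio:       # Metropolis acceptance
            theta = prop
        samples.append(theta)
    return samples

data = [2.1, 1.9, 2.0, 2.2, 1.8]
samples = metropolis(data)
posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
print(posterior_mean)
```

The retained samples approximate the posterior pdf, so their spread gives the solution-uncertainty estimate the record emphasizes, not just a single best fit.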

  11. Improved spike-sorting by modeling firing statistics and burst-dependent spike amplitude attenuation: a Markov chain Monte Carlo approach.

    PubMed

    Pouzat, Christophe; Delescluse, Matthieu; Viot, Pascal; Diebolt, Jean

    2004-06-01

Spike-sorting techniques attempt to classify a series of noisy electrical waveforms according to the identity of the neurons that generated them. Existing techniques perform this classification ignoring several properties of actual neurons that can ultimately improve classification performance. In this study, we propose a more realistic spike train generation model. It incorporates both a description of "nontrivial" (i.e., non-Poisson) neuronal discharge statistics and a description of spike waveform dynamics (e.g., the event amplitude decays for short interspike intervals). We show that this spike train generation model is analogous to a one-dimensional Potts spin-glass model. We can therefore tailor to our particular case the computational methods that have been developed in fields where Potts models are extensively used, including statistical physics and image restoration. These methods are based on the construction of a Markov chain in the space of model parameters and spike train configurations, where a configuration is defined by specifying a neuron of origin for each spike. This Markov chain is built such that its unique stationary density is the posterior density of model parameters and configurations given the observed data. A Monte Carlo simulation of the Markov chain is then used to estimate the posterior density. We illustrate how to build the transition matrix of the Markov chain with a simple but realistic model for data generation. We use simulated data to illustrate the performance of the method and to show that this approach can easily cope with neurons firing doublets of spikes and/or generating spikes with highly dynamic waveforms. The method cannot automatically find the "correct" number of neurons in the data; user input is required for this important problem, and we illustrate how this can be done. We finally discuss further developments of the method.

  12. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
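The transition-matrix estimation described in this record can be sketched from a synthetic hypnogram (two states only for brevity, 0 = wake and 1 = sleep; the actual study uses full sleep-stage sequences):

```python
import numpy as np

def transition_matrix(stages, n_states):
    """Estimate a Markov transition matrix from a sequence of stage labels
    by counting consecutive-pair transitions and normalizing each row."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(stages[:-1], stages[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # rows must be nonempty

hypnogram = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0]  # synthetic stage sequence
M = transition_matrix(hypnogram, 2)
print(M)  # row i = distribution over the next stage given current stage i
```

Under a single Markov process like this, the dwell time in each state is geometrically (in continuous time, exponentially) distributed, which is the point of contrast with the scale-free wake-duration reports the record mentions.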

  13. Perturbative approach for non local and high order derivative theories

    SciTech Connect

    Avilez, Ana A.; Vergara, J. David

    2009-04-20

We propose a reduction method for the classical phase space of higher-order derivative theories in both singular and non-singular cases. The mechanism is to reduce the higher-order phase space by imposing supplementary constraints, such that the evolution takes place in a submanifold where the higher-order degrees of freedom are absent. The reduced theory is ordinary, is cured of the usual diseases of higher-order theories, and approximates the low-energy dynamics well.

  14. Markov Chain Monte Carlo approaches to analysis of genetic and environmental components of human developmental change and G x E interaction.

    PubMed

    Eaves, Lindon; Erkanli, Alaattin

    2003-05-01

The linear structural model has provided the statistical backbone of the analysis of twin and family data for 25 years. A new generation of questions cannot easily be forced into the framework of current approaches to modeling and data analysis because they involve nonlinear processes. Maximizing the likelihood with respect to the parameters of such nonlinear models is often cumbersome and does not yield easily to current numerical methods. The application of Markov Chain Monte Carlo (MCMC) methods to modeling the nonlinear effects of genes and environment in MZ and DZ twins is outlined. Nonlinear developmental change and genotype x environment interaction in the presence of genotype-environment correlation are explored in simulated twin data. The MCMC method recovers the simulated parameters and provides estimates of error and latent (missing) trait values. Possible limitations of MCMC methods are discussed. Further studies are necessary to explore the value of an approach that could extend the horizons of research in developmental genetic epidemiology.

  15. Markov stochasticity coordinates

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-01-01

Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method for the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. The method is also tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  16. An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes

    ERIC Educational Resources Information Center

Kaplan, David

    2008-01-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…

  17. Improving the Robustness of Local Network Alignment: Design and Extensive Assessment of a Markov Clustering-Based Approach.

    PubMed

    Mina, Marco; Guzzi, Pietro Hiram

    2014-01-01

The analysis of protein behavior at the network level has been applied to elucidate the mechanisms of protein interaction that are similar in different species. Published network alignment algorithms proved able to recapitulate known conserved modules and protein complexes, and to infer new conserved interactions confirmed by wet lab experiments. In the meantime, however, a plethora of continuously evolving protein-protein interaction (PPI) data sets have been developed, each featuring different levels of completeness and reliability. Moreover, existing papers did not deeply investigate the robustness of alignment algorithms: for instance, the performance of some algorithms varies significantly when changing the data set used in their assessment. In this work, we design an extensive assessment of current algorithms, discussing the robustness of the results on the basis of the input networks. We also present AlignMCL, a local network alignment algorithm based on an improved model of the alignment graph and Markov Clustering. AlignMCL performs better than other state-of-the-art local alignment algorithms over different updated data sets. In addition, AlignMCL features high levels of robustness, producing similar results regardless of the selected data set.

  18. The Fate of Priority Areas for Conservation in Protected Areas: A Fine-Scale Markov Chain Approach

    NASA Astrophysics Data System (ADS)

    Tattoni, Clara; Ciolli, Marco; Ferretti, Fabrizio

    2011-02-01

Park managers in alpine areas must deal with the increase in forest coverage that has been observed in most European mountain areas, where traditional farming and agricultural practices have been abandoned. The aim of this study is to develop a fine-scale model of a broad area to support the managers of Paneveggio Nature Park (Italy) in conservation planning by focusing on the fate of priority areas for conservation in the next 50-100 years. GIS analyses were performed to assess the afforestation dynamic over time using two historical maps (from 1859 and 1936) and a series of aerial photographs and ortho-photos (taken from 1954 to 2006) covering a time span of 150 years. The results show an increase in the forest surface area of about 35%. Additionally, the forest became progressively more compact and less fragmented, with a consequent loss of ecotones and open habitats that are important for biodiversity. Markov chain-cellular automata models were used to project future changes, evaluating the effects on a habitat scale. Simulations show that some habitats defined as priority by the EU Habitat Directive will be compromised by the forest expansion by 2050 and will suffer a substantial loss by 2100. This protocol, applied to other areas, can be used for designing long-term management measures with a focus on habitats whose conservation status is at risk.
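The projection step of a Markov chain land-cover model like the one above can be sketched in a few lines. The classes, transition probabilities, and initial composition below are invented for illustration; the study estimates them from historical maps and aerial photographs.

```python
import numpy as np

# Hypothetical land-cover classes and a row-stochastic transition matrix
# for one time step: entry [i, j] is P(class j at t+1 | class i at t).
classes = ["forest", "open habitat", "other"]
P = np.array([
    [0.95, 0.03, 0.02],   # forest mostly persists
    [0.20, 0.75, 0.05],   # open habitat is colonized by forest
    [0.05, 0.05, 0.90],
])

# Current landscape composition as area fractions.
x0 = np.array([0.60, 0.30, 0.10])

# Project the composition k steps ahead: x_k = x0 @ P^k.
x5 = x0 @ np.linalg.matrix_power(P, 5)
print(dict(zip(classes, x5.round(3))))
```

With these invented probabilities, the forest fraction grows at the expense of open habitat, mirroring the afforestation trend the paper projects (a CA-Markov model additionally makes the transitions spatially explicit via cellular automata rules).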

  19. New Markov Model Approaches to Deciphering Microbial Genome Function and Evolution: Comparative Genomics of Laterally Transferred Genes

    SciTech Connect

    Borodovsky, M.

    2013-04-11

    Algorithmic methods for gene prediction have been developed and successfully applied to many different prokaryotic genome sequences. As the set of genes in a particular genome is not homogeneous with respect to DNA sequence composition features, the GeneMark.hmm program utilizes two Markov models representing distinct classes of protein coding genes denoted "typical" and "atypical". Atypical genes are those whose DNA features deviate significantly from those classified as typical and they represent approximately 10% of any given genome. In addition to the inherent interest of more accurately predicting genes, the atypical status of these genes may also reflect their separate evolutionary ancestry from other genes in that genome. We hypothesize that atypical genes are largely comprised of those genes that have been relatively recently acquired through lateral gene transfer (LGT). If so, what fraction of atypical genes are such bona fide LGTs? We have made atypical gene predictions for all fully completed prokaryotic genomes; we have been able to compare these results to other "surrogate" methods of LGT prediction.

  20. A model-based approach to gene clustering with missing observation reconstruction in a Markov random field framework.

    PubMed

    Blanchet, Juliette; Vignes, Matthieu

    2009-03-01

    The different measurement techniques that interrogate biological systems provide means for monitoring the behavior of virtually all cell components at different scales and from complementary angles. However, data generated in these experiments are difficult to interpret. A first difficulty arises from high-dimensionality and inherent noise of such data. Organizing them into meaningful groups is then highly desirable to improve our knowledge of biological mechanisms. A more accurate picture can be obtained when accounting for dependencies between components (e.g., genes) under study. A second difficulty arises from the fact that biological experiments often produce missing values. When it is not ignored, the latter issue has been solved by imputing the expression matrix prior to applying traditional analysis methods. Although helpful, this practice can lead to unsound results. We propose in this paper a statistical methodology that integrates individual dependencies in a missing data framework. More explicitly, we present a clustering algorithm dealing with incomplete data in a Hidden Markov Random Field context. This tackles the missing value issue in a probabilistic framework and still allows us to reconstruct missing observations a posteriori without imposing any pre-processing of the data. Experiments on synthetic data validate the gain in using our method, and analysis of real biological data shows its potential to extract biological knowledge.

  1. BCFtools/RoH: a hidden Markov model approach for detecting autozygosity from next-generation sequencing data

    PubMed Central

    Narasimhan, Vagheesh; Danecek, Petr; Scally, Aylwyn; Xue, Yali; Tyler-Smith, Chris; Durbin, Richard

    2016-01-01

Summary: Runs of homozygosity (RoHs) are genomic stretches of a diploid genome that show identical alleles on both chromosomes. Longer RoHs are unlikely to have arisen by chance but are likely to denote autozygosity, whereby both copies of the genome descend from the same recent ancestor. Early tools to detect RoH used genotype array data, but substantially more information is available from sequencing data. Here, we present and evaluate BCFtools/RoH, an extension to the BCFtools software package that detects regions of autozygosity in sequencing data, in particular exome data, using a hidden Markov model. By applying it to simulated data and real data from the 1000 Genomes Project, we estimate its accuracy and show that it has higher sensitivity and specificity than existing methods under a range of sequencing error rates and levels of autozygosity. Availability and implementation: BCFtools/RoH and its associated binary/source files are freely available from https://github.com/samtools/BCFtools. Contact: vn2@sanger.ac.uk or pd3@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26826718
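The core of an RoH caller of this kind is a two-state hidden Markov model decoded over the sites of a chromosome. Below is a minimal Viterbi sketch with invented transition and emission probabilities; BCFtools/RoH itself estimates its parameters from the data and works from genotype likelihoods rather than the hard hom/het calls used here.

```python
import numpy as np

# Toy two-state HMM: state 0 = autozygous (inside an RoH),
# state 1 = non-autozygous. All probabilities are invented.
log = np.log
trans = log(np.array([[0.9, 0.1],      # state persistence
                      [0.1, 0.9]]))
emit = log(np.array([[0.99, 0.01],     # P(hom), P(het) inside an RoH
                     [0.70, 0.30]]))   # outside an RoH
start = log(np.array([0.5, 0.5]))

def viterbi(obs):
    """Most likely state path; obs[t] is 0 (hom site) or 1 (het site)."""
    n = len(obs)
    dp = np.zeros((n, 2))
    back = np.zeros((n, 2), dtype=int)
    dp[0] = start + emit[:, obs[0]]
    for t in range(1, n):
        for j in range(2):
            scores = dp[t - 1] + trans[:, j]
            back[t, j] = scores.argmax()
            dp[t, j] = scores.max() + emit[j, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# A long stretch of homozygous calls flanked by heterozygous sites
# should be decoded as an RoH (state 0).
obs = [1, 1] + [0] * 20 + [1, 1]
print(viterbi(obs))
```

The sticky transition matrix is what makes long homozygous stretches, rather than isolated homozygous sites, get labelled as autozygous.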

  2. Indexed semi-Markov process for wind speed modeling.

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

Markov chain with different number of states, and Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. In a previous work we proposed different semi-Markov models, showing their ability to reproduce the autocorrelation structures of wind speed data. In that paper we also showed that the autocorrelation is higher with respect to the Markov model. Unfortunately this autocorrelation was still too small compared to the empirical one. In order to overcome the problem of low autocorrelation, in this paper we propose an indexed semi-Markov model. More precisely, we assume that wind speed is described by a discrete time homogeneous semi-Markov process. We introduce a memory index which takes into account the periods of different wind activities. With this model the statistical characteristics of wind speed are faithfully reproduced. The wind is a very unstable phenomenon characterized by a sequence of lulls and sustained speeds, and a good wind generator must be able to reproduce such sequences. To check the validity of the predictive semi-Markovian model, the persistence of synthetic winds was calculated and then averaged. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and the time lagged autocorrelation is used to compare statistical properties of the proposed models with those of real data and also with a time series generated through a simple Markov chain. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating
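The baseline the paper improves on, a first-order Markov chain wind-speed generator of the kind cited in reference [1], can be sketched as follows. The speed classes and transition matrix are illustrative, not fitted to any real anemometer record.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy first-order Markov chain over three wind-speed classes
# (calm / moderate / strong); probabilities are invented.
states = np.array([1.0, 5.0, 10.0])   # representative speeds in m/s
P = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.80, 0.10],
    [0.05, 0.15, 0.80],
])

def simulate(n, s0=0):
    """Generate a synthetic wind-speed series of length n."""
    path = [s0]
    for _ in range(n - 1):
        path.append(rng.choice(3, p=P[path[-1]]))
    return states[np.array(path)]

series = simulate(1000)
# Lag-1 autocorrelation of the synthetic series: the quantity the paper
# finds too low in plain Markov models compared with empirical data.
r1 = np.corrcoef(series[:-1], series[1:])[0, 1]
print(round(r1, 2))
```

A semi-Markov generalization would additionally draw an explicit sojourn time in each state from a fitted distribution, and the paper's indexed variant further conditions on a memory index summarizing recent wind activity.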

  3. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    SciTech Connect

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  4. Modelling and analyzing the watershed dynamics using Cellular Automata (CA)-Markov model - A geo-information based approach

    NASA Astrophysics Data System (ADS)

    Behera, Mukunda D.; Borate, Santosh N.; Panda, Sudhindra N.; Behera, Priti R.; Roy, Partha S.

    2012-08-01

Improper practices of land use and land cover (LULC) including deforestation, expansion of agriculture and infrastructure development are deteriorating watershed conditions. Here, we have utilized remote sensing and GIS tools to study LULC dynamics using a Cellular Automata (CA)-Markov model and predicted the future LULC scenario, in terms of magnitude and direction, based on the past trend in a hydrological unit, the Choudwar watershed, India. By analyzing the LULC pattern during 1972, 1990, 1999 and 2005 using satellite-derived maps, we observed that the biophysical and socio-economic drivers including residential/industrial development, road-rail and settlement proximity have influenced the spatial pattern of the watershed LULC, leading to an accretive linear growth of agricultural and settlement areas. The annual rates of increase from 1972 to 2004 in agricultural land and settlement were observed to be 181.96 and 9.89 ha/year, respectively, while the decreases in forest, wetland and marshy land were 91.22, 27.56 and 39.52 ha/year, respectively. The transition probability and transition area matrices were derived using inputs of (i) residential/industrial development and (ii) proximity to the transportation network as the major causes. The predicted LULC scenario for the year 2014, obtained with reasonably good accuracy, would provide useful inputs to LULC planners for effective management of the watershed. The study is a maiden attempt revealing that agricultural expansion is the main driving force for the loss of forest, wetland and marshy land in the Choudwar watershed and has the potential to continue in future. The forest in lower slopes has been converted to agricultural land and the conversion may soon take a toll on forests occurring on higher slopes. Our study utilizes three time period changes to better account for the trend and the modelling exercise; it thereby advocates for better agricultural practices with additional energy subsidy to arrest further forest loss and LULC alterations.

  5. Structure of ordered coaxial and scroll nanotubes: general approach.

    PubMed

    Khalitov, Zufar; Khadiev, Azat; Valeeva, Diana; Pashin, Dmitry

    2016-01-01

    The explicit formulas for atomic coordinates of multiwalled coaxial and cylindrical scroll nanotubes with ordered structure are developed on the basis of a common oblique lattice. According to this approach, a nanotube is formed by transfer of its bulk analogue structure onto a cylindrical surface (with a circular or spiral cross section) and the chirality indexes of the tube are expressed in the number of unit cells. The monoclinic polytypic modifications of ordered coaxial and scroll nanotubes are also discussed and geometrical conditions of their formation are analysed. It is shown that tube radii of ordered multiwalled coaxial nanotubes are multiples of the layer thickness, and the initial turn radius of the orthogonal scroll nanotube is a multiple of the same parameter or its half.

  6. Anatomy Ontology Matching Using Markov Logic Networks

    PubMed Central

    Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming

    2016-01-01

The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is one kind of solution for finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498

  7. Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    NASA Technical Reports Server (NTRS)

    Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.

    2013-01-01

Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow but that this derives from the degree
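The calibration idea, constraining a degree-day parameter by Markov Chain Monte Carlo, can be illustrated on a toy melt model. Everything below is invented for the sketch: the real study calibrates several parameters (lapse rate, recession coefficient, degree-day factor, etc.) against daily streamflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degree-day model: melt = ddf * max(T, 0). We generate "observed"
# runoff with ddf_true = 4.0 mm/degC/day plus noise, then recover ddf by
# random-walk Metropolis-Hastings.
T = rng.uniform(-5, 15, 200)                 # daily air temperature, degC
ddf_true = 4.0
obs = ddf_true * np.maximum(T, 0) + rng.normal(0, 2.0, T.size)

def log_post(ddf, sigma=2.0):
    if ddf <= 0:
        return -np.inf                        # flat prior on ddf > 0
    resid = obs - ddf * np.maximum(T, 0)
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, ddf = [], 1.0
for _ in range(5000):
    prop = ddf + rng.normal(0, 0.2)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(ddf):
        ddf = prop
    samples.append(ddf)

post = np.array(samples[1000:])               # discard burn-in
print(round(post.mean(), 1))
```

The posterior sample both recovers the parameter and quantifies its uncertainty (its spread), which is the point of preferring MCMC calibration over a single best-fit value.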

  8. Stochastic seismic tomography by interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Bottero, Alexis; Gesret, Alexandrine; Romary, Thomas; Noble, Mark; Maisons, Christophe

    2016-10-01

Markov chain Monte Carlo sampling methods are widely used for non-linear Bayesian inversion where no analytical expression for the forward relation between data and model parameters is available. Contrary to the linear(ized) approaches, they naturally allow one to evaluate the uncertainties on the model found. Nevertheless, their use is problematic in high-dimensional model spaces, especially when the computational cost of the forward problem is significant and/or the a posteriori distribution is multimodal. In this case, the chain can stay stuck in one of the modes and hence not provide an exhaustive sampling of the distribution of interest. We present here a still relatively unknown algorithm that allows interaction between several Markov chains at different temperatures. These interactions (based on importance resampling) ensure a robust sampling of any posterior distribution and thus provide a way to efficiently tackle complex fully non-linear inverse problems. The algorithm is easy to implement and is well adapted to run on parallel supercomputers. In this paper, the algorithm is first introduced and applied to a synthetic multimodal distribution in order to demonstrate its robustness and efficiency compared to a simulated annealing method. It is then applied in the framework of first arrival traveltime seismic tomography on real data recorded in the context of hydraulic fracturing. To carry out this study a wavelet-based adaptive model parametrization has been used. This makes it possible to integrate the a priori information provided by sonic logs and to optimally reduce the dimension of the problem.
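The paper's chains interact through importance resampling; the simpler replica-swap (parallel tempering) variant sketched below illustrates the same principle, namely that coupling chains run at several temperatures lets the cold chain escape local modes of a multimodal target. The bimodal target, temperatures, and step sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Bimodal 1-D target: two unit-variance modes at -5 and +5.
    return np.logaddexp(-0.5 * (x - 5) ** 2, -0.5 * (x + 5) ** 2)

temps = [1.0, 4.0, 16.0]          # chain k samples log_target / temps[k]
x = np.zeros(len(temps))
cold = []
for it in range(20000):
    # One Metropolis step per chain at its own temperature.
    for k, t in enumerate(temps):
        prop = x[k] + rng.normal(0, 1.0)
        if np.log(rng.uniform()) < (log_target(prop) - log_target(x[k])) / t:
            x[k] = prop
    # Propose swapping the states of a random pair of neighbouring chains.
    k = rng.integers(len(temps) - 1)
    delta = (1 / temps[k] - 1 / temps[k + 1]) * \
            (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.uniform()) < delta:
        x[k], x[k + 1] = x[k + 1], x[k]
    cold.append(x[0])

cold = np.array(cold[2000:])
# The cold chain should visit both modes; a single chain at temperature 1
# with the same step size would typically stay stuck in one of them.
print(round((cold > 0).mean(), 2))
```

The hottest chain sees a flattened target and crosses freely between modes; accepted swaps then propagate those crossings down to the cold chain, which is the one whose samples approximate the posterior.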

  9. A multichannel Markov random field approach for automated segmentation of breast cancer tumor in DCE-MRI data using kinetic observation model.

    PubMed

    Ashraf, Ahmed B; Gavenonis, Sara; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina

    2011-01-01

We present a multichannel extension of Markov random fields (MRFs) for incorporating multiple feature streams in the MRF model. We prove that for making inference queries, any multichannel MRF can be reduced to a single channel MRF provided features in different channels are conditionally independent given the hidden variable. Using this result we incorporate kinetic feature maps derived from breast DCE MRI into the observation model of the MRF for tumor segmentation. Our algorithm achieves an ROC AUC of 0.97 for tumor segmentation. We present a comparison against the commonly used approach of fuzzy C-means (FCM) and the more recent method of running FCM on enhancement variance features (FCM-VES). These previous methods give lower AUCs of 0.86 and 0.60, respectively, indicating the superiority of our algorithm. Finally, we investigate the effect of superior segmentation on predicting breast cancer recurrence using kinetic DCE MRI features from the segmented tumor regions. A linear prediction model shows significant prediction improvement when segmenting the tumor using the proposed method, yielding a correlation coefficient r = 0.78 (p < 0.05) to validated cancer recurrence probabilities, compared to 0.63 and 0.45 when using FCM and FCM-VES, respectively.

  10. Challenges in detecting genomic copy number aberrations using next-generation sequencing data and the eXome Hidden Markov Model: a clinical exome-first diagnostic approach

    PubMed Central

    Yamamoto, Toshiyuki; Shimojima, Keiko; Ondo, Yumiko; Imai, Katsumi; Chong, Pin Fee; Kira, Ryutaro; Amemiya, Mitsuhiro; Saito, Akira; Okamoto, Nobuhiko

    2016-01-01

Next-generation sequencing (NGS) is widely used for the detection of disease-causing nucleotide variants. The challenges associated with detecting copy number variants (CNVs) using NGS analysis have been reported previously. Disease-related exome panels such as Illumina TruSight One are more cost-effective than whole-exome sequencing (WES) because of their selective target regions (~21% of the WES). In this study, CNVs were analyzed using data extracted through a disease-related exome panel analysis and the eXome Hidden Markov Model (XHMM). Samples from 61 patients with undiagnosed developmental delays and 52 healthy parents were included in this study. In the preliminary study to validate the constructed XHMM system (microarray-first approach), 34 patients who had previously been analyzed by chromosomal microarray testing were used. Among the five CNVs larger than 200 kb that were considered as non-pathogenic CNVs and were used as positive controls, four CNVs were successfully detected. The system was subsequently used to analyze different samples from 27 patients (NGS-first approach); 2 of these patients were successfully diagnosed as having pathogenic CNVs (an unbalanced translocation der(5)t(5;14) and a 16p11.2 duplication). These diagnoses were re-confirmed by chromosomal microarray testing and/or fluorescence in situ hybridization. The NGS-first approach generated no false-negative or false-positive results for pathogenic CNVs, indicating its high sensitivity and specificity in detecting pathogenic CNVs. The results of this study show the possible clinical utility of pathogenic CNV screening using disease-related exome panel analysis and XHMM. PMID:27579173

  11. Using satellite observations to improve model estimates of CO2 and CH4 flux: a Metropolis Hastings Markov Chain Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    MacBean, Natasha; Disney, Mathias; Lewis, Philip; Ineson, Phil

    2010-05-01

profile as a whole. We present results from an Observing System Simulation Experiment (OSSE) designed to investigate the impact of management and climate change on peatland carbon fluxes, as well as how observations from satellites may be able to constrain modeled carbon fluxes. We use an adapted version of the Carnegie-Ames-Stanford Approach (CASA) model (Potter et al., 1993) that includes a representation of methane dynamics (Potter, 1997). The model formulation is further modified to allow for assimilation of satellite observations of surface soil moisture and land surface temperature. The observations are used to update model estimates using a Metropolis Hastings Markov Chain Monte Carlo (MCMC) approach. We examine the effect of temporal frequency and precision of satellite observations with a view to establishing how, and at what level, such observations would make a significant improvement in model uncertainty. We compare this with the system characteristics of existing and future satellites. We believe this is the first attempt to assimilate surface soil moisture and land surface temperature into an ecosystem model that includes a full representation of CH4 flux. Bubier, J., and T. Moore (1994), An ecological perspective on methane emissions from northern wetlands, TREE, 9, 460-464. Charman, D. (2002), Peatlands and Environmental Change, JohnWiley and Sons, Ltd, England. Gorham, E. (1991), Northern peatlands: Role in the carbon cycle and probable responses to climatic warming, Ecological Applications, 1, 182-195. Lai, D. (2009), Methane dynamics in northern peatlands: A review, Pedosphere, 19, 409-421. Le Mer, J., and P. Roger (2001), Production, oxidation, emission and consumption of methane by soils: A review, European Journal of Soil Biology, 37, 25-50. Limpens, J., F. Berendse, J. Canadell, C. Freeman, J. Holden, N. Roulet, H. Rydin, and Potter, C. (1997), An ecosystem simulation model for methane production and emission from wetlands, Global Biogeochemical

  12. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.

13. Maximum-Likelihood and Markov Chain Monte Carlo Approaches to Estimate Inbreeding and Effective Size from Allele Frequency Changes.

    PubMed Central

    Laval, Guillaume; SanCristobal, Magali; Chevalet, Claude

    2003-01-01

    Maximum-likelihood and Bayesian (MCMC algorithm) estimates of the increase of the Wright-Malécot inbreeding coefficient, F(t), between two temporally spaced samples, were developed from the Dirichlet approximation of allelic frequency distribution (model MD) and from the admixture of the Dirichlet approximation and the probabilities of fixation and loss of alleles (model MDL). Their accuracy was tested using computer simulations in which F(t) = 10% or less. The maximum-likelihood method based on the model MDL was found to be the best estimate of F(t) provided that initial frequencies are known exactly. When founder frequencies are estimated from a limited set of founder animals, only the estimates based on the model MD can be used for the moment. In this case no method was found to be the best in all situations investigated. The likelihood and Bayesian approaches give better results than the classical F-statistics when markers exhibiting a low polymorphism (such as the SNP markers) are used. Concerning the estimations of the effective population size all the new estimates presented here were found to be better than the F-statistics classically used. PMID:12871924

  14. A Hidden Markov Model Approach for Simultaneously Estimating Local Ancestry and Admixture Time Using Next Generation Sequence Data in Samples of Arbitrary Ploidy

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

Admixture, the mixing of genomes from divergent populations, is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which are not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy, i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than what previously has been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry

  15. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.

  16. Exploiting mid-range DNA patterns for sequence classification: binary abstraction Markov models

    PubMed Central

    Shepard, Samuel S.; McSweeny, Andrew; Serpen, Gursel; Fedorov, Alexei

    2012-01-01

    Messenger RNA sequences possess specific nucleotide patterns distinguishing them from non-coding genomic sequences. In this study, we explore the utilization of modified Markov models to analyze sequences up to 44 bp, far beyond the 8-bp limit of conventional Markov models, for exon/intron discrimination. In order to analyze nucleotide sequences of this length, their information content is first reduced by conversion into shorter binary patterns via the application of numerous abstraction schemes. After the conversion of genomic sequences to binary strings, homogeneous Markov models trained on the binary sequences are used to discriminate between exons and introns. We term this approach the Binary Abstraction Markov Model (BAMM). High-quality abstraction schemes for exon/intron discrimination are selected using optimization algorithms on supercomputers. The best MM classifiers are then combined using support vector machines into a single classifier. With this approach, over 95% classification accuracy is achieved without taking reading frame into account. With further development, the BAMM approach can be applied to sequences lacking the genetic code such as ncRNAs and 5′-untranslated regions. PMID:22344692
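As a rough sketch of the binary-abstraction idea (not the paper's actual BAMM implementation): the abstraction scheme below (purines vs. pyrimidines), the context order, and the function names are illustrative assumptions; the study selects its schemes by supercomputer optimization and combines the resulting models with SVMs.

```python
import math
from collections import defaultdict

# Hypothetical abstraction scheme: purine -> '1', pyrimidine -> '0'.
# (The paper searches many such schemes; this one is only illustrative.)
ABSTRACT = {'A': '1', 'G': '1', 'C': '0', 'T': '0'}

def to_binary(seq):
    return ''.join(ABSTRACT[b] for b in seq)

def train_markov(binary_strings, order=5):
    """Estimate P(next bit = 1 | previous `order` bits) with add-one smoothing."""
    counts = defaultdict(lambda: [1, 1])  # context -> [count of '0', count of '1']
    for s in binary_strings:
        for i in range(order, len(s)):
            counts[s[i - order:i]][int(s[i])] += 1
    return {ctx: c[1] / (c[0] + c[1]) for ctx, c in counts.items()}

def log_odds(s, exon_model, intron_model, order=5):
    """Positive score favors the exon model; unseen contexts fall back to 0.5."""
    score = 0.0
    for i in range(order, len(s)):
        ctx, bit = s[i - order:i], int(s[i])
        pe = exon_model.get(ctx, 0.5)
        pi = intron_model.get(ctx, 0.5)
        score += math.log((pe if bit else 1 - pe) / (pi if bit else 1 - pi))
    return score
```

Because a binary alphabet has only 2^k contexts rather than 4^k, much longer contexts become tractable, which is the point of the abstraction step.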

  17. Performability analysis using semi-Markov reward processes

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.

    1990-01-01

    Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.

  18. [Decision analysis in radiology using Markov models].

    PubMed

    Golder, W

    2000-01-01

    Markov models (Multistate transition models) are mathematical tools to simulate a cohort of individuals followed over time to assess the prognosis resulting from different strategies. They are applied on the assumption that persons are in one of a finite number of states of health (Markov states). Each condition is given a transition probability as well as an incremental value. Probabilities may be chosen constant or varying over time due to predefined rules. Time horizon is divided into equal increments (Markov cycles). The model calculates quality-adjusted life expectancy employing real-life units and values and summing up the length of time spent in each health state adjusted for objective outcomes and subjective appraisal. This sort of modeling prognosis for a given patient is analogous to utility in common decision trees. Markov models can be evaluated by matrix algebra, probabilistic cohort simulation and Monte Carlo simulation. They have been applied to assess the relative benefits and risks of a limited number of diagnostic and therapeutic procedures in radiology. More interventions should be submitted to Markov analyses in order to elucidate their cost-effectiveness.
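A minimal sketch of the probabilistic cohort simulation described above, using a hypothetical three-state model (Well, Sick, Dead) with invented transition probabilities and per-cycle utilities:

```python
import numpy as np

# Hypothetical Markov states: 0 = Well, 1 = Sick, 2 = Dead (absorbing).
P = np.array([[0.90, 0.07, 0.03],   # transition probabilities per Markov cycle
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
utility = np.array([1.0, 0.6, 0.0])  # incremental value per cycle spent in each state

def cohort_qale(P, utility, start=0, cycles=100):
    """Quality-adjusted life expectancy: time spent in each health state,
    weighted by its utility, summed over a cohort followed for `cycles` cycles."""
    dist = np.zeros(len(utility))
    dist[start] = 1.0
    total = 0.0
    for _ in range(cycles):
        total += dist @ utility   # reward accrued during this cycle
        dist = dist @ P           # advance the cohort one Markov cycle
    return total
```

Time-varying transition probabilities or Monte Carlo simulation of individual patients fit the same skeleton; only the update of `dist` changes.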

  19. Reports on Text Linguistics: Approaches to Word Order.

    ERIC Educational Resources Information Center

    Enkvist, Nils Erik; Kohonen, Viljo

    This volume contains papers presented in connection with a symposium held in 1975 and sponsored by Abo Akademi, for the purpose of discussing ongoing research in word-order studies. Papers include: (1) a prolegomena by N.E. Enkvist; (2) "On the Ordering of Sister Constituents in Swedish," by E. Andersson; (3) "What is New…

  20. Evaluation of Usability Utilizing Markov Models

    ERIC Educational Resources Information Center

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…

  1. A Matrix Approach for General Higher Order Linear Recurrences

    DTIC Science & Technology

    2011-01-01

    properties of linear recurrences (such as the well-known Fibonacci and Pell sequences). In [2], Er defined k linear recurring sequences of order at...the nth term of the ith generalized order-k Fibonacci sequence. Communicated by Lee See Keong. Received: March 26, 2009; Revised: August 28, 2009...6], the author gave the generalized order-k Fibonacci and Pell (F-P) sequence as follows: For m ≥ 0, n > 0 and 1 ≤ i ≤ k, u^i_n = 2^m u^i_{n-1} + u^i_{n-2}

  2. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
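A small sketch of the expected steady-state reward rate for a hypothetical two-unit repairable system; the generator, rates, and reward vector below are invented for illustration (reward rate 1 in up states, 0 in the down state):

```python
import numpy as np

lam, mu = 0.001, 0.1   # hypothetical per-unit failure and repair rates
# CTMC states: 2 units up, 1 unit up, 0 units up.
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])        # infinitesimal generator
reward = np.array([1.0, 1.0, 0.0])    # up states earn reward rate 1, down state 0

def steady_state_reward(Q, reward):
    """Solve pi Q = 0 with sum(pi) = 1, then return the expected reward rate."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ reward)
```

With this reward vector the quantity is simply steady-state availability; a performance-oriented reward vector would instead weight each state by its computational capacity.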

  3. General and specific consciousness: a first-order representationalist approach.

    PubMed

    Mehta, Neil; Mashour, George A

    2013-01-01

    It is widely acknowledged that a complete theory of consciousness should explain general consciousness (what makes a state conscious at all) and specific consciousness (what gives a conscious state its particular phenomenal quality). We defend first-order representationalism, which argues that consciousness consists of sensory representations directly available to the subject for action selection, belief formation, planning, etc. We provide a neuroscientific framework for this primarily philosophical theory, according to which neural correlates of general consciousness include prefrontal cortex, posterior parietal cortex, and non-specific thalamic nuclei, while neural correlates of specific consciousness include sensory cortex and specific thalamic nuclei. We suggest that recent data support first-order representationalism over biological theory, higher-order representationalism, recurrent processing theory, information integration theory, and global workspace theory.

  4. Reduced-Order Modeling: New Approaches for Computational Physics

    NASA Technical Reports Server (NTRS)

    Beran, Philip S.; Silva, Walter A.

    2001-01-01

    In this paper, we review the development of new reduced-order modeling techniques and discuss their applicability to various problems in computational physics. Emphasis is given to methods based on Volterra series representations and the proper orthogonal decomposition. Results are reported for different nonlinear systems to provide clear examples of the construction and use of reduced-order models, particularly in the multi-disciplinary field of computational aeroelasticity. Unsteady aerodynamic and aeroelastic behaviors of two-dimensional and three-dimensional geometries are described. Large increases in computational efficiency are obtained through the use of reduced-order models, thereby justifying the initial computational expense of constructing these models and motivating their use for multi-disciplinary design analysis.

  5. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  6. An Order Statistics Approach to the Halo Model for Galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.
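The pure order-statistics model in its simplest form can be sketched as follows; the Pareto form of the universal luminosity function, its exponent, and the group sizes are invented for illustration, not taken from the paper:

```python
import math
import random

def group_luminosities(n_gal, rng):
    """Draw n_gal luminosities i.i.d. from a universal p(L) (here a Pareto law,
    a hypothetical stand-in), sorted brightest first."""
    return sorted((rng.paretovariate(2.5) for _ in range(n_gal)), reverse=True)

def mean_central_and_gap(n_gal, n_groups=5000, seed=0):
    """Mean central (brightest) luminosity and mean magnitude gap between the
    two brightest members, under the order-statistics hypothesis."""
    rng = random.Random(seed)
    centrals, gaps = [], []
    for _ in range(n_groups):
        ls = group_luminosities(n_gal, rng)
        centrals.append(ls[0])
        gaps.append(2.5 * math.log10(ls[0] / ls[1]))  # magnitude gap
    return sum(centrals) / n_groups, sum(gaps) / n_groups
```

Since richer haloes host more draws, the expected maximum (the central) grows monotonically with halo occupation, reproducing a mean central luminosity that increases with halo mass without any extra ingredient, and the magnitude gap comes out as a by-product.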

  7. A higher-order-statistics-based approach to face detection

    NASA Astrophysics Data System (ADS)

    Li, Chunming; Li, Yushan; Wu, Ruihong; Li, Qiuming; Zhuang, Qingde; Zhang, Zhan

    2005-02-01

    A face detection method based on higher-order statistics is proposed in this paper. Firstly, the object model and noise model are established to extract the moving object from the background, exploiting the fact that higher-order statistics are insensitive to Gaussian noise. Secondly, the improved Sobel operator is used to extract the edge image of the moving object, and a projection function is used to detect the face in the edge image. Lastly, PCA (Principal Component Analysis) is used for face recognition. The performance of the system is evaluated on real video sequences. It is shown that the proposed method is simple and robust for the detection of human faces in video sequences.

  8. Thermal nanostructure: An order parameter multiscale ensemble approach

    NASA Astrophysics Data System (ADS)

    Cheluvaraja, S.; Ortoleva, P.

    2010-02-01

    Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.

  9. Markov Modeling with Soft Aggregation for Safety and Decision Analysis

    SciTech Connect

    COOPER,J. ARLIN

    1999-09-01

    The methodology in this report improves on some of the limitations of many conventional safety assessment and decision analysis methods. A top-down mathematical approach is developed for decomposing systems and for expressing imprecise individual metrics as possibilistic or fuzzy numbers. A ''Markov-like'' model is developed that facilitates combining (aggregating) inputs into overall metrics and decision aids, also portraying the inherent uncertainty. A major goal of Markov modeling is to help convey the top-down system perspective. One of the constituent methodologies allows metrics to be weighted according to significance of the attribute and aggregated nonlinearly as to contribution. This aggregation is performed using exponential combination of the metrics, since the accumulating effect of such factors responds less and less to additional factors. This is termed ''soft'' mathematical aggregation. Dependence among the contributing factors is accounted for by incorporating subjective metrics on ''overlap'' of the factors as well as by correspondingly reducing the overall contribution of these combinations to the overall aggregation. Decisions corresponding to the meaningfulness of the results are facilitated in several ways. First, the results are compared to a soft threshold provided by a sigmoid function. Second, information is provided on input ''Importance'' and ''Sensitivity,'' in order to know where to place emphasis on considering new controls that may be necessary. Third, trends in inputs and outputs are tracked in order to obtain significant information, including cyclic information, for the decision process. A practical example from the air transportation industry is used to demonstrate application of the methodology. Illustrations are given for developing a structure (along with recommended inputs and weights) for air transportation oversight at three different levels, for developing and using cycle information, for developing Importance and

  10. Alignment-free Transcriptomic and Metatranscriptomic Comparison Using Sequencing Signatures with Variable Length Markov Chains

    PubMed Central

    Liao, Weinan; Ren, Jie; Wang, Kun; Wang, Shun; Zeng, Feng; Wang, Ying; Sun, Fengzhu

    2016-01-01

    The comparison between microbial sequencing data is critical to understand the dynamics of microbial communities. The alignment-based tools analyzing metagenomic datasets require reference sequences and read alignments. The available alignment-free dissimilarity approaches model the background sequences with Fixed Order Markov Chain (FOMC), yielding promising results for the comparison of microbial communities. However, in FOMC, the number of parameters grows exponentially with the increase of the order of Markov Chain (MC). Under a fixed high order of MC, the parameters might not be accurately estimated owing to the limitation of sequencing depth. In our study, we investigate an alternative to FOMC to model background sequences with the data-driven Variable Length Markov Chain (VLMC) in metatranscriptomic data. The VLMC, originally designed for long sequences, was extended to apply to high-throughput sequencing reads, and the strategies to estimate the corresponding parameters were developed. The flexible number of parameters in VLMC avoids estimating the vast number of parameters of high-order MC under limited sequencing depth. Different from the manual selection in FOMC, VLMC determines the MC order adaptively. Several beta diversity measures based on VLMC were applied to compare the bacterial RNA-Seq and metatranscriptomic datasets. Experiments show that VLMC outperforms FOMC to model the background sequences in transcriptomic and metatranscriptomic samples. A software pipeline is available at https://d2vlmc.codeplex.com. PMID:27876823

  11. Inverting OII 83.4 nm dayglow profiles using Markov chain radiative transfer

    NASA Astrophysics Data System (ADS)

    Geddes, George; Douglas, Ewan; Finn, Susanna C.; Cook, Timothy; Chakrabarti, Supriya

    2016-11-01

    Emission profiles of the resonantly scattered OII 83.4 nm triplet can in principle be used to estimate O+ density profiles in the F2 region of the ionosphere. Given the emission source profile, solution of this inverse problem is possible but requires significant computation. The traditional Feautrier solution to the radiative transfer problem requires many iterations to converge, making it time consuming to compute. A Markov chain approach to the problem produces similar results by directly constructing a matrix that maps the source emission rate to an effective emission rate which includes scattering to all orders. The Markov chain approach presented here yields faster results and therefore can be used to perform the O+ density retrieval with higher resolution than would otherwise be possible.
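The idea of a matrix mapping the source emission rate to an effective emission rate including scattering to all orders can be illustrated with a Neumann series on a toy grid; the three-cell scattering matrix and source below are invented numbers, not the OII 83.4 nm kernel:

```python
import numpy as np

# S[i, j]: probability that a photon emitted in cell j is next scattered in
# cell i (sub-stochastic: the remainder escapes or is absorbed). Toy values.
S = np.array([[0.10, 0.05, 0.01],
              [0.05, 0.10, 0.05],
              [0.01, 0.05, 0.10]])
source = np.array([1.0, 2.0, 0.5])   # initial (unscattered) emission rate

# Summing scattering to all orders,
#   effective = source + S @ source + S @ S @ source + ...
# converges to (I - S)^{-1} @ source: one linear solve instead of iterating.
effective = np.linalg.solve(np.eye(3) - S, source)
```

This is the source of the speed-up: the all-orders operator is built once and applied as a single matrix, rather than iterating a Feautrier sweep to convergence for every profile.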

  12. An overview of Markov chain methods for the study of stage-sequential developmental processes.

    PubMed

    Kaplan, David

    2008-03-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model. A special case of the mixture latent Markov model, the so-called mover-stayer model, is used in this study. Unconditional and conditional models are estimated for the manifest Markov model and the latent Markov model, where the conditional models include a measure of poverty status. Issues of model specification, estimation, and testing using the Mplus software environment are briefly discussed, and the Mplus input syntax is provided. The author applies these 4 methods to a single example of stage-sequential development in reading competency in the early school years, using data from the Early Childhood Longitudinal Study--Kindergarten Cohort.
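The simplest member of that family, the manifest Markov model, can be sketched directly; the stage sequences below are toy data, not the ECLS-K measurements:

```python
import numpy as np

# Toy observed stage sequences (e.g., 3 reading-competency stages over 4 waves).
sequences = [[0, 0, 1, 2], [0, 1, 1, 2], [0, 0, 0, 1], [1, 1, 2, 2]]

def manifest_markov(sequences, n_states=3):
    """Maximum-likelihood transition matrix of a manifest (fully observed)
    Markov model: count transitions, then normalize each row. Assumes every
    state appears at least once as a source."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```

The latent variants replace the observed states with latent ones measured with error, so the counting step becomes an EM-style expectation, but the estimated object is still this row-stochastic matrix.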

  13. Using Markov state models to study self-assembly

    PubMed Central

    Perkett, Matthew R.; Hagan, Michael F.

    2014-01-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude. PMID:24907984

  14. Cover estimation and payload location using Markov random fields

    NASA Astrophysics Data System (ADS)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

  15. Stochastic motif extraction using hidden Markov model

    SciTech Connect

    Fujiwara, Yukiko; Asogawa, Minoru; Konagaya, Akihiko

    1994-12-31

    In this paper, we study the application of an HMM (hidden Markov model) to the problem of representing protein sequences by a stochastic motif. A stochastic protein motif represents the small segments of protein sequences that have a certain function or structure. The stochastic motif, represented by an HMM, has conditional probabilities to deal with the stochastic nature of the motif. This HMM directly reflects the characteristics of the motif, such as a protein periodical structure or grouping. In order to obtain the optimal HMM, we developed the "iterative duplication method" for HMM topology learning. It starts from a small fully-connected network and iterates the network generation and parameter optimization until it achieves sufficient discrimination accuracy. Using this method, we obtained an HMM for a leucine zipper motif. Compared to a symbolic pattern representation, which had an accuracy of 14.8 percent, the HMM achieved 79.3 percent in prediction. Additionally, the method can obtain an HMM for various types of zinc finger motifs, and it might separate the mixed data. We demonstrated that this approach is applicable to the validation of protein databases; a constructed HMM has indicated that one protein sequence annotated as "leucine-zipper like sequence" in the database is quite different from other leucine-zipper sequences in terms of likelihood, and we found this discrimination to be plausible.

  16. Musical Markov Chains

    NASA Astrophysics Data System (ADS)

    Volchenkov, Dima; Dawin, Jean René

    A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on the compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
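A toy version of the encoding and the entropy computation (the note sequence below is invented; the study used MIDI encodings of 804 real pieces):

```python
import math
from collections import Counter

notes = list("CDECDECEGCEG" * 10)   # stand-in for a MIDI-encoded piece

def transition_matrix(seq):
    """First-order transition probabilities P(b | a) estimated from a sequence."""
    pairs = Counter(zip(seq, seq[1:]))
    outgoing = Counter(a for a, _ in pairs.elements())
    return {(a, b): n / outgoing[a] for (a, b), n in pairs.items()}

def entropy_rate(seq):
    """Plug-in entropy rate (bits per note) of the fitted first-order chain."""
    P = transition_matrix(seq)
    freq = Counter(seq[:-1])
    n = len(seq) - 1
    return -sum(freq[a] / n * p * math.log2(p) for (a, b), p in P.items())
```

First passage times can be estimated from the same matrix by simulating walks from one note and recording when a target note is first reached.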

  17. A Latent Markov Modelling Approach to the Evaluation of Circulating Cathodic Antigen Strips for Schistosomiasis Diagnosis Pre- and Post-Praziquantel Treatment in Uganda

    PubMed Central

    Koukounari, Artemis; Donnelly, Christl A.; Moustaki, Irini; Tukahebwa, Edridah M.; Kabatereine, Narcis B.; Wilson, Shona; Webster, Joanne P.; Deelder, André M.; Vennervald, Birgitte J.; van Dam, Govert J.

    2013-01-01

    Regular treatment with praziquantel (PZQ) is the strategy for human schistosomiasis control, aiming to prevent morbidity in later life. With the recent resolution on schistosomiasis elimination by the 65th World Health Assembly, appropriate diagnostic tools to inform interventions are key to their success. We present a discrete Markov chains modelling framework that deals with the longitudinal study design and the measurement error in the diagnostic methods under study. A longitudinal detailed dataset from Uganda, in which one or two doses of PZQ treatment were provided, was analyzed through Latent Markov Models (LMMs). The aim was to evaluate the diagnostic accuracy of Circulating Cathodic Antigen (CCA) and of double Kato-Katz (KK) faecal slides over three consecutive days for Schistosoma mansoni infection simultaneously by age group at baseline and at two follow-up times post treatment. Diagnostic test sensitivities and specificities and the true underlying infection prevalence over time as well as the probabilities of transitions between infected and uninfected states are provided. The estimated transition probability matrices provide parsimonious yet important insights into the re-infection and cure rates in the two age groups. We show that the CCA diagnostic performance remained constant after PZQ treatment and that this test was overall more sensitive but less specific than single-day double KK for the diagnosis of S. mansoni infection. The probability of clearing infection from baseline to 9 weeks was higher among those who received two PZQ doses compared to one PZQ dose for both age groups, with much higher re-infection rates among children compared to adolescents and adults. We recommend LMMs as a useful methodology for monitoring and evaluation and treatment decision research as well as CCA for mapping surveys of S. mansoni infection, although additional diagnostic tools should be incorporated in schistosomiasis elimination programs. PMID:24367250

  18. On Measures Driven by Markov Chains

    NASA Astrophysics Data System (ADS)

    Heurteaux, Yanick; Stos, Andrzej

    2014-12-01

    We study measures driven by a finite Markov chain which generalize the famous Bernoulli products. We propose a hands-on approach to determine the structure function and to prove that the multifractal formalism is satisfied. Formulas for the dimension of the measures and for the Hausdorff dimension of their supports are also provided. Finally, we identify the measures with maximal dimension.

  19. Renormalization group calculations for wetting transitions of infinite order and continuously varying order: local interface Hamiltonian approach.

    PubMed

    Indekeu, J O; Koga, K; Hooyberghs, H; Parry, A O

    2013-08-01

    We study the effect of thermal fluctuations on the wetting phase transitions of infinite order and of continuously varying order, recently discovered within a mean-field density-functional model for three-phase equilibria in systems with short-range forces and a two-component order parameter. Using linear functional renormalization group calculations within a local interface Hamiltonian approach, we show that the infinite-order transitions are robust. The exponential singularity (implying 2 − α_s = ∞) of the surface free energy excess at infinite-order wetting as well as the precise algebraic divergence (with β_s = −1) of the wetting layer thickness are not modified as long as ω < 2, with ω the dimensionless wetting parameter that measures the strength of thermal fluctuations. The interface width diverges algebraically and universally (with ν_⊥ = 1/2). In contrast, the nonuniversal critical wetting transitions of finite but continuously varying order are modified when thermal fluctuations are taken into account, in line with predictions from earlier calculations on similar models displaying weak, intermediate, and strong fluctuation regimes.

  20. A non-homogeneous Markov model for phased-mission reliability analysis

    NASA Technical Reports Server (NTRS)

    Smotherman, Mark; Zemoudeh, Kay

    1989-01-01

    Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.

  1. The spatiotemporal master equation: Approximation of reaction-diffusion dynamics via Markov state modeling

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2016-12-01

    Accurate modeling and numerical simulation of reaction kinetics is a topic of steady interest. We consider the spatiotemporal chemical master equation (ST-CME) as a model for stochastic reaction-diffusion systems that exhibit properties of metastability. The space of motion is decomposed into metastable compartments, and diffusive motion is approximated by jumps between these compartments. Treating these jumps as first-order reactions, simulation of the resulting stochastic system is possible by the Gillespie method. We present the theory of Markov state models as a theoretical foundation of this intuitive approach. By means of Markov state modeling, both the number and shape of compartments and the transition rates between them can be determined. We consider the ST-CME for two reaction-diffusion systems and compare it to more detailed models. Moreover, a rigorous formal justification of the ST-CME by Galerkin projection methods is presented.
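Treating compartment jumps as first-order reactions makes the whole system amenable to the standard Gillespie algorithm, as in this minimal sketch (two metastable compartments, one degrading species; all rates and counts are invented):

```python
import random

def gillespie_stcme(n0=(50, 0), k_jump=1.0, k_deg=0.1, t_end=20.0, seed=0):
    """Gillespie simulation of an ST-CME-style system: a single species jumping
    between two metastable compartments (first-order jumps at rate k_jump per
    molecule) and degrading at rate k_deg in either compartment."""
    rng = random.Random(seed)
    n = list(n0)
    t = 0.0
    while True:
        # Propensities: jump 0->1, jump 1->0, degradation in compartments 0, 1.
        a = [k_jump * n[0], k_jump * n[1], k_deg * n[0], k_deg * n[1]]
        a0 = sum(a)
        if a0 == 0:
            break
        t += rng.expovariate(a0)          # exponential waiting time
        if t > t_end:
            break
        r = rng.random() * a0             # pick a reaction proportionally
        chosen = None
        cum = 0.0
        for idx, ai in enumerate(a):
            cum += ai
            if r < cum:
                chosen = idx
                break
        if chosen is None:                # guard against float edge cases
            chosen = max(i for i, ai in enumerate(a) if ai > 0)
        if chosen == 0:
            n[0] -= 1; n[1] += 1
        elif chosen == 1:
            n[1] -= 1; n[0] += 1
        elif chosen == 2:
            n[0] -= 1
        else:
            n[1] -= 1
    return n
```

In the full ST-CME the jump rates between compartments are not guessed but computed from the Markov state model, which also dictates how many compartments to use and where their boundaries lie.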

  2. A path-independent method for barrier option pricing in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Rashidi Ranjbar, Hedieh; Seifi, Abbas

    2015-12-01

    This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
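A crude Monte-Carlo benchmark of the kind the authors compare against can be sketched as follows. The two-regime volatilities, switching rates, and all other parameters are invented for illustration; this is the slow baseline, not the paper's path-independent method.

```python
import math
import random

def dao_call_mc(s0, strike, barrier, r, sigmas, q, t, n_steps, n_paths, rng):
    """Down-and-out call price under two-state regime-switching volatility."""
    dt = t / n_steps
    payoffs = []
    for _ in range(n_paths):
        s, regime, alive = s0, 0, True
        for _ in range(n_steps):
            if rng.random() < q[regime] * dt:      # Markov regime switch
                regime = 1 - regime
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigmas[regime] ** 2) * dt
                          + sigmas[regime] * math.sqrt(dt) * z)
            if s <= barrier:                       # knocked out
                alive = False
                break
        payoffs.append(max(s - strike, 0.0) if alive else 0.0)
    return math.exp(-r * t) * sum(payoffs) / n_paths

rng = random.Random(5)
price = dao_call_mc(100.0, 100.0, 80.0, 0.03, (0.15, 0.35), (0.5, 0.5),
                    1.0, 200, 2000, rng)
```

Discrete monitoring at the simulation grid slightly overprices the continuously monitored option, which is one reason analytic or semi-analytic methods such as the one proposed are preferable.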

  3. Estimating Neuronal Ageing with Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Pham, Tuan D.

    2011-06-01

Neuronal degeneration is widely observed in normal ageing, while neurodegenerative diseases such as Alzheimer's disease cause neuronal degeneration in an accelerated manner, which can be regarded as faster ageing. Early intervention could benefit subjects with the potential for a positive clinical outcome; this requires early detection of disease-related alterations in brain structure. In this paper, we propose a computational approach for modelling MRI-based structural alteration with ageing using a hidden Markov model. The proposed hidden-Markov-model-based brain structural model encodes intracortical tissue/fluid distribution using the discrete wavelet transform and vector quantization. Further, it captures gray matter volume loss, and is thus capable of reflecting subtle intracortical changes with ageing. Experiments were carried out on healthy subjects to validate its accuracy and robustness. The results show that the model can predict brain age with a prediction error of 1.98 years without training data, which is better than other age-prediction methods.

  4. Random frog: an efficient reversible jump Markov Chain Monte Carlo-like approach for variable selection with applications to gene selection and disease classification.

    PubMed

    Li, Hong-Dong; Xu, Qing-Song; Liang, Yi-Zeng

    2012-08-31

The identification of disease-relevant genes represents a challenge in microarray-based disease diagnosis, where the sample size is often limited. Among established methods, reversible jump Markov Chain Monte Carlo (RJMCMC) methods have proven to be quite promising for variable selection. However, the design and application of an RJMCMC algorithm requires, for example, special criteria for prior distributions. Moreover, simulation from the joint posterior distribution of models is computationally expensive, and may even be mathematically intractable. These disadvantages may limit the applications of RJMCMC algorithms. Therefore, algorithms are needed that retain the advantages of RJMCMC methods while being efficient and easy to apply for selecting disease-associated genes. Here we report an RJMCMC-like method, called random frog, that possesses the advantages of RJMCMC methods and is much easier to implement. Using the colon and the estrogen gene expression datasets, we show that random frog is effective in identifying discriminating genes. The top two ranked genes for colon and estrogen are Z50753, U00968, and Y10871_at, Z22536_at, respectively. (The source codes with GNU General Public License Version 2.0 are freely available to non-commercial users at: http://code.google.com/p/randomfrog/.)

  5. Constructing Dynamic Event Trees from Markov Models

    SciTech Connect

    Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood

    2006-05-01

In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: a) partitioning the process variable state space into magnitude intervals (cells), b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature, with one process variable (liquid level in a tank) and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank.
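The scenario-tracing step can be illustrated with a toy transition matrix. The three states and their probabilities below are made up, not drawn from the benchmark level-control system; "F" plays the role of an absorbing failure state.

```python
P = {
    "OK":       {"OK": 0.90, "Degraded": 0.08, "F": 0.02},
    "Degraded": {"Degraded": 0.70, "OK": 0.10, "F": 0.20},
    "F":        {"F": 1.0},
}

def failure_paths(P, start, max_depth):
    """Depth-limited enumeration of paths that end in failure state F."""
    scenarios = []
    def walk(state, path, prob, depth):
        if state == "F":
            scenarios.append((path, prob))   # record a failure scenario
            return
        if depth == max_depth:
            return
        for nxt, p in P[state].items():
            walk(nxt, path + [nxt], prob * p, depth + 1)
    walk(start, [start], 1.0, 0)
    return scenarios

paths = failure_paths(P, "OK", 3)
total = sum(p for _, p in paths)   # probability of failing within 3 steps
```

Each recorded path corresponds to one branch of a dynamic event tree, with its probability attached; deeper trees simply use a larger `max_depth`.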

  6. A Markov Model for Assessing the Reliability of a Digital Feedwater Control System

    SciTech Connect

    Chu,T.L.; Yue, M.; Martinez-Guridi, G.; Lehner, J.

    2009-02-11

A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether the system fails or not, is determined by the status of the components, which can be characterized by component failure modes. Starting from the system state with no component failures, the possible transitions out of it are the failure modes of all components in the system. Each additional component failure mode defines a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences), because different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach, and the development of the supporting automated FMEA tool are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.

  7. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    SciTech Connect

    Vugrin, Eric D.; McKenna, Sean Andrew; Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov-model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations are zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
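The ensemble Kalman filter analysis step that the rate-estimation model relies on can be sketched for a one-dimensional rate with a stand-in linear observation operator. All numbers here are illustrative; the actual application assimilates breakthrough-curve concentrations through a transport model.

```python
import numpy as np

def enkf_update(ensemble, y_obs, h, obs_var, rng):
    """One EnKF analysis step for a scalar state (e.g., a sorption rate)."""
    predicted = np.array([h(x) for x in ensemble])   # forecast observations
    x_mean, y_mean = ensemble.mean(), predicted.mean()
    cov_xy = np.mean((ensemble - x_mean) * (predicted - y_mean))
    var_y = np.var(predicted) + obs_var
    gain = cov_xy / var_y                            # Kalman gain
    # perturbed-observation variant: each member gets its own noisy copy
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed - predicted)

rng = np.random.default_rng(1)
true_rate = 0.5
h = lambda k: 2.0 * k                  # toy linear observation operator
ensemble = rng.normal(1.0, 0.3, 50)    # deliberately biased initial ensemble
for _ in range(20):
    y = h(true_rate) + rng.normal(0.0, 0.01)
    ensemble = enkf_update(ensemble, y, h, 0.01**2, rng)
```

Starting from a biased ensemble mean of 1.0, repeated assimilation pulls the ensemble toward the true rate of 0.5, mirroring the paper's finding that the EnKF improves accuracy when the initial perturbations are biased.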

  8. Long-range memory and non-Markov statistical effects in human sensorimotor coordination

    NASA Astrophysics Data System (ADS)

    M. Yulmetyev, Renat; Emelyanova, Natalya; Hänggi, Peter; Gafarov, Fail; Prokhorov, Alexander

    2002-12-01

    In this paper, the non-Markov statistical processes and long-range memory effects in human sensorimotor coordination are investigated. The theoretical basis of this study is the statistical theory of non-stationary discrete non-Markov processes in complex systems (Phys. Rev. E 62, 6178 (2000)). The human sensorimotor coordination was experimentally studied by means of standard dynamical tapping test on the group of 32 young peoples with tap numbers up to 400. This test was carried out separately for the right and the left hand according to the degree of domination of each brain hemisphere. The numerical analysis of the experimental results was made with the help of power spectra of the initial time correlation function, the memory functions of low orders and the first three points of the statistical spectrum of non-Markovity parameter. Our observations demonstrate, that with the regard to results of the standard dynamic tapping-test it is possible to divide all examinees into five different dynamic types. We have introduced the conflict coefficient to estimate quantitatively the order-disorder effects underlying life systems. The last one reflects the existence of disbalance between the nervous and the motor human coordination. The suggested classification of the neurophysiological activity represents the dynamic generalization of the well-known neuropsychological types and provides the new approach in a modern neuropsychology.

  9. A reduced-rank approach for implementing higher-order Volterra filters

    NASA Astrophysics Data System (ADS)

    O. Batista, Eduardo L.; Seara, Rui

    2016-12-01

    The use of Volterra filters in practical applications is often limited by their high computational burden. To cope with this problem, many strategies for implementing Volterra filters with reduced complexity have been proposed in the open literature. Some of these strategies are based on reduced-rank approaches obtained by defining a matrix of filter coefficients and applying the singular value decomposition to such a matrix. Then, discarding the smaller singular values, effective reduced-complexity Volterra implementations can be obtained. The application of this type of approach to higher-order Volterra filters (considering orders greater than 2) is however not straightforward, which is especially due to some difficulties encountered in the definition of higher-order coefficient matrices. In this context, the present paper is devoted to the development of a novel reduced-rank approach for implementing higher-order Volterra filters. Such an approach is based on a new form of Volterra kernel implementation that allows decomposing higher-order kernels into structures composed only of second-order kernels. Then, applying the singular value decomposition to the coefficient matrices of these second-order kernels, effective implementations for higher-order Volterra filters can be obtained. Simulation results are presented aiming to assess the effectiveness of the proposed approach.

  10. Markov chain Monte Carlo without likelihoods.

    PubMed

    Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon

    2003-12-23

    Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
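The likelihood-free idea can be sketched for a toy problem: inferring a normal mean by accepting a proposed parameter only when data simulated under it fall within a tolerance of the observed summary statistic. The flat prior, symmetric proposal, tolerance, and initialisation at an accepted point are illustrative choices, not the paper's population-genetics application.

```python
import random

def abc_mcmc(x_obs_mean, n_obs, n_iter, eps, rng):
    """Likelihood-free MCMC for the mean of a unit-variance normal."""
    theta = x_obs_mean            # initialise at an accepted point
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.5)          # symmetric proposal
        # simulate a data set under the proposed parameter
        sim_mean = sum(rng.gauss(prop, 1.0) for _ in range(n_obs)) / n_obs
        # flat prior + symmetric proposal: accept iff simulation is close
        if abs(sim_mean - x_obs_mean) <= eps:
            theta = prop
        chain.append(theta)
    return chain

rng = random.Random(2)
chain = abc_mcmc(x_obs_mean=3.0, n_obs=50, n_iter=3000, eps=0.5, rng=rng)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

The chain samples an approximation to the posterior whose quality is controlled by the tolerance `eps`; shrinking `eps` sharpens the approximation at the cost of a lower acceptance rate.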

  11. A reward semi-Markov process with memory for wind speed modeling

    NASA Astrophysics Data System (ADS)

    Petroni, F.; D'Amico, G.; Prattico, F.

    2012-04-01

Markov chains with different numbers of states, and the Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Approaching this issue, we applied new models which are generalizations of Markov models. More precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is the study of the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. To this end we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic time series for wind speed by means of Monte Carlo simulations, and a backtesting procedure is used to compare results on first- and second-order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
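The simplest member of the model family surveyed above, a first-order Markov chain on discretized wind-speed states, can be sketched as follows. The three states and the toy observed record are invented; a real application would discretize a measured wind-speed series into speed bins.

```python
import random

def fit_transition_matrix(states, n_states):
    """Estimate transition probabilities by counting observed transitions."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

def generate(P, start, n, rng):
    """Generate a synthetic state sequence of length n from matrix P."""
    s, out = start, [start]
    for _ in range(n - 1):
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[s]):
            acc += p
            if r < acc:
                s = j
                break
        out.append(s)
    return out

observed = [0, 1, 2, 2, 1, 0, 0, 1, 2, 1] * 50   # toy discretized record
P = fit_transition_matrix(observed, 3)
rng = random.Random(3)
synthetic = generate(P, observed[0], 200, rng)
```

Semi-Markov models generalize this sketch by letting the holding time in each state follow an arbitrary (non-geometric) distribution, which is what allows them to better reproduce the memory of real wind-speed records.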

  12. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. Finally, we introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.

  13. Semi-Markov Graph Dynamics

    PubMed Central

    Raberto, Marco; Rapallo, Fabio; Scalas, Enrico

    2011-01-01

    In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245
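The subordination construction can be sketched directly: a Markov chain whose transitions fire at the epochs of a renewal process with non-exponential (here heavy-tailed, Pareto) waiting times. The two-state chain below stands in for the space of graphs; both ingredients are illustrative.

```python
import random

def semi_markov_path(P, start, t_end, waiting_time, rng):
    """Return [(epoch, state)]: chain transitions fire at renewal epochs."""
    t, s = 0.0, start
    path = [(0.0, s)]
    while True:
        t += waiting_time(rng)          # inter-epoch time from the renewal process
        if t > t_end:
            return path
        r, acc = rng.random(), 0.0      # one step of the embedded Markov chain
        for j, p in enumerate(P[s]):
            acc += p
            if r < acc:
                s = j
                break
        path.append((t, s))

P = [[0.2, 0.8], [0.5, 0.5]]                       # toy chain on two "graphs"
pareto_wait = lambda rng: rng.paretovariate(1.5)   # heavy-tailed waits
rng = random.Random(4)
path = semi_markov_path(P, 0, 100.0, pareto_wait, rng)
```

Because the waiting times are not exponential, the resulting process is semi-Markov rather than Markov: the probability of the next transition depends on how long the system has already been in its current state.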

  14. Hidden Markov Model Analysis of Multichromophore Photobleaching

    PubMed Central

    Messina, Troy C.; Kim, Hiyun; Giurleo, Jason T.; Talaga, David S.

    2007-01-01

    The interpretation of single-molecule measurements is greatly complicated by the presence of multiple fluorescent labels. However, many molecular systems of interest consist of multiple interacting components. We investigate this issue using multiply labeled dextran polymers that we intentionally photobleach to the background on a single-molecule basis. Hidden Markov models allow for unsupervised analysis of the data to determine the number of fluorescent subunits involved in the fluorescence intermittency of the 6-carboxy-tetramethylrhodamine labels by counting the discrete steps in fluorescence intensity. The Bayes information criterion allows us to distinguish between hidden Markov models that differ by the number of states, that is, the number of fluorescent molecules. We determine information-theoretical limits and show via Monte Carlo simulations that the hidden Markov model analysis approaches these theoretical limits. This technique has resolving power of one fluorescing unit up to as many as 30 fluorescent dyes with the appropriate choice of dye and adequate detection capability. We discuss the general utility of this method for determining aggregation-state distributions as could appear in many biologically important systems and its adaptability to general photometric experiments. PMID:16913765

  15. Phase transitions in Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Bechhoefer, John; Lathouwers, Emma

In Hidden Markov Models (HMMs), a Markov process is not directly accessible. In the simplest case, a two-state Markov model "emits" one of two "symbols" at each time step. We can think of these symbols as noisy measurements of the underlying state. With some probability, the symbol implies that the system is in one state when it is actually in the other. The ability to judge which state the system is in sets the efficiency of a Maxwell demon that observes state fluctuations in order to extract heat from a coupled reservoir. The state-inference problem is to infer the underlying state from such noisy measurements at each time step. We show that there can be a phase transition in such measurements: for measurement error rates below a certain threshold, the inferred state always matches the observation. For higher error rates, there can be continuous or discontinuous transitions to situations where keeping a memory of past observations improves the state estimate. We can partly understand this behavior by mapping the HMM onto a 1D random-field Ising model at zero temperature. We also present more recent work that explores a larger parameter space and more states. Research funded by NSERC, Canada.
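The state-inference problem for such a symmetric two-state HMM can be sketched with the standard forward (filtering) recursion; the flip rate `a` and symbol error rate `b` below are illustrative.

```python
def forward_filter(symbols, a, b):
    """Return filtered P(state=1) after each noisy symbol.

    a: probability the hidden state flips per time step
    b: probability the emitted symbol misreports the state
    """
    p1 = 0.5                              # uniform initial belief
    beliefs = []
    for y in symbols:
        p1 = p1 * (1 - a) + (1 - p1) * a          # predict: flip w.p. a
        like1 = (1 - b) if y == 1 else b          # emission likelihoods
        like0 = b if y == 1 else (1 - b)
        p1 = like1 * p1 / (like1 * p1 + like0 * (1 - p1))  # Bayes update
        beliefs.append(p1)
    return beliefs

beliefs = forward_filter([1, 1, 1, 0, 1], a=0.1, b=0.3)
```

At this error rate, memory matters: after three consecutive 1-symbols the belief in state 1 is strong enough that a single contrary 0-symbol leaves it above one half, i.e. the inferred state no longer simply matches the latest observation.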

  16. On Markov modelling of near-wall turbulent shear flow

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    1999-11-01

The role of Reynolds number in determining particle trajectories in near-wall turbulent shear flow is investigated in numerical simulations using a second-order Lagrangian stochastic (LS) model (Reynolds, A.M. 1999: A second-order Lagrangian stochastic model for particle trajectories in inhomogeneous turbulence. Quart. J. Roy. Meteorol. Soc., in press). In such models, it is the acceleration, velocity and position of a particle, rather than just its velocity and position, which are assumed to evolve jointly as a continuous Markov process. It is found that Reynolds number effects are significant in determining simulated particle trajectories in the viscous sub-layer and the buffer zone. These effects are due almost entirely to the change in the Lagrangian integral timescale and are shown to be well represented in a first-order LS model by Sawford's correction (Sawford, B.L. 1991: Reynolds number effects in Lagrangian stochastic models of turbulent dispersion. Phys. Fluids 3, 1577-1586). This is found to remain true even when the Taylor-Reynolds number R_λ ~ O(0.1). This is somewhat surprising because the assumption of a Markovian evolution for velocity and position is strictly applicable only in the large Reynolds number limit, since then the Lagrangian acceleration autocorrelation function approaches a delta function at the origin, corresponding to an uncorrelated component in the acceleration and hence a Markov process (Borgas, M.S. and Sawford, B.L. 1991: The small-scale structure of acceleration correlations and its role in the statistical theory of turbulent dispersion. J. Fluid Mech. 288, 295-320).

  17. Bayesian restoration of ion channel records using hidden Markov models.

    PubMed

    Rosales, R; Stark, J A; Fitzgerald, W J; Hladky, S B

    2001-03-01

    Hidden Markov models have been used to restore recorded signals of single ion channels buried in background noise. Parameter estimation and signal restoration are usually carried out through likelihood maximization by using variants of the Baum-Welch forward-backward procedures. This paper presents an alternative approach for dealing with this inferential task. The inferences are made by using a combination of the framework provided by Bayesian statistics and numerical methods based on Markov chain Monte Carlo stochastic simulation. The reliability of this approach is tested by using synthetic signals of known characteristics. The expectations of the model parameters estimated here are close to those calculated using the Baum-Welch algorithm, but the present methods also yield estimates of their errors. Comparisons of the results of the Bayesian Markov Chain Monte Carlo approach with those obtained by filtering and thresholding demonstrate clearly the superiority of the new methods.

  18. Teaching Higher Order Thinking in the Introductory MIS Course: A Model-Directed Approach

    ERIC Educational Resources Information Center

    Wang, Shouhong; Wang, Hai

    2011-01-01

    One vision of education evolution is to change the modes of thinking of students. Critical thinking, design thinking, and system thinking are higher order thinking paradigms that are specifically pertinent to business education. A model-directed approach to teaching and learning higher order thinking is proposed. An example of application of the…

  19. Higher-order terms in sensitivity analysis through a differential approach

    SciTech Connect

    Dubi, A.; Dudziak, D.J.

    1981-06-01

    A differential approach to sensitivity analysis has been developed that eliminates some difficulties existing in previous work. The new development leads to simple explicit expressions for the first-order perturbation as well as any higher-order terms. The higher-order terms are dependent only on differentials of the transport operator, the unperturbed flux, the adjoint flux, and the unperturbed Green's function of the system.

  20. Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit

    NASA Astrophysics Data System (ADS)

    Izvekov, Sergei

    2017-01-01

    We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013), 10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.

  1. Resilient model approximation for Markov jump time-delay systems via reduced model with hierarchical Markov chains

    NASA Astrophysics Data System (ADS)

    Zhu, Yanzheng; Zhang, Lixian; Sreeram, Victor; Shammakh, Wafa; Ahmad, Bashir

    2016-10-01

In this paper, the resilient model approximation problem for a class of discrete-time Markov jump time-delay systems with input sector-bounded nonlinearities is investigated. A linearised reduced-order model is determined with mode changes subject to domination by a hierarchical Markov chain containing two different nonhomogeneous Markov chains. Hence, the reduced-order model obtained not only reflects the mode dependence of the original system but also models external influences related to its mode changes. Sufficient conditions, formulated in terms of bilinear matrix inequalities, are established for the existence of such models, such that the resulting error system is stochastically stable and has a guaranteed l2-l∞ error performance. A linear-matrix-inequality optimisation coupled with a line search is exploited to solve for the corresponding reduced-order systems. The potential and effectiveness of the developed theoretical results are demonstrated via a numerical example.

  2. Monte Carlo Simulation of Markov, Semi-Markov, and Generalized Semi- Markov Processes in Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    English, Thomas

    2005-01-01

A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.

  3. Iterative approach for zero-order term elimination in off-axis multiplex digital holography

    NASA Astrophysics Data System (ADS)

    Zhao, Dongliang; Xie, Dongzhuo; Yang, Yong; Zhai, Hongchen

    2017-01-01

    An iterative approach is proposed to eliminate the zero-order term from an off-axis multiplexed hologram that contains several sub-holograms. The zero-order components of each sub-hologram are effectively eliminated one by one using the proposed iterative procedure. Because of the reduction of the zero-order components in the frequency domain, enlarged filtering windows can be used to separate each of the +1 order components and improve the signal-to-noise ratio. The proposed method does not require prior knowledge of the object images, and only needs each of the reference wave intensities, which can be acquired before acquisition of the multiplexed hologram. The feasibility of the proposed approach is confirmed through mathematical deductions and numerical simulations, and the robustness of the proposed approach is verified using a practical multiplexed hologram.

  4. Data-driven approach to identify field-scale biogeochemical transitions using geochemical and geophysical data and hidden Markov models: Development and application at a uranium-contaminated aquifer

    NASA Astrophysics Data System (ADS)

    Chen, Jinsong; Hubbard, Susan S.; Williams, Kenneth H.

    2013-10-01

Although mechanistic reaction networks have been developed to quantify the biogeochemical evolution of subsurface systems associated with bioremediation, it is difficult in practice to quantify the onset and distribution of these transitions at the field scale using commonly collected wellbore datasets. As an alternative to the mechanistic methods, we develop a data-driven, statistical model to identify biogeochemical transitions using various time-lapse aqueous geochemical data (e.g., Fe(II), sulfate, sulfide, acetate, and uranium concentrations) and induced polarization (IP) data. We assume that the biogeochemical transitions can be classified as several dominant states that correspond to redox transitions and test the method at a uranium-contaminated site. The relationships between the geophysical observations and geochemical time series vary depending upon the unknown underlying redox status, which is modeled as a hidden Markov random field. We estimate unknown parameters by maximizing the joint likelihood function using the expectation-maximization algorithm. The case study results show that, when considered together, aqueous geochemical data and IP imaginary conductivity provide a key diagnostic signature of biogeochemical stages. The developed method provides useful information for evaluating the effectiveness of bioremediation, such as the probability of being in specific redox stages following biostimulation where desirable pathways (e.g., uranium removal) are more highly favored. The use of geophysical data in the approach advances the possibility of using noninvasive methods to monitor critical biogeochemical system stages and transitions remotely and over field-relevant scales (e.g., from square meters to several hectares).

  5. Generator estimation of Markov jump processes

    NASA Astrophysics Data System (ADS)

    Metzner, P.; Dittmer, E.; Jahnke, T.; Schütte, Ch.

    2007-11-01

Estimating the generator of a continuous-time Markov jump process based on incomplete data is a problem which arises in various applications ranging from machine learning to molecular dynamics. Several methods have been devised for this purpose: a quadratic programming approach (cf. [D.T. Crommelin, E. Vanden-Eijnden, Fitting timeseries by continuous-time Markov chains: a quadratic programming approach, J. Comp. Phys. 217 (2006) 782-805]), a resolvent method (cf. [T. Müller, Modellierung von Proteinevolution, PhD thesis, Heidelberg, 2001]), and various implementations of an expectation-maximization algorithm ([S. Asmussen, O. Nerman, M. Olsson, Fitting phase-type distributions via the EM algorithm, Scand. J. Stat. 23 (1996) 419-441; I. Holmes, G.M. Rubin, An expectation maximization algorithm for training hidden substitution models, J. Mol. Biol. 317 (2002) 753-764; U. Nodelman, C.R. Shelton, D. Koller, Expectation maximization and complex duration distributions for continuous time Bayesian networks, in: Proceedings of the twenty-first conference on uncertainty in AI (UAI), 2005, pp. 421-430; M. Bladt, M. Sørensen, Statistical inference for discretely observed Markov jump processes, J.R. Statist. Soc. B 67 (2005) 395-410]). Some of these methods, however, seem to be known only in a particular research community, and have later been reinvented in a different context. The purpose of this paper is to compile a catalogue of existing approaches, to compare their strengths and weaknesses, and to test their performance in a series of numerical examples. These examples include carefully chosen model problems and an application to a time series from molecular dynamics.
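As a baseline for the incomplete-data estimators catalogued above, the fully observed case has a simple closed-form maximum likelihood answer: each off-diagonal rate is the jump count divided by the holding time, q_ij = N_ij / R_i. A sketch with an assumed toy two-state generator (illustrative values, not from the paper):

```python
import random

# Complete-data baseline for generator estimation: when the whole jump
# path is observed, the MLE of the off-diagonal rates is
# q_ij = (number of i -> j jumps) / (total time spent in i).
# The generator Q below is an assumed toy example.

Q = [[-0.5, 0.5],
     [0.3, -0.3]]

def simulate_ctmc(Q, t_end, x0=0, seed=1):
    """Gillespie-style simulation; returns holding times and jump counts."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    holding = [0.0] * len(Q)
    jumps = [[0] * len(Q) for _ in Q]
    while True:
        rate = -Q[x][x]
        dt = rng.expovariate(rate)
        if t + dt > t_end:
            holding[x] += t_end - t
            return holding, jumps
        holding[x] += dt
        t += dt
        r = rng.uniform(0.0, rate)       # pick next state by off-diagonal rates
        for y in range(len(Q)):
            if y != x:
                r -= Q[x][y]
                if r <= 0.0:
                    jumps[x][y] += 1
                    x = y
                    break

holding, jumps = simulate_ctmc(Q, t_end=10000.0)

n = len(Q)
Q_hat = [[jumps[i][j] / holding[i] if j != i else 0.0 for j in range(n)]
         for i in range(n)]
for i in range(n):
    Q_hat[i][i] = -sum(Q_hat[i])   # rows of a generator sum to zero
```

The EM-type methods compared in the paper effectively reconstruct the expected values of these same sufficient statistics (jump counts and holding times) from incomplete observations.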

  6. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    NASA Astrophysics Data System (ADS)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

As infrasonic waves propagate at long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter searches performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.
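The sampler behind such an inversion can be sketched with a random-walk Metropolis-Hastings chain over a single parameter. The linear "forward model", the wind-speed parameter, and all numbers below are hypothetical stand-ins for the infrasound propagation model and observations:

```python
import math
import random

# Random-walk Metropolis-Hastings sketch of an MCMC inversion.
# forward_model is a toy linear stand-in for the real propagation code.

random.seed(0)

def forward_model(wind_speed):
    return 2.0 * wind_speed                  # hypothetical travel-time model

true_speed = 10.0
data = [forward_model(true_speed) + random.gauss(0, 1.0) for _ in range(20)]

def log_posterior(theta):
    # flat prior on [0, 50], unit-variance Gaussian measurement noise
    if not 0.0 <= theta <= 50.0:
        return -math.inf
    return -0.5 * sum((d - forward_model(theta)) ** 2 for d in data)

theta, lp = 5.0, log_posterior(5.0)
samples = []
for _ in range(20000):
    prop = theta + random.gauss(0, 0.5)      # random-walk proposal
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

burned = samples[5000:]                      # discard burn-in
posterior_mean = sum(burned) / len(burned)
```

The retained samples approximate the full posterior density over the parameter, which is exactly what distinguishes MCMC from a brute-force Monte Carlo parameter search.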

  7. On Markov parameters in system identification

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
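For a discrete-time state-space model x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k), the Markov parameters discussed above are the pulse-response samples h_0 = D and h_k = C A^(k-1) B for k >= 1. A small self-contained illustration with arbitrary toy matrices:

```python
# Markov parameters of a discrete-time state-space model (A, B, C, D):
# h_0 = D, h_k = C A^(k-1) B. The matrices below are arbitrary toy values.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def markov_parameters(A, B, C, D, n):
    params = [D[0][0]]            # h_0 = D (single-input/single-output here)
    AkB = B                       # holds A^(k-1) B
    for _ in range(n):
        params.append(mat_mul(C, AkB)[0][0])
        AkB = mat_mul(A, AkB)
    return params

A = [[0.5, 1.0],
     [0.0, 0.5]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

h = markov_parameters(A, B, C, D, 4)
```

These are exactly the samples of the system's unit-pulse response, which is why sampled response data can be read directly as Markov parameters.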

  8. Alignment of multiple proteins with an ensemble of Hidden Markov Models

    PubMed Central

    Song, Yinglei; Qu, Junfeng; Hura, Gurdeep S.

    2011-01-01

In this paper, we developed a new method that progressively constructs and updates a set of alignments by adding sequences in a certain order to each of the existing alignments. Each of the existing alignments is modelled with a profile Hidden Markov Model (HMM) and an added sequence is aligned to each of these profile HMMs. We introduced an integer parameter for the number of profile HMMs. The profile HMMs are then updated based on the alignments with leading scores. Our experiments on BaliBASE showed that our approach could efficiently explore the alignment space and significantly improve the alignment accuracy. PMID:20376922

  9. Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.

    PubMed

    Stathopoulos, Vassilios; Girolami, Mark A

    2013-02-13

    Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, thus its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient whereas the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.

  10. The Amritsar Massacre: The Origins of the British Approach of Minimal Force on Public Order Operations

    DTIC Science & Technology

    2009-10-04

Sir John Dill, Notes on the Tactical Lessons of the Palestine Rebellion (London, 1936). … experience of internal … nothing new; it has been a requirement of the military to support the national government in the quelling of internal public disorder since the … military approach, the application of minimal force, will constantly be under pressure. The British approach to public order and internal security

  11. Likelihood free inference for Markov processes: a comparison.

    PubMed

    Owen, Jamie; Wilkinson, Darren J; Gillespie, Colin S

    2015-04-01

    Approaches to Bayesian inference for problems with intractable likelihoods have become increasingly important in recent years. Approximate Bayesian computation (ABC) and "likelihood free" Markov chain Monte Carlo techniques are popular methods for tackling inference in these scenarios but such techniques are computationally expensive. In this paper we compare the two approaches to inference, with a particular focus on parameter inference for stochastic kinetic models, widely used in systems biology. Discrete time transition kernels for models of this type are intractable for all but the most trivial systems yet forward simulation is usually straightforward. We discuss the relative merits and drawbacks of each approach whilst considering the computational cost implications and efficiency of these techniques. In order to explore the properties of each approach we examine a range of observation regimes using two example models. We use a Lotka-Volterra predator-prey model to explore the impact of full or partial species observations using various time course observations under the assumption of known and unknown measurement error. Further investigation into the impact of observation error is then made using a Schlögl system, a test case which exhibits bi-modal state stability in some regions of parameter space.
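The simplest "likelihood free" scheme compared in such studies is rejection ABC: draw a parameter from the prior, forward-simulate, and accept the draw when a summary statistic lands within a tolerance of the observed one. A sketch in which a sum of exponential waiting times stands in for a stochastic kinetic simulator (all numbers are illustrative):

```python
import random

# Rejection-ABC sketch: accept prior draws whose simulated summary
# statistic is close to the observed one. The "forward simulator" and
# all numbers are illustrative stand-ins for a stochastic kinetic model.

random.seed(42)

def simulate(rate, n=50):
    # toy forward simulator: total of n exponential waiting times
    return sum(random.expovariate(rate) for _ in range(n))

observed = 25.0        # observed summary; 25.0 is the mean under rate = 2.0

accepted = []
while len(accepted) < 500:
    theta = random.uniform(0.1, 10.0)            # draw from a flat prior
    if abs(simulate(theta) - observed) < 1.0:    # tolerance on the summary
        accepted.append(theta)

abc_mean = sum(accepted) / len(accepted)         # approximate posterior mean
```

The accepted draws approximate the posterior; shrinking the tolerance improves the approximation at the price of the higher computational cost the abstract highlights.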

  12. Tracking Human Pose Using Max-Margin Markov Models.

    PubMed

    Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong

    2015-12-01

We present a new method for tracking human pose by employing max-margin Markov models. Representing a human body by part-based models, such as pictorial structure, the problem of pose tracking can be modeled by a discrete Markov random field. Since max-margin Markov networks provide an efficient way to deal with structured data and offer strong generalization guarantees, it is natural to learn the model parameters using the max-margin technique. Since tracking human pose needs to couple limbs in adjacent frames, the model will introduce loops and will be intractable for learning and inference. Previous work has resorted to pose estimation methods, which discard temporal information by parsing frames individually. Alternatively, approximate inference strategies have been used, which can overfit to statistics of a particular data set. Thus, the performance and generalization of these methods are limited. In this paper, we approximate the full model by introducing an ensemble of two tree-structured sub-models, Markov networks for spatial parsing and Markov chains for temporal parsing. Both models can be trained jointly using the max-margin technique, and an iterative parsing process is proposed to achieve the ensemble inference. We apply our model to three challenging data sets, which contain highly varied and articulated poses. Comprehensive experimental results demonstrate the superior performance of our method over the state-of-the-art approaches.

  13. a Markov-Process Inspired CA Model of Highway Traffic

    NASA Astrophysics Data System (ADS)

    Wang, Fa; Li, Li; Hu, Jian-Ming; Ji, Yan; Ma, Rui; Jiang, Rui

To provide a more accurate description of driving behaviors, especially car-following, a Markov-Gap cellular automaton model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed gap distribution. This new model provides a microscopic simulation explanation for the governing interaction forces (potentials) between the queuing vehicles, which cannot be directly measured in traffic flow applications. The agreement between empirical observations and simulation results suggests the soundness of this new approach.
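The "stationary distribution matched to observed gaps" idea can be sketched by power iteration on a gap-transition matrix. The three gap classes and all transition probabilities below are invented for illustration:

```python
# Stationary distribution of a toy gap-transition Markov chain by power
# iteration. Gap classes and probabilities are illustrative, not fitted.

P = [[0.6, 0.3, 0.1],   # from "short" gap
     [0.2, 0.6, 0.2],   # from "medium" gap
     [0.1, 0.4, 0.5]]   # from "long" gap

p = [1.0, 0.0, 0.0]     # start with all mass on "short"
for _ in range(200):
    p = [sum(p[i] * P[i][j] for i in range(3)) for j in range(3)]

# p now satisfies p == p P (up to rounding): the stationary gap
# distribution that the model would match to empirical gap data
```

In the model described above the logic runs the other way: the transition probabilities are chosen so that this fixed point reproduces the empirically observed gap distribution.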

  14. A semi-Markov model with memory for price changes

    NASA Astrophysics Data System (ADS)

    D'Amico, Guglielmo; Petroni, Filippo

    2011-12-01

We study the high-frequency price dynamics of traded stocks by means of a model of returns using a semi-Markov approach. More precisely, we assume that the intraday returns are described by a discrete time homogeneous semi-Markov model which also depends on a memory index. The index is introduced to take into account periods of high and low volatility in the market. We first derive the equations governing the process and then compare theoretical results with empirical findings from real data. In particular, we analyze high-frequency data from the Italian stock market from 1 January 2007 until the end of December 2010.

  15. Efficient maximum likelihood parameterization of continuous-time Markov processes

    PubMed Central

    McGibbon, Robert T.; Pande, Vijay S.

    2015-01-01

    Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
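The discrete-observation problem such an estimator addresses is easiest to see in the two-state case, where the map from generator to transition matrix can be inverted in closed form. This spectral inversion, sketched below with an assumed toy generator, is shown only to illustrate the discrete-to-continuous relationship; it is not the paper's maximum likelihood estimator:

```python
import math

# Two-state illustration: data observed at interval t give the transition
# matrix P(t) = exp(Q t); for Q = [[-a, a], [b, -b]] this map has a
# closed-form inverse. The rates a = 0.5, b = 0.3 are assumed toy values.

def two_state_P(a, b, t):
    """Exact transition matrix exp(Q t) for Q = [[-a, a], [b, -b]]."""
    s = a + b
    c = (1.0 - math.exp(-s * t)) / s
    return [[1.0 - a * c, a * c],
            [b * c, 1.0 - b * c]]

def two_state_generator(P, t):
    """Invert P(t) back to the rates (a, b); requires trace(P) > 1."""
    lam = P[0][0] + P[1][1] - 1.0     # second eigenvalue, equals exp(-(a+b) t)
    s = -math.log(lam) / t            # recovered total rate a + b
    c = (1.0 - lam) / s
    return P[0][1] / c, P[1][0] / c

P = two_state_P(0.5, 0.3, t=0.7)
a_hat, b_hat = two_state_generator(P, t=0.7)
```

For larger state spaces this inversion can fail or produce invalid (negative-rate) generators, which is part of why likelihood-based estimators with built-in constraints such as detailed balance are attractive.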

  16. Developing Higher Order Reading Comprehension Skills in the Learning Disabled Student: A Non-Basal Approach.

    ERIC Educational Resources Information Center

    Solomon, Sheila

    This practicum study evaluated a non-basal, multidisciplinary, multisensory approach to teaching higher order reading comprehension skills to eight fifth-grade learning-disabled students from low socioeconomic minority group backgrounds. The four comprehension skills were: (1) identifying the main idea; (2) determining cause and effect; (3) making…

  17. [Towards a clinical approach in institutions in order to enable dreams].

    PubMed

    Ponroy, Annabelle

    2013-01-01

Care protocols and their proliferation tend to dampen the enthusiasm of professionals in their daily practice. An institution's clinical approach must be designed in terms of admission in order not to leave madness on the threshold of care. Trusting the enthusiasm and desire of nurses means favouring creativity within practices.

  18. Contemplative Practices and Orders of Consciousness: A Constructive-Developmental Approach

    ERIC Educational Resources Information Center

    Silverstein, Charles H.

    2012-01-01

    This qualitative study explores the correspondence between contemplative practices and "orders of consciousness" from a constructive-developmental perspective, using Robert Kegan's approach. Adult developmental growth is becoming an increasingly important influence on humanity's ability to deal effectively with the growing complexity of…

  19. Quantum hidden Markov models based on transition operation matrices

    NASA Astrophysics Data System (ADS)

    Cholewa, Michał; Gawron, Piotr; Głomb, Przemysław; Kurzyk, Dariusz

    2017-04-01

In this work, we extend the idea of quantum Markov chains (Gudder in J Math Phys 49(7):072105 [3]) in order to propose quantum hidden Markov models (QHMMs). For that, we use the notions of transition operation matrices and vector states, which are an extension of classical stochastic matrices and probability distributions. Our main result is the Mealy QHMM formulation and proofs of the algorithms needed for application of this model: Forward for the general case and Viterbi for a restricted class of QHMMs. We show the relations of the proposed model to other quantum HMM propositions and present an example of application.
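The classical counterpart of the Viterbi algorithm referenced above finds the most probable hidden-state path of an ordinary HMM; the QHMM version generalizes this recursion by replacing stochastic matrices with transition operation matrices. A toy classical sketch (all probabilities are made up):

```python
# Classical Viterbi recursion: most probable hidden-state path of an
# ordinary HMM. States, observations, and probabilities are toy values.

def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, layer = V[-1], {}
        for s in states:
            # best predecessor for state s
            prob, path = max(
                (prev[r][0] * trans_p[r][s], prev[r][1]) for r in states
            )
            layer[s] = (prob * emit_p[s][o], path + [s])
        V.append(layer)
    return max(V[-1].values())     # (probability, best path)

states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}

prob, path = viterbi(["x", "x", "y"], states, start_p, trans_p, emit_p)
```

Here the final "y" observation pulls the last state to B while the earlier "x" observations keep the prefix in A.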

  20. Students' Progress throughout Examination Process as a Markov Chain

    ERIC Educational Resources Information Center

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

The paper is focused on students of Mathematical Methods in Economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…
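A chain of this kind is an absorbing Markov chain: course stages are transient states, and passing or leaving are absorbing. A sketch with hypothetical stages and probabilities (not the CULS data), computing the eventual pass probability by fixed-point iteration:

```python
# Absorbing-chain sketch of student progress: "pass" and "leave" absorb.
# Stage names and transition probabilities are hypothetical.

states = ["stage1", "stage2", "stage3", "pass", "leave"]
P = {
    "stage1": {"stage1": 0.1, "stage2": 0.7, "leave": 0.2},
    "stage2": {"stage2": 0.1, "stage3": 0.8, "leave": 0.1},
    "stage3": {"stage3": 0.1, "pass": 0.8, "leave": 0.1},
    "pass":   {"pass": 1.0},
    "leave":  {"leave": 1.0},
}

# a[s] = probability of eventually reaching "pass" from s; it is the
# fixed point of a = P a with a(pass) = 1 and a(leave) = 0
a = {s: 1.0 if s == "pass" else 0.0 for s in states}
for _ in range(200):
    a = {s: sum(p * a[t] for t, p in P[s].items()) for s in states}
```

The same absorption probabilities follow in closed form from the fundamental matrix, but the iteration keeps the example dependency-free.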

  1. Influence of credit scoring on the dynamics of Markov chain

    NASA Astrophysics Data System (ADS)

    Galina, Timofeeva

    2015-11-01

Markov processes are widely used to model the dynamics of a credit portfolio and forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. It is proposed that the dynamics of the portfolio shares are described by a multistage controlled system. The article outlines a mathematical formalization of the controls, which reflect the actions of the bank's management to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control acting on the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
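The share dynamics can be sketched as p(t+1) = p(t) P, with credit scoring acting as a control on the transition probabilities of the approved loans. The quality groups ("current", "overdue", "default") and all numbers below are illustrative, not from the article:

```python
# Portfolio-share dynamics p(t+1) = p(t) P under two scoring policies.
# Groups and transition probabilities are illustrative.

def evolve(p, P, steps):
    n = len(p)
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return p

# transition matrix under loose approval standards
P_loose = [[0.85, 0.10, 0.05],
           [0.30, 0.50, 0.20],
           [0.05, 0.05, 0.90]]
# stricter scoring: fewer current loans slip into overdue or default
P_strict = [[0.93, 0.05, 0.02],
            [0.30, 0.50, 0.20],
            [0.05, 0.05, 0.90]]

p0 = [0.9, 0.08, 0.02]                    # initial shares of the groups
share_loose = evolve(p0, P_loose, 100)
share_strict = evolve(p0, P_strict, 100)
```

Comparing the long-run shares under the two matrices is the kind of question the controlled-system formulation is built to answer: stricter scoring shrinks the stationary default share.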

  2. A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2009-01-01

    We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.

  3. On a Result for Finite Markov Chains

    ERIC Educational Resources Information Center

    Kulathinal, Sangita; Ghosh, Lagnojita

    2006-01-01

    In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…

  4. The Analysis of Rush Orders Risk in Supply Chain: A Simulation Approach

    NASA Technical Reports Server (NTRS)

    Mahfouz, Amr; Arisha, Amr

    2011-01-01

Satisfying customers by delivering demands at the agreed time, at competitive prices, and at a satisfactory quality level are crucial requirements for supply chain survival. Incidence of risks in the supply chain often causes sudden disruptions in its processes and consequently leads customers to lose trust in a company's competence. Rush orders are considered one of the main types of supply chain risk due to their negative impact on overall performance. Using integrated definition modeling approaches (i.e., IDEF0 and IDEF3) and simulation modeling techniques, a comprehensive integrated model has been developed to assess rush order risks and examine two risk mitigation strategies. Detailed function sequences and object flows were conceptually modeled to reflect the macro and micro levels of the studied supply chain. Discrete event simulation models were then developed to assess and investigate the mitigation strategies for rush order risks, with the objective of minimizing order cycle time and cost.

  5. A second order kinetic approach for modeling solute retention and transport in soils

    NASA Astrophysics Data System (ADS)

    Selim, H. M.; Amacher, M. C.

    1988-12-01

    We present a second-order kinetic approach for the description of solute retention during transport in soils. The basis for this approach is that it accounts for the sites on the soil matrix which are accessible for retention of the reactive solutes in solution. This approach was incorporated with the fully kinetic two-site model where the difference between the characteristics of the two types of sites is based on the rate of kinetic retention reactions. We also assume that the retention mechanisms are site-specific, e.g., the sorbed phase on type 1 sites may be characteristically different in their energy of reaction and/or the solute species from that on type 2 sites. The second-order two-site (SOTS) model was capable of describing the kinetic retention behavior of Cr(VI) batch data for Olivier, Windsor, and Cecil soils. Using independently measured parameters, the SOTS model was successful in predicting experimental Cr breakthrough curves (BTC's). The proposed second-order approach was also extended to the diffusion controlled mobile-immobile or two-region (SOMIM) model. The use of estimated parameters (e.g., the mobile water fraction and mass transfer coefficients) for the SOMIM model did not provide improved predictions of Cr BTC's in comparison to the SOTS model. The failure of the mobile-immobile model was attributed to the lack of nonequilibrium conditions for the two regions in these soils.

  6. An analytical approach to estimating the first order scatter in heterogeneous medium. II. A practical application.

    PubMed

    Yao, Weiguang; Leszczynski, Konrad W

    2009-07-01

    Recently, the authors proposed an analytical scheme to estimate the first order x-ray scatter by approximating the Klein-Nishina formula so that the first order scatter fluence is expressed as a function of the primary photon fluence on the detector. In this work, the authors apply the scheme to experimentally obtained 6 MV cone beam CT projections in which the primary photon fluence is the unknown of interest. With the assumption that the higher-order scatter fluence is either constant or proportional to the first order scatter fluence, an iterative approach is proposed to estimate both primary and scatter fluences from projections by utilizing their relationship. The iterative approach is evaluated by comparisons with experimentally measured scatter-primary ratios of a Catphan phantom and with Monte Carlo simulations of virtual phantoms. The convergence of the iterations is fast and the accuracy of scatter correction is high. For a sufficiently long cylindrical water phantom with 10 cm of radius, the relative error of estimated primary photon fluence was within +/- 2% and +/- 4% when the phantom was projected with 6 MV and 120 kVp x-ray imaging systems, respectively. In addition, the iterative approach for scatter estimation is applied to 6 MV x-ray projections of a QUASAR and anthropomorphic phantoms (head and pelvis). The scatter correction is demonstrated to significantly improve the accuracy of the reconstructed linear attenuation coefficient and the contrast of the projections and reconstructed volumetric images generated with a linac 6 MV beam.

  7. Modelling modal gating of ion channels with hierarchical Markov models

    PubMed Central

    Fackrell, Mark; Crampin, Edmund J.; Taylor, Peter

    2016-01-01

Many ion channels spontaneously switch between different levels of activity. Although this behaviour, known as modal gating, has been observed for a long time, it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, then enables us to combine these components so that in the integrated model both mode switching as well as the kinetics within modes are appropriately represented. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parametrized using experimental data and show that it provides a better representation than a previous model of the same dataset. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models. PMID:27616917

  8. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps.
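A Monte Carlo version of this idea can be sketched with a parametric bootstrap: fit a first-order chain, then compare a second-order likelihood-ratio statistic on the data against the same statistic on surrogates simulated from the fit. This is a simplification of the exact tests above, which condition on the sufficient statistics rather than plugging in estimates; the data below are synthetic:

```python
import math
import random

# Parametric-bootstrap sketch of a Markov-order goodness-of-fit test.
# The binary data are synthetic, with built-in second-order memory.

symbols = (0, 1)
rng = random.Random(7)

def loglik(seq, order):
    # plug-in log-likelihood of an order-k Markov fit; a small
    # pseudo-count keeps unseen transitions away from log(0)
    counts = {}
    for i in range(order, len(seq)):
        ctx, nxt = tuple(seq[i - order:i]), seq[i]
        counts.setdefault(ctx, {s: 0.5 for s in symbols})[nxt] += 1
    ll = 0.0
    for i in range(order, len(seq)):
        ctx, nxt = tuple(seq[i - order:i]), seq[i]
        row = counts[ctx]
        ll += math.log(row[nxt] / sum(row.values()))
    return ll

def lr_stat(seq):
    # large values mean a first-order chain fits poorly vs second-order
    return 2.0 * (loglik(seq, 2) - loglik(seq, 1))

# data with genuine second-order memory: next symbol is the sum of the
# last two mod 2, flipped with probability 0.1
data = [0, 1]
for _ in range(500):
    nxt = (data[-1] + data[-2]) % 2
    data.append(nxt if rng.random() < 0.9 else 1 - nxt)

# first-order fit of the data, used to simulate surrogate sequences
trans = {a: {b: 0.5 for b in symbols} for a in symbols}
for a, b in zip(data, data[1:]):
    trans[a][b] += 1
P = {a: {b: trans[a][b] / sum(trans[a].values()) for b in symbols}
     for a in symbols}

def surrogate(n):
    x, out = rng.choice(symbols), []
    for _ in range(n):
        out.append(x)
        x = 0 if rng.random() < P[x][0] else 1
    return out

obs = lr_stat(data)
null = [lr_stat(surrogate(len(data))) for _ in range(99)]
p_value = (1 + sum(s >= obs for s in null)) / 100.0
```

Because the data were built with real second-order structure, the observed statistic dwarfs the surrogate distribution and the first-order null is rejected.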

  9. Development of a three-dimensional high-order strand-grids approach

    NASA Astrophysics Data System (ADS)

    Tong, Oisin

Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more walltime than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow. This approach is combined with a curvature based strand shortening

  10. Efficient high-order discontinuous Galerkin schemes with first-order hyperbolic advection-diffusion system approach

    NASA Astrophysics Data System (ADS)

    Mazaheri, Alireza; Nishikawa, Hiroaki

    2016-09-01

    We propose arbitrary high-order discontinuous Galerkin (DG) schemes that are designed based on a first-order hyperbolic advection-diffusion formulation of the target governing equations. We present, in details, the efficient construction of the proposed high-order schemes (called DG-H), and show that these schemes have the same number of global degrees-of-freedom as comparable conventional high-order DG schemes, produce the same or higher order of accuracy solutions and solution gradients, are exact for exact polynomial functions, and do not need a second-derivative diffusion operator. We demonstrate that the constructed high-order schemes give excellent quality solution and solution gradients on irregular triangular elements. We also construct a Weighted Essentially Non-Oscillatory (WENO) limiter for the proposed DG-H schemes and apply it to discontinuous problems. We also make some accuracy comparisons with conventional DG and interior penalty schemes. A relative qualitative cost analysis is also reported, which indicates that the high-order schemes produce orders of magnitude more accurate results than the low-order schemes for a given CPU time. Furthermore, we show that the proposed DG-H schemes are nearly as efficient as the DG and Interior-Penalty (IP) schemes as these schemes produce results that are relatively at the same error level for approximately a similar CPU time.

  11. Exploiting Fractional Order PID Controller Methods in Improving the Performance of Integer Order PID Controllers: A GA Based Approach

    NASA Astrophysics Data System (ADS)

    Mukherjee, Bijoy K.; Metia, Santanu

    2009-10-01

The paper is divided into three parts. The first part gives a brief introduction to the overall paper, to fractional order PID (PIλDμ) controllers and to Genetic Algorithms (GA). The second part first studies how the performance of an integer order PID controller deteriorates when implemented with lossy capacitors in its analog realization, and then shows that the lossy capacitors can be effectively modeled by fractional order terms. A novel GA based method is then proposed to tune the controller parameters such that the original performance is retained even when the controller is realized with the same lossy capacitors. Simulation results are presented to validate the usefulness of the method. Some Ziegler-Nichols type tuning rules for the design of fractional order PID controllers have been proposed in the literature [11]. In the third part, a novel GA based method is proposed which shows how equivalent integer order PID controllers can be obtained that give performance similar to that of the fractional order PID controllers, thereby removing the complexity involved in the implementation of the latter. It is shown with extensive simulation results that the equivalent integer order PID controllers largely retain the robustness and iso-damping properties of the original fractional order PID controllers, and that they are more robust than normal Ziegler-Nichols tuned PID controllers.

  12. Orbiting binary black hole evolutions with a multipatch high order finite-difference approach

    SciTech Connect

    Pazos, Enrique; Tiglio, Manuel; Duez, Matthew D.; Kidder, Lawrence E.; Teukolsky, Saul A.

    2009-07-15

    We present numerical simulations of orbiting black holes for around 12 cycles, using a high order multipatch approach. Unlike some other approaches, the computational speed scales almost perfectly for thousands of processors. Multipatch methods are an alternative to adaptive mesh refinement, with benefits of simplicity and better scaling for improving the resolution in the wave zone. The results presented here pave the way for multipatch evolutions of black hole-neutron star and neutron star-neutron star binaries, where high resolution grids are needed to resolve details of the matter flow.

  13. Approach for classification and taxonomy within family Rickettsiaceae based on the Formal Order Analysis.

    PubMed

    Shpynov, S; Pozdnichenko, N; Gumenuk, A

    2015-01-01

Genome sequences of 36 Rickettsia and Orientia were analyzed using Formal Order Analysis (FOA). This approach takes into account the arrangement of nucleotides in each sequence. A numerical characteristic, the average distance (remoteness) "g", was used to compare genomes. Our results corroborated the previous separation of three groups within the genus Rickettsia, including the typhus group, the classic spotted fever group, and the ancestral group, and of Orientia as a separate genus. Rickettsia felis URRWXCal2 and R. akari Hartford were not in the same group based on FOA; therefore the designation of a so-called transitional Rickettsia group could not be confirmed with this approach.

  14. A partitioned model order reduction approach to rationalise computational expenses in nonlinear fracture mechanics

    PubMed Central

    Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition with projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required; the extraction of the corresponding spatial regions is based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055

  15. Hamilton Jacobi approach for first order actions and theories with higher derivatives

    NASA Astrophysics Data System (ADS)

    Bertin, M. C.; Pimentel, B. M.; Pompeia, P. J.

    2008-03-01

    In this work, we analyze systems described by Lagrangians with higher order derivatives in the context of the Hamilton-Jacobi formalism for first order actions. Two different approaches are studied here: the first is analogous to the description of theories with higher derivatives in the Hamiltonian formalism according to [D.M. Gitman, S.L. Lyakhovich, I.V. Tyutin, Soviet Phys. J. 26 (1983) 730; D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints, Springer-Verlag, New York, Berlin, 1990]; the second treats the case where degenerate coordinates are present, in analogy to reference [D.M. Gitman, I.V. Tyutin, Nucl. Phys. B 630 (2002) 509]. Several examples are analyzed in which the two approaches are compared.

  16. Hamilton-Jacobi approach for first order actions and theories with higher derivatives

    SciTech Connect

    Bertin, M.C.; Pimentel, B.M.; Pompeia, P.J.

    2008-03-15

    In this work, we analyze systems described by Lagrangians with higher order derivatives in the context of the Hamilton-Jacobi formalism for first order actions. Two different approaches are studied here: the first is analogous to the description of theories with higher derivatives in the Hamiltonian formalism according to [D.M. Gitman, S.L. Lyakhovich, I.V. Tyutin, Soviet Phys. J. 26 (1983) 730; D.M. Gitman, I.V. Tyutin, Quantization of Fields with Constraints, Springer-Verlag, New York, Berlin, 1990]; the second treats the case where degenerate coordinates are present, in analogy to reference [D.M. Gitman, I.V. Tyutin, Nucl. Phys. B 630 (2002) 509]. Several examples are analyzed in which the two approaches are compared.

  17. Third-order coma-free point in two-mirror telescopes by a vector approach.

    PubMed

    Ren, Baichuan; Jin, Guang; Zhong, Xing

    2011-07-20

    In this paper, two-mirror telescopes having the secondary mirror decentered and/or tilted are considered. Equations for third-order coma are derived by a vector approach, and a coma-free condition to remove misalignment-induced coma is obtained. The coma-free point in two-mirror telescopes follows as a consequence of this coma-free condition and agrees well with the result derived by Wilson using Schiefspiegler theory.

  18. Equilibrium Control Policies for Markov Chains

    SciTech Connect

    Malikopoulos, Andreas

    2011-01-01

    The average cost criterion has held great intuitive appeal and has attracted considerable attention. It is widely employed when controlling dynamic systems that evolve stochastically over time, by formulating an optimization problem to achieve long-term goals efficiently. The average cost criterion is especially appealing when the decision-making horizon is long compared to the other timescales involved, and there is no compelling motivation for short-term optimization. This paper addresses the problem of controlling a Markov chain so as to minimize the average cost per unit time. Our approach treats the problem as a dual constrained optimization problem. We derive conditions guaranteeing that a saddle point exists for the new dual problem, and we show that this saddle point is an equilibrium control policy for each state of the Markov chain. For practical situations with constraints consistent with those we study here, our results imply that recognition of such saddle points may be of value in deriving an optimal control policy in real time.
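
    As a point of reference for the average cost criterion discussed above, the classical relative value iteration method computes the minimum average cost per unit time on a small controlled Markov chain. The two-state chain, stage costs, and reference state below are illustrative assumptions, and this dynamic-programming route is distinct from the paper's dual saddle-point construction.

```python
import numpy as np

# Relative value iteration for the average-cost criterion on a toy
# two-state controlled Markov chain (illustrative data, not the paper's).
# P[a] is the transition matrix and c[a] the per-state cost under action a.
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),   # action 0
     np.array([[0.5, 0.5], [0.1, 0.9]])]   # action 1
c = [np.array([2.0, 0.5]), np.array([1.0, 3.0])]

h = np.zeros(2)                 # relative value (bias) function
for _ in range(500):
    Q = np.array([c[a] + P[a] @ h for a in range(2)])
    h_new = Q.min(axis=0)       # Bellman backup over actions
    g = h_new[0]                # state 0 as reference: g -> average cost
    h = h_new - g               # subtract reference to keep h bounded

policy = Q.argmin(axis=0)       # greedy policy after convergence
assert 0.5 <= g <= 3.0          # average cost lies between the stage costs
assert policy[1] == 0           # the cheap action is chosen in state 1
```

    With these illustrative numbers the iteration converges to an average cost of about 0.72, achieved by mixing the cheaper action in each state.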

  19. Markov Chain Monte Carlo and Irreversibility

    NASA Astrophysics Data System (ADS)

    Ottobre, Michela

    2016-06-01

    Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as its invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and discuss some analytical methods for the study of continuous-time irreversible diffusions; ii) we observe that most of the rigorous results on irreversible diffusions are available for continuous-time processes, whereas for computational purposes one needs to discretize such dynamics; it is well known that the resulting discretized chain will not, in general, retain all the good properties of the process from which it is obtained, and in particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore, iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
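
    The detailed balance condition mentioned above can be checked concretely on a discrete-state Metropolis chain; the three-state target below is an arbitrary illustrative choice, not an example from the paper.

```python
import numpy as np

# Target measure pi on 3 states (illustrative values).
pi = np.array([0.2, 0.3, 0.5])

# Metropolis transition matrix with a uniform proposal:
# off-diagonal moves are accepted with probability min(1, pi[j]/pi[i]).
n = len(pi)
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = (1.0 / (n - 1)) * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()   # remaining mass stays put

# Detailed balance pi_i P_ij = pi_j P_ji makes the chain reversible...
for i in range(n):
    for j in range(n):
        assert abs(pi[i] * P[i, j] - pi[j] * P[j, i]) < 1e-12

# ...and implies that pi is invariant: pi P = pi.
assert np.allclose(pi @ P, pi)
```

    Irreversible samplers give up the first (pairwise) identity while keeping the second (invariance), which is exactly the freedom the paper discusses exploiting.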

  20. A second order cone complementarity approach for the numerical solution of elastoplasticity problems

    NASA Astrophysics Data System (ADS)

    Zhang, L. L.; Li, J. Y.; Zhang, H. W.; Pan, S. H.

    2013-01-01

    In this paper we present a new approach for solving elastoplastic problems as second order cone complementarity problems (SOCCPs). Specially, two classes of elastoplastic problems, i.e. the J 2 plasticity problems with combined linear kinematic and isotropic hardening laws and the Drucker-Prager plasticity problems with associative or non-associative flow rules, are taken as the examples to illustrate the main idea of our new approach. In the new approach, firstly, the classical elastoplastic constitutive equations are equivalently reformulated as second order cone complementarity conditions. Secondly, by employing the finite element method and treating the nodal displacements and the plasticity multiplier vectors of Gaussian integration points as the unknown variables, we obtain a standard SOCCP formulation for the elastoplasticity analysis, which enables the using of general SOCCP solvers developed in the field of mathematical programming be directly available in the field of computational plasticity. Finally, a semi-smooth Newton algorithm is suggested to solve the obtained SOCCPs. Numerical results of several classical plasticity benchmark problems confirm the effectiveness and robustness of the SOCCP approach.

  1. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Methods based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, for patterns of interest these methods have to deal with tail distribution events, where the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that large deviations are more reliable than Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
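
    To make the setting concrete, the probability of a word under a first-order Markov chain is a product of transition probabilities, and it is the tail of the distribution of counts of such words that the approximations above target. The DNA transition matrix below is a made-up illustration, not the paper's model or estimator.

```python
import numpy as np

# Probability of a pattern under a first-order Markov chain (illustrative
# transition matrix over the DNA alphabet).
alphabet = "ACGT"
idx = {c: i for i, c in enumerate(alphabet)}

pi = np.full(4, 0.25)                      # initial distribution
P = np.full((4, 4), 0.25)
P[idx["C"], idx["G"]] = 0.10               # CpG depletion, a classic example
P[idx["C"]] /= P[idx["C"]].sum()           # renormalize row C

def word_prob(word):
    """P(word) = pi[w0] * prod_t P[w_{t-1}, w_t] under the chain."""
    p = pi[idx[word[0]]]
    for a, b in zip(word, word[1:]):
        p *= P[idx[a], idx[b]]
    return p

# A CG-containing word is rarer than its AT counterpart under this model.
assert word_prob("ACGA") < word_prob("ATTA")
```

    Expected counts in a length-n sequence then follow as roughly (n − |w| + 1) · P(w), and it is the tails of such count distributions where Gaussian approximations degrade.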

  2. Multiensemble Markov models of molecular thermodynamics and kinetics.

    PubMed

    Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank

    2016-06-07

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.

  3. Multiensemble Markov models of molecular thermodynamics and kinetics

    PubMed Central

    Wu, Hao; Paul, Fabian; Noé, Frank

    2016-01-01

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302

  4. Glaucoma progression detection using nonlocal Markov random field prior.

    PubMed

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Balasubramanian, Madhusudhanan; Weinreb, Robert N; Zangwill, Linda M

    2014-10-01

    Glaucoma is a neurodegenerative disease characterized by distinctive changes in the optic nerve head and visual field. Without treatment, glaucoma can lead to permanent blindness. Therefore, monitoring glaucoma progression is important to detect uncontrolled disease and the possible need for therapy advancement. In this context, three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT) has been commonly used in the diagnosis and management of glaucoma patients. We present a new framework for detection of glaucoma progression using 3-D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer thickness measurement provided by commercially available instruments, we consider the whole 3-D volume for change detection. To account for the spatial voxel dependency, we propose the use of the Markov random field (MRF) model as a prior for the change detection map. In order to improve the robustness of the proposed approach, a nonlocal strategy was adopted to define the MRF energy function. To accommodate the presence of false-positive detection, we used a fuzzy logic approach to classify a 3-D SD-OCT image into a "non-progressing" or "progressing" glaucoma class. We compared the diagnostic performance of the proposed framework to the existing methods of progression detection.
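
    The MRF prior encodes the assumption that change labels of neighboring voxels tend to agree. A minimal 2-D Ising-style smoothness energy (local rather than the paper's nonlocal variant, on illustrative label maps) can sketch the idea.

```python
import numpy as np

# Minimal MRF smoothness energy for a binary change map: each pair of
# disagreeing 4-neighbors adds beta to the energy, so smoother maps
# have lower energy (illustrative; the paper uses a nonlocal energy).
def mrf_energy(label_map, beta=1.0):
    v = (label_map[1:, :] != label_map[:-1, :]).sum()   # vertical pairs
    h = (label_map[:, 1:] != label_map[:, :-1]).sum()   # horizontal pairs
    return beta * (v + h)

smooth = np.zeros((8, 8), dtype=int)
smooth[:, 4:] = 1                                  # one clean region edge
stripes = np.arange(64).reshape(8, 8) % 2          # alternating columns

# The prior favors the coherent change region over a speckled map.
assert mrf_energy(smooth) < mrf_energy(stripes)
```

    Used as a prior, this energy penalizes isolated "changed" voxels, which is how the MRF suppresses speckle-like false positives in the change map.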

  5. Glaucoma progression detection using nonlocal Markov random field prior

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A.; Balasubramanian, Madhusudhanan; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Abstract. Glaucoma is a neurodegenerative disease characterized by distinctive changes in the optic nerve head and visual field. Without treatment, glaucoma can lead to permanent blindness. Therefore, monitoring glaucoma progression is important to detect uncontrolled disease and the possible need for therapy advancement. In this context, three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT) has been commonly used in the diagnosis and management of glaucoma patients. We present a new framework for detection of glaucoma progression using 3-D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer thickness measurement provided by commercially available instruments, we consider the whole 3-D volume for change detection. To account for the spatial voxel dependency, we propose the use of the Markov random field (MRF) model as a prior for the change detection map. In order to improve the robustness of the proposed approach, a nonlocal strategy was adopted to define the MRF energy function. To accommodate the presence of false-positive detection, we used a fuzzy logic approach to classify a 3-D SD-OCT image into a “non-progressing” or “progressing” glaucoma class. We compared the diagnostic performance of the proposed framework to the existing methods of progression detection. PMID:26158069

  6. Symbolic Heuristic Search for Factored Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.

    2003-01-01

    We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.

  7. Markov Tracking for Agent Coordination

    NASA Technical Reports Server (NTRS)

    Washington, Richard; Lau, Sonie (Technical Monitor)

    1998-01-01

    Partially observable Markov decision processes (POMDPs) are an attractive representation of agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs is in general computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that limits the number of alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making the method amenable to coordination with complex agents.
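
    The core POMDP machinery behind tracking is the belief update, which folds each observation into a distribution over hidden states. The two-state transition and observation models below are illustrative assumptions, not the paper's coordination policy.

```python
import numpy as np

# POMDP belief update (generic filtering step):
#   b'(s') ∝ O(o | s') * sum_s T(s' | s, a) b(s)
T = np.array([[0.9, 0.1],      # T[s, s']: transitions under the chosen action
              [0.2, 0.8]])
O = np.array([[0.75, 0.25],    # O[s', o]: observation likelihoods
              [0.30, 0.70]])

def belief_update(b, obs):
    b_pred = b @ T                 # predict through the transition model
    b_new = b_pred * O[:, obs]     # weight by the observation likelihood
    return b_new / b_new.sum()     # renormalize to a distribution

b = np.array([0.5, 0.5])
b = belief_update(b, obs=0)        # observation 0 favors state 0
assert b[0] > b[1]
```

    A coordinating action can then be chosen as a function of the current belief alone, which is why Markov Tracking can be computed locally at each step.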

  8. Bayesian seismic tomography by parallel interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas

    2014-05-01

    The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem amounts to estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly tied to the computational cost of the forward model. Although fast algorithms have recently been developed for computing first arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is hardly feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of classical single-chain MCMC, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC, and is therefore particularly suited for Bayesian inversion.
    The exchanges between the chains allow a precise sampling of the posterior distribution.
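
    A minimal sketch of interacting chains at different temperatures (parallel tempering) on a toy bimodal density illustrates why the exchanges help mixing; the target, temperatures, and proposal scale below are illustrative assumptions standing in for the seismic posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target (modes near -4 and +4), as a log-density.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

temps = [1.0, 5.0]     # cold chain samples the target; hot chain mixes freely
x = [0.0, 0.0]
samples = []
for step in range(20000):
    # Within-chain Metropolis update at each temperature (targets pi^(1/T)).
    for k, T in enumerate(temps):
        prop = x[k] + rng.normal(scale=1.0)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
            x[k] = prop
    # Exchange move: swap chain states with the tempered Metropolis ratio.
    log_r = (log_target(x[0]) - log_target(x[1])) * (1 / temps[1] - 1 / temps[0])
    if np.log(rng.random()) < log_r:
        x[0], x[1] = x[1], x[0]
    samples.append(x[0])

samples = np.array(samples[2000:])
# The cold chain visits both modes instead of staying stuck in one.
assert (samples > 2).any() and (samples < -2).any()
```

    The hot chain crosses the barrier between modes easily, and accepted exchanges transport those crossings down to the cold chain, which on its own would tend to remain trapped.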

  9. Dominant pole placement with fractional order PID controllers: D-decomposition approach.

    PubMed

    Mandić, Petar D; Šekara, Tomislav B; Lazarević, Mihailo P; Bošković, Marko

    2017-03-01

    Dominant pole placement is a useful technique designed to deal with the problem of controlling high order or time-delay systems with a low order controller such as the PID controller. This paper addresses the problem by using the D-decomposition method. A straightforward analytic procedure makes this method extremely powerful and easy to apply. The technique is applicable to a wide range of transfer functions: with or without time-delay, rational and non-rational ones, and those describing distributed parameter systems. In order to control as many different processes as possible, a fractional order PID controller is introduced as a generalization of the classical PID controller; it provides additional parameters for better adjustment of system performance. The design method presented in this paper tunes the parameters of the PID and fractional PID controllers to obtain a good load disturbance response subject to constraints on the maximum sensitivity and the sensitivity to measurement noise. A good set point response is also one of the design goals of this technique. Numerous examples taken from the process industry are given, and the D-decomposition approach is compared with other PID optimization methods to show its effectiveness.
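
    A D-decomposition boundary can be traced analytically in a simple case; the third-order plant and PI controller below are an illustrative example, not one taken from the paper.

```python
import numpy as np

# D-decomposition sketch for a PI controller on the illustrative plant
# G(s) = 1/(s^2 + s + 1). The closed-loop characteristic polynomial is
#   s^3 + s^2 + (1 + Kp) s + Ki = 0.
# Substituting s = j*omega and separating real and imaginary parts gives
# the parametrized stability boundary  Ki = omega^2,  Kp = omega^2 - 1.
omega = np.linspace(1e-3, 3, 200)
Kp_boundary = omega ** 2 - 1
Ki_boundary = omega ** 2

# Routh-Hurwitz for this polynomial gives stability iff Ki > 0 and
# Ki < Kp + 1, so the boundary curve is the line Ki = Kp + 1:
assert np.allclose(Ki_boundary, Kp_boundary + 1)

def is_stable(Kp, Ki):
    roots = np.roots([1, 1, 1 + Kp, Ki])
    return (roots.real < 0).all()

assert is_stable(1.0, 0.5)        # inside the region: Ki < Kp + 1
assert not is_stable(1.0, 3.0)    # outside: Ki > Kp + 1
```

    Sweeping omega traces the boundary directly in the controller-parameter plane, which is the appeal of the method: no root-finding is needed to delineate the stability region.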

  10. A New Approach for Constructing Highly Stable High Order CESE Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    2010-01-01

    A new approach is devised to construct high order CESE schemes that avoid the common shortcomings of traditional high order schemes, including: (a) susceptibility to computational instabilities; (b) computational inefficiency due to their locally implicit nature (i.e., at each mesh point a system of linear/nonlinear equations involving all the mesh variables associated with that point must be solved); (c) use of large and elaborate stencils, which complicates boundary treatments and also makes efficient parallel computing much harder; (d) difficulties in applications involving complex geometries; and (e) use of problem-specific techniques that are needed to overcome stability problems but often cause undesirable side effects. In fact it will be shown that, with the aid of a conceptual leap, one can build from a given 2nd-order CESE scheme its 4th-, 6th-, 8th-, ... order versions which have the same stencil and same stability conditions as the 2nd-order scheme, and also retain all its other advantages. A sketch of multidimensional extensions will also be provided.

  11. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between the multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, in which a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. Estimation is performed with a Bayesian method using Markov chain Monte Carlo. A comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.
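
    The hidden-process idea can be made concrete with the standard scaled forward algorithm for a plain two-state HMM; the transition and emission values below are illustrative, and the paper's mixed-effects and multivariate extensions are not reproduced here.

```python
import numpy as np

# Scaled forward algorithm for a two-state hidden Markov model
# (generic likelihood computation with illustrative parameter values).
A = np.array([[0.95, 0.05],    # sticky hidden-state transitions
              [0.10, 0.90]])
B = np.array([[0.8, 0.2],      # emission probabilities B[state, symbol]
              [0.3, 0.7]])
init = np.array([0.5, 0.5])

def forward_loglik(obs):
    alpha = init * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        scale = alpha.sum()             # rescale to avoid underflow
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik

# A run consistent with the sticky dynamics is more likely than an
# alternating sequence of the same length.
assert forward_loglik([0, 0, 0, 1, 1, 1]) > forward_loglik([0, 1, 0, 1, 0, 1])
```

    In the mixed-effects setting, per-subject random effects shift the emission distributions, and the same forward recursion supplies the likelihood inside the MCMC sampler.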

  12. A preference-ordered discrete-gaming approach to air-combat analysis

    NASA Technical Reports Server (NTRS)

    Kelley, H. J.; Lefton, L.

    1978-01-01

    An approach to one-on-one air-combat analysis is described which employs discrete gaming of a parameterized model featuring choice between several closed-loop control policies. A preference-ordering formulation due to Falco is applied to rational choice between outcomes: win, loss, mutual capture, purposeful disengagement, draw. Approximate optimization is provided by an active-cell scheme similar to Falco's obtained by a 'backing up' process similar to that of Kopp. The approach is designed primarily for short-duration duels between craft with large-envelope weaponry. Some illustrative computations are presented for an example modeled using constant-speed vehicles and very rough estimation of energy shifts.

  13. Approach Detect Sensor System by Second Order Derivative of Laser Irradiation Area

    NASA Astrophysics Data System (ADS)

    Hayashi, Tomohide; Yano, Yoshikazu; Tsuda, Norio; Yamada, Jun

    In recent years, as a result of large greenhouse gas emissions, the atmospheric temperature at ground level has been gradually rising, and the Kyoto Protocol was adopted in 1997 to address this problem. Under the energy-saving law amended in 1999, it is advisable that an escalator be paused when it has no users. At present a photo-electric sensor is used to control the escalator, but a pole is needed to install the sensor. Therefore, a new type of approach detection sensor using a laser diode, a CCD camera and a CPLD, which can be built into the escalator, has been studied. This sensor derives the irradiated area of the laser beam by simple processing in which the laser beam is irradiated only in the odd field of the interlaced video signal. By taking the second order derivative of the laser irradiated area, the sensor detects only an approaching target; it does not respond to a target that crosses or stands in the sensing area.
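
    The role of the second order derivative can be sketched numerically: for an approaching target the irradiated area grows at an increasing rate, while a crossing target makes the area rise and then fall. The area signals below are synthetic stand-ins for the CCD measurement, not data from the sensor.

```python
import numpy as np

# Discrete second derivative of the laser-irradiated area as an approach
# cue (illustrative signals; the CCD/CPLD processing is not modeled).
t = np.arange(10)
area_approach = 1.0 + 0.05 * t ** 2                           # accelerating growth
area_cross = np.concatenate([t[:5], t[4::-1]]).astype(float)  # rise then fall

d2_approach = np.diff(area_approach, n=2)
d2_cross = np.diff(area_cross, n=2)

assert (d2_approach > 0).all()   # consistently positive curvature: approach
assert (d2_cross < 0).any()      # curvature changes sign: crossing target
```

    Thresholding on the sign and persistence of the second derivative is one simple way such a sensor could separate approach from crossing; the exact decision rule here is an assumption for illustration.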

  14. Template-Directed Approach Towards the Realization of Ordered Heterogeneity in Bimetallic Metal-Organic Frameworks.

    PubMed

    Kim, Daeok; Coskun, Ali

    2017-03-29

    Controlling the arrangement of different metal ions to achieve ordered heterogeneity in metal-organic frameworks (MOFs) has been a great challenge. Herein, we introduce a template-directed approach in which a 1D metal-organic polymer incorporating well-defined binding pockets for secondary metal ions serves as a structural template and starting material for the preparation of well-ordered bimetallic MOF-74s under heterogeneous-phase hydrothermal reaction conditions in the presence of secondary metal ions such as Ni(2+) and Mg(2+) within 3 h. The resulting bimetallic MOF-74s were found to possess a nearly 1:1 metal ratio regardless of the initial stoichiometry in the reaction mixture, thus demonstrating the possibility of controlling the arrangement of metal ions within the secondary building blocks of MOFs to tune intrinsic properties such as gas affinity.

  15. Bearing fault identification by higher order energy operator fusion: A non-resonance based approach

    NASA Astrophysics Data System (ADS)

    Faghidi, H.; Liang, M.

    2016-10-01

    We report a non-resonance based approach to bearing fault detection. This is achieved by a higher order energy operator fusion (HOEO_F) method. In this method, multiple higher order energy operators are fused to form a single simple transform to process the bearing signal obscured by noise and vibration interferences. The fusion is guided by entropy minimization. Unlike the popular high frequency resonance technique, this method does not require the information of resonance excited by the bearing fault. The effects of the HOEO_F method on signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) are illustrated in this paper. The performance of the proposed method in handling noise and interferences has been examined using both simulated and experimental data. The results indicate that the HOEO_F method outperforms both the envelope method and the original energy operator method.

  16. Unilateral pediatric "do not attempt resuscitation" orders: the pros, the cons, and a proposed approach.

    PubMed

    Mercurio, Mark R; Murray, Peter D; Gross, Ian

    2014-02-01

    A unilateral do not attempt resuscitation (DNAR) order is written by a physician without permission or assent from the patient or the patient's surrogate decision-maker. Potential justifications for the use of DNAR orders in pediatrics include the belief that attempted resuscitation offers no benefit to the patient or that the burdens would far outweigh the potential benefits. Another consideration is the patient's right to mercy, not to be made to undergo potentially painful interventions very unlikely to benefit the patient, and the physician's parallel obligation not to perform such interventions. Unilateral DNAR orders might be motivated in part by the moral distress caregivers sometimes experience when feeling forced by parents to participate in interventions that they believe are useless or cruel. Furthermore, some physicians believe that making these decisions without parental approval could spare parents needless additional emotional pain or a sense of guilt from making such a decision, particularly when imminent death is unavoidable. There are, however, several risks inherent in unilateral DNAR orders, such as overestimating one's ability to prognosticate or giving undue weight to the physician's values over those of parents, particularly with regard to predicted disability and quality of life. The law on the question of unilateral DNAR varies among states, and readers are encouraged to learn the law where they practice. Arguments in favor of, and opposed to, the use of unilateral DNAR orders are presented. In some settings, particularly when death is imminent regardless of whether resuscitation is attempted, unilateral DNAR orders should be viewed as an ethically permissible approach.

  17. Markov state models of biomolecular conformational dynamics

    PubMed Central

    Chodera, John D.; Noé, Frank

    2014-01-01

    It has recently become practical to construct Markov state models (MSMs) that reproduce the long-time statistical conformational dynamics of biomolecules using data from molecular dynamics simulations. MSMs can predict both stationary and kinetic quantities on long timescales (e.g. milliseconds) using a set of atomistic molecular dynamics simulations that are individually much shorter, thus addressing the well-known sampling problem in molecular dynamics simulation. In addition to providing predictive quantitative models, MSMs greatly facilitate both the extraction of insight into biomolecular mechanism (such as folding and functional dynamics) and quantitative comparison with single-molecule and ensemble kinetics experiments. A variety of methodological advances and software packages now bring the construction of these models closer to routine practice. Here, we review recent progress in this field, considering theoretical and methodological advances, new software tools, and recent applications of these approaches in several domains of biochemistry and biophysics, commenting on remaining challenges. PMID:24836551
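
    At its core, MSM construction counts transitions between discretized conformational states at a chosen lag time and normalizes the counts row-wise. The short discrete trajectory below is a toy illustration; production MSM software adds reversible estimation, implied-timescale tests, and much more.

```python
import numpy as np

# Estimate an MSM transition matrix from a discretized trajectory by
# counting transitions at a lag time and normalizing each row.
def estimate_msm(dtraj, n_states, lag=1):
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

dtraj = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0]   # toy state assignments
P = estimate_msm(dtraj, n_states=2)
assert np.allclose(P.sum(axis=1), 1.0)          # rows are probability vectors
```

    Long-time behavior then follows from powers of P (or its eigenvalues, which give the implied relaxation timescales), which is how short simulations yield millisecond-scale predictions.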

  18. A general approach to develop reduced order models for simulation of solid oxide fuel cell stacks

    SciTech Connect

    Pan, Wenxiao; Bao, Jie; Lo, Chaomei; Lai, Canhai; Agarwal, Khushbu; Koeppel, Brian J.; Khaleel, Mohammad A.

    2013-06-15

    A reduced order modeling approach based on response surface techniques was developed for solid oxide fuel cell stacks. This approach creates a numerical model that can quickly compute desired performance variables of interest for a stack based on its input parameter set. The approach carefully samples the multidimensional design space based on the input parameter ranges, evaluates a detailed stack model at each of the sampled points, and performs regression for selected performance variables of interest to determine the response surfaces. After error analysis to ensure that sufficient accuracy is established for the response surfaces, they are then implemented in a calculator module for system-level studies. The benefit of this modeling approach is that it is sufficiently fast for integration with system modeling software and simulation of fuel cell-based power systems while still providing high fidelity information about the internal distributions of key variables. This paper describes the sampling, regression, sensitivity, error, and principal component analyses used to identify the applicable methods for simulating a planar fuel cell stack.
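
    The sample-then-regress step can be sketched with ordinary least squares on a quadratic basis; the two-input "detailed model" below is a synthetic function standing in for the expensive stack simulation.

```python
import numpy as np

# Response-surface reduced order model: sample the design space, fit a
# quadratic surface, then evaluate the surrogate cheaply.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(50, 2))                    # sampled design points
y = 1.0 + 2 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] ** 2    # "detailed model" output

# Quadratic basis: [1, x1, x2, x1^2, x1*x2, x2^2]
def basis(x):
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 0] * x[:, 1], x[:, 1] ** 2])

coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)

# Here the synthetic model lies in the basis, so the surrogate is exact.
x_test = np.array([[0.3, -0.2]])
pred = basis(x_test) @ coef
exact = 1.0 + 2 * 0.3 - (-0.2) + 0.5 * 0.3 ** 2
assert abs(pred[0] - exact) < 1e-8
```

    For a real stack model the fit is approximate, which is why the error analysis described in the abstract is needed before the surrogate is trusted in system-level studies.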

  19. Mixed approach to incorporate self-consistency into order-N LCAO methods

    SciTech Connect

    Ordejon, P.; Artacho, E.; Soler, J.M.

    1996-12-31

    The authors present a method for self-consistent Density Functional Theory calculations in which the required effort is proportional to the size of the system, thus allowing application to problems of very large size. The method is based on the LCAO approximation and uses a mixed approach to obtain the Hamiltonian integrals between atomic orbitals with Order-N effort. They show the performance and convergence properties of the method for several silicon and carbon systems, and for a DNA periodic chain.

  20. Action approach to cosmological perturbations: the second-order metric in matter dominance

    SciTech Connect

    Boubekeur, Lotfi; Creminelli, Paolo; Vernizzi, Filippo; Norena, Jorge

    2008-08-15

    We study nonlinear cosmological perturbations during post-inflationary evolution, using the equivalence between a perfect barotropic fluid and a derivatively coupled scalar field with Lagrangian [-({partial_derivative}{phi}){sup 2}]{sup (1+w)/2w}. Since this Lagrangian is just a special case of k-inflation, this approach is analogous to the one employed in the study of non-Gaussianities from inflation. We use this method to derive the second-order metric during matter dominance in the comoving gauge directly as a function of the primordial inflationary perturbation {zeta}. Going to Poisson gauge, we recover the metric previously derived in the literature.

  1. Arbitrary Lagrangian-Eulerian approach in reduced order modeling of a flow with a moving boundary

    NASA Astrophysics Data System (ADS)

    Stankiewicz, W.; Roszak, R.; Morzyński, M.

    2013-06-01

Flow-induced deflections of aircraft structures result in oscillations that may develop into dangerous phenomena such as flutter or buffeting. In this paper the design of an aeroelastic system consisting of a Reduced Order Model (ROM) of the flow with a moving boundary is presented. The model is based on Galerkin projection of the governing equations onto a space spanned by modes obtained from high-fidelity computations. The motion of the boundary and mesh is defined in the Arbitrary Lagrangian-Eulerian (ALE) approach and results in an additional convective term in the Galerkin system. The developed system is demonstrated on the example of a flow around an oscillating wing.

  2. Growth and Dissolution of Macromolecular Markov Chains

    NASA Astrophysics Data System (ADS)

    Gaspard, Pierre

    2016-07-01

    The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.
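The kth-order dependence described above can be made concrete with a small simulation. The alphabet, the persistence values, and the choice k = 1 below are illustrative assumptions, not parameters from the paper:

```python
import random

def simulate_kth_order(transitions, k, alphabet, n, seed=0):
    """Generate a sequence in which each symbol depends on the previous
    k symbols. `transitions` maps a k-tuple context to a dict of
    next-symbol probabilities; unseen contexts fall back to uniform."""
    rng = random.Random(seed)
    seq = [rng.choice(alphabet) for _ in range(k)]  # arbitrary initial context
    for _ in range(n - k):
        ctx = tuple(seq[-k:])
        probs = transitions.get(ctx, {a: 1.0 / len(alphabet) for a in alphabet})
        r, acc = rng.random(), 0.0
        for sym, p in probs.items():
            acc += p
            if r < acc:
                seq.append(sym)
                break
        else:                      # guard against floating-point round-off
            seq.append(alphabet[-1])
    return seq

# First-order (k = 1) example: a "sticky" two-monomer chain
T = {("A",): {"A": 0.9, "B": 0.1}, ("B",): {"A": 0.1, "B": 0.9}}
chain = simulate_kth_order(T, 1, ["A", "B"], 10000)
# long runs of identical monomers reflect the persistence encoded in T
```

Statistics such as block-length distributions or sequence disorder can then be estimated empirically from `chain` and compared with the analytical expressions derived in the paper.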

  3. A second order residual based predictor-corrector approach for time dependent pollutant transport

    NASA Astrophysics Data System (ADS)

    Pavan, S.; Hervouet, J.-M.; Ricchiuto, M.; Ata, R.

    2016-08-01

We present a second order residual distribution scheme for scalar transport problems in shallow water flows. The scheme, suitable for unsteady cases, is obtained by adapting to the shallow water context the explicit Runge-Kutta schemes for scalar equations [1]. The resulting scheme is decoupled from the hydrodynamics, yet the continuity equation has to be considered in order to respect some important numerical properties at the discrete level. Beyond the classical characteristics of the residual formulation presented in [1,2], we introduce the possibility of iterating the corrector step in order to improve the accuracy of the scheme. Another novelty is that the scheme is based on a precise monotonicity condition which guarantees that the maximum principle is respected. We thus end up with a scheme which is mass conservative, second order accurate and monotone. These properties are checked in the numerical tests, where the proposed approach is also compared to some finite volume schemes on unstructured grids. The results obtained show the interest of adopting the predictor-corrector scheme for pollutant transport applications, where conservation of mass, monotonicity and accuracy are the most relevant concerns.

  4. Behavior Detection using Confidence Intervals of Hidden Markov Models

    SciTech Connect

    Griffin, Christopher H

    2009-01-01

Markov models are commonly used to analyze real-world problems. Their combination of discrete states and stochastic transitions is suited to applications with deterministic and stochastic components. Hidden Markov Models (HMMs) are a class of Markov model commonly used in pattern recognition. Currently, HMMs recognize patterns using a maximum likelihood approach. One major drawback of this approach is that data observations are mapped to HMMs without considering the number of data samples available. Another problem is that this approach is only useful for choosing between HMMs. It does not provide a criterion for determining whether or not a given HMM adequately matches the data stream. In this work, we recognize complex behaviors using HMMs and confidence intervals. The certainty of a data match increases with the number of data samples considered. Receiver Operating Characteristic curves are used to find the optimal threshold for either accepting or rejecting an HMM description. We present one example using a family of HMMs to show the utility of the proposed approach. A second example using models extracted from a database of consumer purchases provides additional evidence that this approach can perform better than existing techniques.
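The maximum-likelihood matching that the confidence-interval method builds on reduces to evaluating sequence likelihoods under each candidate HMM. A minimal sketch of that building block, using a toy two-state model with invented numbers, is:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    n_states = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = sum(alpha)                       # scale to avoid underflow
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[s] * A[s][j] for s in range(n_states)) * B[j][obs[t]]
                 for j in range(n_states)]
    return loglik + math.log(sum(alpha))

# Toy two-state HMM with binary observations (illustrative numbers)
pi = [0.6, 0.4]                              # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]                 # state transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]                 # emission probabilities
ll = forward_loglik([0, 1, 0, 0, 1], pi, A, B)
```

Comparing such log-likelihoods across models gives the usual maximum-likelihood decision; the paper's contribution is to augment this with sample-size-aware confidence intervals and an ROC-derived accept/reject threshold.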

  5. A reduced-order approach to four-dimensional variational data assimilation using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Cao, Yanhua; Zhu, Jiang; Navon, I. M.; Luo, Zhendong

    2007-04-01

Four-dimensional variational data assimilation (4DVAR) is a powerful tool for data assimilation in meteorology and oceanography. However, a major hurdle in the use of 4DVAR for realistic general circulation models is the dimension of the control space (generally equal to the size of the model state variable and typically of order 10{sup 7}-10{sup 8}) and the high computational cost of evaluating the cost function and its gradient, which requires integrating the model and its adjoint model. In this paper, we propose a 4DVAR approach based on proper orthogonal decomposition (POD). POD is an efficient way to carry out reduced order modelling by identifying the few most energetic modes in a sequence of snapshots from a time-dependent system, providing a means of obtaining a low-dimensional description of the system's dynamics. The POD-based 4DVAR dramatically reduces both the dimension of the control space and the size of the dynamical model. The novelty of our approach also consists in the inclusion of adaptivity, applied when, in the process of iterative control, the new control variables depart significantly from those on which the POD model was based. In addition, these approaches allow the adjoint model to be constructed conveniently. The proposed POD-based 4DVAR methods are tested and demonstrated using a reduced gravity wave ocean model in the Pacific domain in the context of identical twin data assimilation experiments. A comparison with data assimilation experiments in the full model space shows that, with an appropriate selection of the basis functions, the optimization in the POD space is able to provide accurate results at a reduced computational cost. The POD-based 4DVAR methods have the potential to approximate the performance of full order 4DVAR with less than 1/100 of the computer time of the full order 4DVAR. The HFTN (Hessian-free truncated-Newton) algorithm benefits most from the order reduction (see (Int. J. Numer. Meth. Fluids, in press)) since
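The core of POD, extracting a reduced basis from snapshots, can be sketched with an SVD. This is a generic illustration on synthetic data, not the authors' ocean-model setup; the state dimension, snapshot count, and energy threshold are invented:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis via SVD of the snapshot matrix (columns = model states
    at sampled times): keep the fewest modes capturing `energy` of the
    squared singular-value spectrum."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# Synthetic snapshots with an exactly 3-dimensional underlying structure
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((200, 3)))
V0, _ = np.linalg.qr(rng.standard_normal((50, 3)))
X = U0 @ np.diag([10.0, 5.0, 2.0]) @ V0.T    # 200-dim state, 50 snapshots
Phi = pod_basis(X)
# reduced coordinates: q = Phi.T @ x ; reconstruction: x ~ Phi @ q
```

In the POD-based 4DVAR of the paper, the optimization is then carried out over the low-dimensional coefficients `q` rather than the full model state, which is where the cost reduction comes from.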

  6. Comparing quantum versus Markov random walk models of judgements measured by rating scales

    PubMed Central

    Wang, Z.; Busemeyer, J. R.

    2016-01-01

    Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements from either their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum over the Markov model. PMID:26621984

  7. Operations and support cost modeling using Markov chains

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1989-01-01

Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
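Associating cost elements with transition probabilities can be sketched with an absorbing Markov chain: expected time in each transient state comes from the fundamental matrix, and expected life-cycle cost is its product with per-state costs. The states, probabilities, and costs below are invented for illustration, not taken from the HSTV model:

```python
import numpy as np

# Transient states: 0 = mission operations, 1 = scheduled maintenance,
# 2 = unscheduled repair; the (implicit) absorbing state is retirement.
# Q[i, j]: per-cycle transition probabilities among transient states;
# each row sums to < 1, the remainder being the probability of retirement.
Q = np.array([[0.80, 0.15, 0.04],
              [0.90, 0.05, 0.04],
              [0.70, 0.10, 0.10]])
c = np.array([1.0, 3.0, 8.0])   # cost incurred per cycle in each state

# Fundamental matrix: N[i, j] = expected visits to j before retirement,
# starting from i. Expected accumulated O&S cost from state i = (N @ c)[i].
N = np.linalg.inv(np.eye(3) - Q)
expected_cost = N @ c
```

The vector `expected_cost` satisfies the recursive relation E = c + Q E: the cost from a state is its own per-cycle cost plus the probability-weighted cost of its successors, which is exactly the "actual instead of ideal process" bookkeeping the abstract describes.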

  8. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  9. Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences

    PubMed Central

    Siebert, Matthias; Söding, Johannes

    2016-01-01

Position weight matrices (PWMs) are the standard model for DNA and RNA regulatory motifs. In PWMs nucleotide probabilities are independent of nucleotides at other positions. Models that account for dependencies need many parameters and are prone to overfitting. We have developed a Bayesian approach for motif discovery using Markov models in which conditional probabilities of order k − 1 act as priors for those of order k. This Bayesian Markov model (BaMM) training automatically adapts model complexity to the amount of available data. We also derive an EM algorithm for de-novo discovery of enriched motifs. For transcription factor binding, BaMMs achieve significantly (P = 1/16) higher cross-validated partial AUC than PWMs in 97% of 446 ChIP-seq ENCODE datasets and improve performance by 36% on average. BaMMs also learn complex multipartite motifs, improving predictions of transcription start sites, polyadenylation sites, bacterial pause sites, and RNA binding sites by 26–101%. BaMMs never performed worse than PWMs. These robust improvements argue in favour of generally replacing PWMs by BaMMs. PMID:27288444
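The prior structure, order-(k − 1) conditionals regularizing the order-k estimates, can be sketched with pseudocount interpolation. This is a simplified illustration of the idea, not the BaMM training code; the pseudocount weight `alpha` and the add-one background are assumptions:

```python
from collections import Counter

def interpolated_markov_probs(seqs, k, alpha=2.0, alphabet="ACGT"):
    """Conditional probabilities up to order k, where each order-j
    estimate is shrunk toward the order-(j-1) estimate via pseudocounts
    (the prior structure behind BaMMs, in simplified form)."""
    c0 = Counter(ch for s in seqs for ch in s)
    total = sum(c0.values())
    # order-0 background with add-one smoothing, keyed by the empty context
    p = {(): {a: (c0[a] + 1) / (total + len(alphabet)) for a in alphabet}}
    for j in range(1, k + 1):
        counts, ctx_tot = Counter(), Counter()
        for s in seqs:
            for i in range(j, len(s)):
                ctx = s[i - j:i]
                counts[(ctx, s[i])] += 1
                ctx_tot[ctx] += 1
        for ctx in ctx_tot:
            # prior = lower-order conditional for the shortened context
            lower = p.get(ctx[1:] if j > 1 else (), p[()])
            p[ctx] = {a: (counts[(ctx, a)] + alpha * lower[a]) /
                         (ctx_tot[ctx] + alpha) for a in alphabet}
    return p

probs = interpolated_markov_probs(["ACGTACGTACGT", "ACGGACGG"], k=2)
# probs["AC"] is the order-2 distribution of the base following "AC",
# shrunk toward the order-1 distribution of the base following "C"
```

With few observations of a context, the estimate stays close to its lower-order prior; with many, the data dominate, which is the "complexity adapts to the amount of data" behavior the abstract describes.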

  10. The application of the Gibbs-Bogoliubov-Feynman inequality in mean field calculations for Markov random fields.

    PubMed

    Zhang, J

    1996-01-01

    The Gibbs-Bogoliubov-Feynman (GBF) inequality of statistical mechanics is adopted, with an information-theoretic interpretation, as a general optimization framework for deriving and examining various mean field approximations for Markov random fields (MRF's). The efficacy of this approach is demonstrated through the compound Gauss-Markov (CGM) model, comparisons between different mean field approximations, and experimental results in image restoration.

  11. Markov and semi-Markov processes as a failure rate

    NASA Astrophysics Data System (ADS)

    Grabski, Franciszek

    2016-06-01

In this paper the reliability function is defined by a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms allow the reliability functions to be found for the alternating, Poisson and Furry-Yule failure rate processes.

  12. New approach for identifying the zero-order fringe in variable wavelength interferometry

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Litwin, Dariusz; Daszkiewicz, Marek

    2016-12-01

The family of VAWI techniques (for transmitted and reflected light) is especially efficient for characterizing objects when the optical path difference in the interference system exceeds a few wavelengths. The classical approach, which consists in measuring the deflection of interference fringes, fails because of strong edge effects. The broken continuity of interference fringes prevents correct identification of the zero-order fringe, which leads to significant errors. The family of these methods was proposed originally by Professor Pluta in the 1980s, but at that time image processing facilities and computers were hardly available. Automated devices open up a completely new approach to the classical measurement procedures. The Institute team has taken that opportunity and transformed the technique into fully automated measurement devices of commercial, industry-grade quality. The method itself has been modified, and new solutions and algorithms have simultaneously extended the field of application. This concerns both the construction aspects of the systems and software development in the context of creating computerized instruments. The VAWI collection of instruments now constitutes the core of the Institute's commercial offer. It is practically applicable in industrial environments for measuring textile and optical fibers and strips of thin films, and for testing wave plates and nonlinear effects in different materials. This paper describes new algorithms for identifying the zero-order fringe, which increase the performance of the system as a whole, and presents some examples of measurements of optical elements.

  13. Next-to-leading order gravitational spin-orbit coupling in an effective field theory approach

    SciTech Connect

    Levi, Michele

    2010-11-15

    We use an effective field theory (EFT) approach to calculate the next-to-leading order (NLO) gravitational spin-orbit interaction between two spinning compact objects. The NLO spin-orbit interaction provides the most computationally complex sector of the NLO spin effects, previously derived within the EFT approach. In particular, it requires the inclusion of nonstationary cubic self-gravitational interaction, as well as the implementation of a spin supplementary condition (SSC) at higher orders. The EFT calculation is carried out in terms of the nonrelativistic gravitational field parametrization, making the calculation more efficient with no need to rely on automated computations, and illustrating the coupling hierarchy of the different gravitational field components to the spin and mass sources. Finally, we show explicitly how to relate the EFT derived spin results to the canonical results obtained with the Arnowitt-Deser-Misner (ADM) Hamiltonian formalism. This is done using noncanonical transformations, required due to the implementation of covariant SSC, as well as canonical transformations at the level of the Hamiltonian, with no need to resort to the equations of motion or the Dirac brackets.

  14. A unidirectional approach for d-dimensional finite element methods for higher order on sparse grids

    SciTech Connect

    Bungartz, H.J.

    1996-12-31

In recent years, sparse grids have turned out to be a very interesting approach for the efficient iterative numerical solution of elliptic boundary value problems. In comparison to standard (full grid) discretization schemes, the number of grid points can be reduced significantly from O(N{sup d}) to O(N(log{sub 2}(N)){sup d-1}) in the d-dimensional case, whereas the accuracy of the approximation to the finite element solution is only slightly deteriorated: for piecewise d-linear basis functions, e.g., an accuracy of the order O(N{sup -2}(log{sub 2}(N)){sup d-1}) with respect to the L{sub 2}-norm and of the order O(N{sup -1}) with respect to the energy norm has been shown. Furthermore, regular sparse grids can be extended in a very simple and natural manner to adaptive ones, which makes the hierarchical sparse grid concept applicable to problems that require adaptive grid refinement, too. An approach is presented for the Laplacian on a unit domain in this paper.

  15. An analytical approach to estimating the first order x-ray scatter in heterogeneous medium.

    PubMed

    Yao, Weiguang; Leszczynski, Konrad W

    2009-07-01

X-ray scatter estimation in heterogeneous media is a challenge in improving the quality of diagnostic projection images and volumetric image reconstruction. For Compton scatter, the statistical behavior of the first order scatter can be accurately described using the Klein-Nishina expression for the Compton scattering cross section, provided that exact information about the medium, including its geometry and attenuation, is available, which in practice it is not. The authors present an approach to approximately separate the unknowns from the Klein-Nishina formula and express the unknown part in terms of the primary x-ray intensity at the detector. The approximation is fitted to the exact solution of the Klein-Nishina formulas by introducing one parameter, whose value is shown to be insensitive to the linear attenuation coefficient and thickness of the scatterer. The performance of the approach is evaluated by comparing the result with those from the Klein-Nishina formula and Monte Carlo simulations. The approximation is close to the exact solution and the Monte Carlo simulation results for parallel and cone beam imaging systems with various field sizes, air gaps, and mono- and polyenergetic primary photons, and for nonhomogeneous scatterers with various geometries of slabs and cylinders. For a wide range of x-ray energies, including those often used in kilo- and megavoltage cone beam computed tomography, the first order scatter fluence at the detector is mainly from Compton scatter. Thus, the approximate relation between the first order scatter and primary fluences at the detector is useful for scatter estimation in physical phantom projections.

  16. A Fourth-Order Spline Collocation Approach for the Solution of a Boundary Layer Problem

    NASA Astrophysics Data System (ADS)

Sayfy; Khoury, S.

    2011-09-01

A finite element approach, based on cubic B-spline collocation, is presented for the numerical solution of a class of singularly perturbed two-point boundary value problems that possess a boundary layer at one or two end points. Due to the existence of a layer, the problem is handled using an adaptive spline collocation approach constructed over non-uniform Shishkin-like meshes, defined via a carefully selected generating function. To tackle any nonlinearity, an iterative scheme arising from Newton's method is employed. The rate of convergence is verified to be of fourth order and is calculated using the double-mesh principle. The efficiency and applicability of the method are demonstrated by applying it to a number of linear and nonlinear examples. The numerical solutions are compared with both analytical and other existing numerical solutions in the literature. The numerical results confirm that this method is superior to other accessible approaches and yields more accurate solutions.

  17. An Integral-Direct Linear-Scaling Second-Order Møller-Plesset Approach.

    PubMed

    Nagy, Péter R; Samu, Gyula; Kállay, Mihály

    2016-10-11

    An integral-direct, iteration-free, linear-scaling, local second-order Møller-Plesset (MP2) approach is presented, which is also useful for spin-scaled MP2 calculations as well as for the efficient evaluation of the perturbative terms of double-hybrid density functionals. The method is based on a fragmentation approximation: the correlation contributions of the individual electron pairs are evaluated in domains constructed for the corresponding localized orbitals, and the correlation energies of distant electron pairs are computed with multipole expansions. The required electron repulsion integrals are calculated directly invoking the density fitting approximation; the storage of integrals and intermediates is avoided. The approach also utilizes natural auxiliary functions to reduce the size of the auxiliary basis of the domains and thereby the operation count and memory requirement. Our test calculations show that the approach recovers 99.9% of the canonical MP2 correlation energy and reproduces reaction energies with an average (maximum) error below 1 kJ/mol (4 kJ/mol). Our benchmark calculations demonstrate that the new method enables MP2 calculations for molecules with more than 2300 atoms and 26000 basis functions on a single processor.

  18. A quantitative dynamical systems approach to differential learning: self-organization principle and order parameter equations.

    PubMed

    Frank, T D; Michelbrink, M; Beckmann, H; Schöllhorn, W I

    2008-01-01

Differential learning is a learning concept that assists subjects in finding individual optimal performance patterns for given complex motor skills. To this end, training is provided in terms of noisy training sessions that feature a large variety of between-exercises differences. Several previous experimental studies have shown that performance improvement due to differential learning is higher than that due to traditional learning, and that improvement due to differential learning occurs even during post-training periods. In this study we develop a quantitative dynamical systems approach to differential learning. Accordingly, differential learning is regarded as a self-organized process that results in the emergence of subject- and context-dependent attractors. These attractors emerge due to noise-induced bifurcations involving order parameters in terms of learning rates. In contrast, traditional learning is regarded as an externally driven process that results in the emergence of environmentally specified attractors. Performance improvement during post-training periods is explained as a hysteresis effect. An order parameter equation for differential learning involving a fourth-order polynomial potential is discussed explicitly. New predictions concerning the relationship between traditional and differential learning are derived.

  19. Adaptive relaxation for the steady-state analysis of Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1994-01-01

    We consider a variant of the well-known Gauss-Seidel method for the solution of Markov chains in steady state. Whereas the standard algorithm visits each state exactly once per iteration in a predetermined order, the alternative approach uses a dynamic strategy. A set of states to be visited is maintained which can grow and shrink as the computation progresses. In this manner, we hope to concentrate the computational work in those areas of the chain in which maximum improvement in the solution can be achieved. We consider the adaptive approach both as a solver in its own right and as a relaxation method within the multi-level algorithm. Experimental results show significant computational savings in both cases.
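The dynamic work-set strategy can be sketched as follows. This is a generic illustration of the adaptive idea on a small chain, not the paper's multi-level implementation; the birth-death example chain is invented:

```python
import numpy as np

def adaptive_gauss_seidel(P, tol=1e-12, max_updates=1_000_000):
    """Steady state of an irreducible Markov chain, pi = pi P, via
    Gauss-Seidel updates driven by a dynamic work set: a state is
    revisited only when one of its inputs has changed."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    out = [np.nonzero(P[i])[0] for i in range(n)]   # balance eqs reading x[i]
    work, in_work = list(range(n)), [True] * n
    updates = 0
    while work and updates < max_updates:
        i = work.pop()
        in_work[i] = False
        # balance equation: x_i (1 - P_ii) = sum_{j != i} x_j P_ji
        new = (x @ P[:, i] - x[i] * P[i, i]) / (1.0 - P[i, i])
        if abs(new - x[i]) > tol:
            x[i] = new
            for j in out[i]:            # successors now have a stale balance
                if j != i and not in_work[j]:
                    work.append(j)
                    in_work[j] = True
        updates += 1
    return x / x.sum()

# Birth-death chain whose stationary distribution is known analytically
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])
pi = adaptive_gauss_seidel(P)   # should approach [1/6, 1/3, 1/3, 1/6]
```

Once a region of the chain has converged, its states drop out of the work set and receive no further updates, which is the computational saving the paper reports.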

  20. Matrix approach to discrete fractional calculus III: non-equidistant grids, variable step length and distributed orders.

    PubMed

    Podlubny, Igor; Skovranek, Tomas; Vinagre Jara, Blas M; Petras, Ivo; Verbitsky, Viktor; Chen, YangQuan

    2013-05-13

    In this paper, we further develop Podlubny's matrix approach to discretization of integrals and derivatives of non-integer order. Numerical integration and differentiation on non-equidistant grids is introduced and illustrated by several examples of numerical solution of differential equations with fractional derivatives of constant orders and with distributed-order derivatives. In this paper, for the first time, we present a variable-step-length approach that we call 'the method of large steps', because it is applied in combination with the matrix approach for each 'large step'. This new method is also illustrated by an easy-to-follow example. The presented approach allows fractional-order and distributed-order differentiation and integration of non-uniformly sampled signals, and opens the way to development of variable- and adaptive-step-length techniques for fractional- and distributed-order differential equations.
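The matrix approach on the baseline equidistant grid (which the paper generalizes to non-equidistant grids and large steps) can be sketched with a lower-triangular Toeplitz matrix of Grünwald-Letnikov coefficients. The test function and step size below are chosen for illustration:

```python
import math
import numpy as np

def gl_diff_matrix(alpha, n, h):
    """Lower-triangular Toeplitz matrix B such that B @ f approximates
    the fractional derivative of order alpha of the samples f on an
    equidistant grid of n points with step h, using Grünwald-Letnikov
    coefficients w_k = (-1)^k C(alpha, k)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    B = np.zeros((n, n))
    for i in range(n):
        B[i, : i + 1] = w[: i + 1][::-1]    # row i: [w_i, ..., w_1, w_0]
    return B / h**alpha

h, n, alpha = 0.001, 1001, 0.5
t = np.arange(n) * h
approx = gl_diff_matrix(alpha, n, h) @ t    # D^{0.5} of f(t) = t
exact = t**0.5 / math.gamma(1.5)            # known closed form
```

The whole discrete operator is a single matrix, so differentiation of a sampled signal is one matrix-vector product; the paper's non-equidistant and distributed-order variants replace this Toeplitz matrix with more general triangular matrices.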

  1. Liouville equation and Markov chains: epistemological and ontological probabilities

    NASA Astrophysics Data System (ADS)

    Costantini, D.; Garibaldi, U.

    2006-06-01

The greatest difficulty of a probabilistic approach to the foundations of Statistical Mechanics lies in the fact that for a system ruled by classical or quantum mechanics a basic description exists whose evolution is deterministic. For such a system any kind of irreversibility is impossible in principle. The probability used in this approach is epistemological. On the contrary, for irreducible aperiodic Markov chains the invariant measure is reached with probability one whatever the initial conditions. The uniform distributions on which the equilibrium treatment of quantum and classical perfect gases is based are almost surely reached as time goes by. The transition probability for binary collisions, deduced from the Ehrenfest-Brillouin model, defines an irreducible aperiodic Markov chain and thus an equilibrium distribution. This means that we are describing the temporal probabilistic evolution of the system. The probability involved in this evolution is ontological.
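The convergence to an invariant measure regardless of initial conditions can be illustrated with the classic Ehrenfest urn chain, a simpler relative of the Ehrenfest-Brillouin model mentioned above (the urn size and run length are arbitrary choices):

```python
import random
from collections import Counter

# Ehrenfest urn chain: N balls in two urns; each step moves one
# uniformly chosen ball to the other urn. Its invariant measure is
# Binomial(N, 1/2), and long-run occupation frequencies approach it
# from any starting configuration.
random.seed(1)
N, steps = 10, 200_000
k = 0                          # start far from equilibrium: left urn empty
visits = Counter()
for _ in range(steps):
    if random.random() < k / N:
        k -= 1                 # the chosen ball was in the left urn
    else:
        k += 1                 # the chosen ball was in the right urn
    visits[k] += 1
freq = {s: visits[s] / steps for s in range(N + 1)}
# freq[s] should be close to C(N, s) / 2**N for every s
```

Even though the start state (all balls in one urn) is maximally atypical, the time averages settle onto the binomial invariant measure, mirroring the abstract's point that the equilibrium distribution is reached almost surely.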

  2. Optimal Control of Markov Processes with Age-Dependent Transition Rates

    SciTech Connect

Ghosh, Mrinal K.; Saha, Subhamay

    2012-10-15

    We study optimal control of Markov processes with age-dependent transition rates. The control policy is chosen continuously over time based on the state of the process and its age. We study infinite horizon discounted cost and infinite horizon average cost problems. Our approach is via the construction of an equivalent semi-Markov decision process. We characterise the value function and optimal controls for both discounted and average cost cases.

  3. An abstract specification language for Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1985-01-01

Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.

  4. An abstract language for specifying Markov reliability models

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1986-01-01

Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.

  5. The sharp constant in Markov's inequality for the Laguerre weight

    SciTech Connect

    Sklyarov, Vyacheslav P

    2009-06-30

We prove that the polynomial of degree n that deviates least from zero in the uniformly weighted metric with Laguerre weight is the extremal polynomial in Markov's inequality for the norm of the kth derivative. Moreover, the corresponding sharp constant does not exceed (8{sup k} n! k!)/((n-k)! (2k)!). For the derivative of a fixed order this bound is asymptotically sharp as n → ∞. Bibliography: 20 items.

  6. Photoassociation of a cold-atom-molecule pair. II. Second-order perturbation approach

    SciTech Connect

    Lepers, M.; Vexiau, R.; Bouloufa, N.; Dulieu, O.; Kokoouline, V.

    2011-04-15

    The electrostatic interaction between an excited atom and a diatomic ground-state molecule in an arbitrary rovibrational level at large mutual separations is investigated with a general second-order perturbation theory, in the perspective of modeling the photoassociation between cold atoms and molecules. We find that the combination of quadrupole-quadrupole and van der Waals interactions competes with the rotational energy of the dimer, limiting the range of validity of the perturbative approach to distances larger than 100 Bohr radii. Numerical results are given for the long-range interaction between Cs and Cs{sub 2}, showing that the photoassociation is probably efficient for any Cs{sub 2} rotational energy.

  7. Using Games to Teach Markov Chains

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2003-01-01

    Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
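The expected-game-length computations mentioned can be reproduced with the standard absorbing-chain machinery. The board, spinner, and jumps below are invented for illustration and are much smaller than the games analyzed in the article:

```python
import numpy as np

# A miniature Chutes-and-Ladders-style board as an absorbing Markov
# chain (toy board, not the real game): squares 0..9, square 9 is the
# absorbing "win" state. A spinner gives 1, 2 or 3 with equal
# probability; landing on 2 climbs a ladder to 6, landing on 7 slides
# down a chute to 1; overshooting square 9 means staying put.
jumps = {2: 6, 7: 1}

def build_transition(n=10, spins=(1, 2, 3)):
    P = np.zeros((n, n))
    for i in range(n - 1):
        for s in spins:
            j = i + s
            j = j if j < n else i          # overshoot: stay on square i
            j = jumps.get(j, j)            # apply ladder or chute if present
            P[i, j] += 1 / len(spins)
    P[n - 1, n - 1] = 1.0                  # winning square absorbs
    return P

P = build_transition()
Q = P[:-1, :-1]                            # transitions among transient squares
N = np.linalg.inv(np.eye(9) - Q)           # fundamental matrix: expected visits
expected_turns = N.sum(axis=1)[0]          # expected game length from the start
```

Row i of the fundamental matrix counts the expected visits to each square before winning, so its row sum is exactly the expected number of turns, the quantity the article derives simple formulas for.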

  8. Semi-Markov Unreliability-Range Evaluator

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1988-01-01

    Reconfigurable, fault-tolerant systems modeled. Semi-Markov unreliability-range evaluator (SURE) computer program is software tool for analysis of reliability of reconfigurable, fault-tolerant systems. Based on new method for computing death-state probabilities of semi-Markov model. Computes accurate upper and lower bounds on probability of failure of system. Written in PASCAL.

  9. Building Simple Hidden Markov Models. Classroom Notes

    ERIC Educational Resources Information Center

    Ching, Wai-Ki; Ng, Michael K.

    2004-01-01

    Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
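    When the hidden-state path happens to be available in training data, the transition probabilities can be estimated by simple counting and row normalization; a minimal sketch (an illustrative estimator, not the note's worked example):

```python
from collections import Counter

def estimate_transitions(states, path):
    """Maximum-likelihood transition estimates from a labelled state path:
    count each i -> j step and normalise every row to sum to one."""
    counts = Counter(zip(path, path[1:]))   # frequency of each (i, j) step
    totals = Counter(path[:-1])             # how often each state is left
    return {i: {j: counts[(i, j)] / totals[i] for j in states} for i in states}

P = estimate_transitions(["A", "B"], list("AABBABAAAB"))
print(P["A"]["B"])  # fraction of steps out of A that land in B
```

    When the states are truly hidden, the same counts are replaced by expected counts, which is what the Baum-Welch algorithm iterates.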

  10. An introduction to hidden Markov models.

    PubMed

    Schuster-Böckler, Benjamin; Bateman, Alex

    2007-06-01

    This unit introduces the concept of hidden Markov models in computational biology. It describes them using simple biological examples, requiring as little mathematical knowledge as possible. The unit also presents a brief history of hidden Markov models and an overview of their current applications before concluding with a discussion of their limitations.

  11. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have a log2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of ((ln N - 1)/π ≈ 0.221 log2 N) and the upper bound of 0.433 log2 N. We sought to lower the known upper bound of the OSP. Following Farhi et al. [MITCTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm restraints. With these restraints, one can find Laurent polynomials for various k (queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, overall ensuring that further improvements will likely be made to reach the theorized lower bound.

  12. Different coupled atmosphere-recharge oscillator Low Order Models for ENSO: a projection approach.

    NASA Astrophysics Data System (ADS)

    Bianucci, Marco; Mannella, Riccardo; Merlino, Silvia; Olivieri, Andrea

    2016-04-01

    El Niño-Southern Oscillation (ENSO) is a large-scale geophysical phenomenon where, according to the celebrated recharge oscillator model (ROM), the slow Ocean variables, given by the East Pacific Sea Surface Temperature (SST) and the average thermocline depth (h), interact with some fast "irrelevant" ones representing mostly the atmosphere (the westerly wind bursts and the Madden-Julian Oscillation). The fast variables are usually inserted in the model as an external stochastic forcing. In a recent work (M. Bianucci, "Analytical probability density function for the statistics of the ENSO phenomenon: asymmetry and power law tail," Geophysical Research Letters, in press) the author, using a projection approach applied to general deterministic coupled systems, gives a physically reasonable explanation for the use of stochastic models for mimicking the apparent random features of the ENSO phenomenon. Moreover, in the same paper, assuming that the interaction between the ROM and the fast atmosphere is of multiplicative type, i.e., that it depends on the SST variable, an analytical expression for the equilibrium density function of the SST anomaly is obtained. This expression fits the data from observations well, reproducing the asymmetry and the power-law tail of the histograms of the NIÑO3 index. Here, using the same theoretical approach, we consider and discuss different kinds of interactions between the ROM and the other perturbing variables, and we also take into account a nonlinear ROM as a low order model for ENSO. The theoretical and numerical results are then compared with data from observations.

  13. Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order

    NASA Astrophysics Data System (ADS)

    Gu, Zheng-Cheng; Wen, Xiao-Gang

    2009-10-01

    We study the renormalization group flow of the Lagrangian for statistical and quantum systems by representing their path integral in terms of a tensor network. Using a tensor-entanglement-filtering renormalization approach that removes local entanglement and produces a coarse-grained lattice, we show that the resulting renormalization flow of the tensors in the tensor network has a nice fixed-point structure. The isolated fixed-point tensors Tinv plus the symmetry group Gsym of the tensors (i.e., the symmetry group of the Lagrangian) characterize various phases of the system. Such a characterization can describe both the symmetry breaking phases and topological phases, as illustrated by two-dimensional (2D) statistical Ising model, 2D statistical loop-gas model, and 1+1D quantum spin-1/2 and spin-1 models. In particular, using such a (Gsym,Tinv) characterization, we show that the Haldane phase for a spin-1 chain is a phase protected by the time-reversal, parity, and translation symmetries. Thus the Haldane phase is a symmetry-protected topological phase. The (Gsym,Tinv) characterization is more general than the characterizations based on the boundary spins and string order parameters. The tensor renormalization approach also allows us to study continuous phase transitions between symmetry breaking phases and/or topological phases. The scaling dimensions and the central charges for the critical points that describe those continuous phase transitions can be calculated from the fixed-point tensors at those critical points.

  14. Toroidal figures of equilibrium from a second-order accurate, accelerated SCF method with subgrid approach

    NASA Astrophysics Data System (ADS)

    Huré, J.-M.; Hersant, F.

    2017-02-01

    We compute the structure of a self-gravitating torus with polytropic equation of state (EOS) rotating in an imposed centrifugal potential. The Poisson solver is based on isotropic multigrid with optimal covering factor (fluid section-to-grid area ratio). We work at second order in the grid resolution for both finite difference and quadrature schemes. For soft EOS (i.e. polytropic index n ≥ 1), the underlying second order is naturally recovered for boundary values and any other integrated quantity sensitive to the mass density (mass, angular momentum, volume, virial parameter, etc.), i.e. errors vary with the number N of nodes per direction as ˜1/N2. This is, however, not observed for purely geometrical quantities (surface area, meridional section area, volume), unless a subgrid approach is considered (i.e. boundary detection). Equilibrium sequences are also much better described, especially close to critical rotation. Yet another technical effort is required for hard EOS (n < 1), due to infinite mass density gradients at the fluid surface. We fix the problem by using kernel splitting. Finally, we propose an accelerated version of the self-consistent field (SCF) algorithm based on a node-by-node pre-conditioning of the mass density at each step. The computing time is reduced by a factor of 2 typically, regardless of the polytropic index. There is a priori no obstacle to applying these results and techniques to ellipsoidal configurations and even to 3D configurations.

  15. A parallel approach for image segmentation by numerical minimization of a second-order functional

    NASA Astrophysics Data System (ADS)

    Zanella, Riccardo; Zanetti, Massimo; Ruggiero, Valeria

    2016-10-01

    Because of its attractive features, image segmentation has proved to be a promising tool in remote sensing. A known drawback of its implementation is computational complexity. Recently in [1] an efficient numerical method has been proposed for the minimization of a second-order variational approximation of the Blake-Zisserman functional. The method is an especially tailored version of the block-coordinate descent algorithm (BCDA). In order to enable the segmentation of large-size gridded data, such as Digital Surface Models, we combine a domain decomposition technique with BCDA and a parallel interconnection rule among blocks of variables. We aim to show that a simple tiling strategy enables us to treat large images even on a commodity multicore CPU, with no need for specific post-processing on tile junctions. From the point of view of performance, little computational effort is required to separate data into subdomains, and the running time is mainly spent in concurrently solving the independent subproblems. Numerical results are provided to evaluate the effectiveness of the proposed parallel approach.

  16. Self-energy effects in the Polchinski and Wick-ordered renormalization-group approaches

    NASA Astrophysics Data System (ADS)

    Katanin, A.

    2011-12-01

    I discuss functional renormalization group (fRG) schemes, which allow for non-perturbative treatment of the self-energy effects and do not rely on the one-particle irreducible functional. In particular, I consider the Polchinski or Wick-ordered scheme with amputation of full (instead of bare) Green functions, as well as more general schemes, and establish their relation to the ‘dynamical adjustment propagator’ scheme by Salmhofer (2007 Ann. Phys., Lpz. 16 171). While in the Polchinski scheme the amputation of full (instead of bare) Green functions improves treatment of the self-energy effects, the structure of the corresponding equations is not suitable to treat strong-coupling problems; it is also not evident how the mean-field solution of these problems is recovered in this scheme. For the Wick-ordered scheme, fully or partly excluding tadpole diagrams one can obtain forms of fRG hierarchy, which are suitable to treat strong-coupling problems. In particular, I emphasize the usefulness of the schemes, which are local in the cutoff parameter, and compare them to the one-particle irreducible approach.

  17. Structure determination of a partially ordered layered silicate material with an NMR crystallography approach.

    PubMed

    Brouwer, Darren Henry; Cadars, Sylvian; Hotke, Kathryn; Van Huizen, Jared; Van Huizen, Nicholas

    2017-03-01

    Structure determination of layered materials can present challenges for conventional diffraction methods due to the fact that such materials often lack full three-dimensional periodicity since adjacent layers may not stack in an orderly and regular fashion. In such cases, NMR crystallography strategies involving a combination of solid-state NMR spectroscopy, powder X-ray diffraction, and computational chemistry methods can often reveal structural details that cannot be acquired from diffraction alone. We present here the structure determination of a surfactant-templated layered silicate material that lacks full three-dimensional crystallinity using such an NMR crystallography approach. Through a combination of powder X-ray diffraction and advanced (29)Si solid-state NMR spectroscopy, it is revealed that the structure of the silicate layer of this layered silicate material templated with cetyltrimethylammonium surfactant cations is isostructural with the silicate layer of a previously reported material referred to as ilerite, octosilicate, or RUB-18. High-field (1)H NMR spectroscopy reveals differences between the materials in terms of the ordering of silanol groups on the surfaces of the layers, as well as the contents of the inter-layer space.

  18. Gaussian approach for phase ordering in nonconserved scalar systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Filipe, J. A. N.; Bray, A. J.

    1995-01-01

    We have applied the Gaussian auxiliary field method to a nonconserved scalar system with attractive long-range interactions, falling off with distance as 1/rd+σ, where d is the spatial dimension and 0<σ<2. This study provides a test bed for the approach and shows some of the difficulties encountered in constructing a closed equation for the pair correlation function. For the relation φ=φ(m) between the order parameter φ and the auxiliary field m, the usual choice of the equilibrium interfacial profile is made. The equation obtained for the equal-time two-point correlation function is studied in the limiting cases of small and large values of the scaling variable. A Porod regime at short distance and an asymptotic power-law decay at large distance are obtained. The theory is not, however, consistent with the expected growth law, and attempts to retrieve the correct growth lead to inconsistencies. These results indicate a failure of the Gaussian assumption for this system, when used in the context of the bulk dynamics. This statement holds at least within the present form of the mapping φ=φ(m), which appears to be the most natural choice, as well as the one consistent with the emergence of the Porod regime. By contrast, Ohta and Hayakawa have recently succeeded in implementing a Gaussian approach based on the interfacial dynamics of this system [Physica A 204, 482 (1994)]. This clearly suggests that, beyond the simplicity of short-range ``model A'' dynamics, a Gaussian approach can only capture the essential physical features if the crucial role of wall motion in domain growth is explicitly considered.

  19. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    PubMed Central

    Taheri, Shahrooz; Mat Saman, Muhamad Zameri; Wong, Kuan Yew

    2013-01-01

    One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach for minimizing tardiness which consists of four phases. First of all, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase uses a Genetic Algorithm integrated with the Traveling Salesman Problem to identify the most suitable travel path. Finally, the Genetic Algorithm is applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach. PMID:23864823

  20. Order batching in warehouses by minimizing total tardiness: a hybrid approach of weighted association rule mining and genetic algorithms.

    PubMed

    Azadnia, Amir Hossein; Taheri, Shahrooz; Ghadimi, Pezhman; Saman, Muhamad Zameri Mat; Wong, Kuan Yew

    2013-01-01

    One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach for minimizing tardiness which consists of four phases. First of all, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase uses a Genetic Algorithm integrated with the Traveling Salesman Problem to identify the most suitable travel path. Finally, the Genetic Algorithm is applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.

  1. Markov reliability models for digital flight control systems

    NASA Technical Reports Server (NTRS)

    Mcgough, John; Reibman, Andrew; Trivedi, Kishor

    1989-01-01

    The reliability of digital flight control systems can often be accurately predicted using Markov chain models. The cost of numerical solution depends on a model's size and stiffness. Acyclic Markov models, a useful special case, are particularly amenable to efficient numerical solution. Even in the general case, instantaneous coverage approximation allows the reduction of some cyclic models to more readily solvable acyclic models. After considering the solution of single-phase models, the discussion is extended to phased-mission models. Phased-mission reliability models are classified based on the state restoration behavior that occurs between mission phases. As an economical approach for the solution of such models, the mean failure rate solution method is introduced. A numerical example is used to show the influence of fault-model parameters and interphase behavior on system unreliability.
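    For a small continuous-time Markov reliability model, the probability of having reached the failure state by mission time t can be computed by uniformization; a hedged sketch with made-up failure rates (this is a generic CTMC transient solver, not the paper's instantaneous-coverage or mean-failure-rate methods):

```python
import math

def transient_probs(Q, p0, t, tol=1e-12):
    """Transient state probabilities of a CTMC via uniformization:
    p(t) = sum_k Poisson(k; q t) * p0 P^k, with P = I + Q/q."""
    n = len(Q)
    q = max(-Q[i][i] for i in range(n)) or 1.0        # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / q for j in range(n)]
         for i in range(n)]
    term = list(p0)                                   # p0 P^k, updated in place
    acc = [0.0] * n
    k, weight = 0, math.exp(-q * t)                   # Poisson(k; q t) weight
    while True:
        for i in range(n):
            acc[i] += weight * term[i]
        if weight < tol and k > q * t:                # past the Poisson mode
            return acc
        term = [sum(term[i] * P[i][j] for i in range(n)) for j in range(n)]
        k += 1
        weight *= q * t / k

# Hypothetical two-fault system: state 0 = fully up, 1 = one fault,
# 2 = system failure (absorbing); lam is a made-up failure rate.
lam = 1e-3
Q = [[-lam, lam, 0.0],
     [0.0, -lam, lam],
     [0.0, 0.0, 0.0]]
print(transient_probs(Q, [1.0, 0.0, 0.0], 10.0)[2])  # unreliability at t = 10
```

    For this simple chain the result can be checked against the closed form 1 - e^(-λt)(1 + λt).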

  2. Markov chain decision model for urinary incontinence procedures.

    PubMed

    Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha

    2017-03-13

    Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women that impacts quality of life negatively. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or even managed appropriately. Many treatments are available to manage incontinence, such as bladder training and surgical procedures such as the Burch colposuspension and the Sling, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in the Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality adjusted life years (QALY) and cost per QALY. TreeAge Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain threshold value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to study the comparative effectiveness of available treatments for patients with UI, an important public health issue widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
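    The cycle-by-cycle mechanics of such a Markov decision model reduce to propagating a cohort's state occupancies through a transition matrix and accumulating utility-weighted person-time; a sketch with illustrative, made-up probabilities and utilities (not the paper's calibrated inputs):

```python
def cohort_qalys(P, utilities, start, cycles):
    """Markov cohort model: push the state-occupancy vector through the
    transition matrix once per cycle, accumulating utility-weighted time."""
    occ = list(start)
    qalys = 0.0
    for _ in range(cycles):
        qalys += sum(p * u for p, u in zip(occ, utilities))
        occ = [sum(occ[i] * P[i][j] for i in range(len(P)))
               for j in range(len(P))]
    return qalys

# Illustrative (made-up) numbers, not the paper's: states are
# 0 = continent, 1 = incontinent, 2 = dead (absorbing).
P = [[0.90, 0.08, 0.02],
     [0.10, 0.85, 0.05],
     [0.00, 0.00, 1.00]]
print(cohort_qalys(P, [1.0, 0.7, 0.0], [1.0, 0.0, 0.0], cycles=5))
```

    Comparing two procedures then amounts to running this with each procedure's transition matrix and cost vector and taking the ratio of incremental cost to incremental QALYs.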

  3. Markov state modeling of sliding friction.

    PubMed

    Pellegrini, F; Landes, François P; Laio, A; Prestipino, S; Tosatti, E

    2016-11-01

    Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
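    The core estimation step of an MSM is counting transitions at a fixed lag time in a discretized trajectory and row-normalizing into transition probabilities; a minimal sketch (state labels, lag, and the toy trajectory are placeholders, not data from the paper):

```python
from collections import Counter

def msm_transition_matrix(traj, n_states, lag):
    """Estimate an MSM transition matrix: count i -> j transitions separated
    by `lag` steps in a discretized trajectory, then normalise each row."""
    counts = Counter(zip(traj[:-lag], traj[lag:]))
    T = []
    for i in range(n_states):
        row = [counts[(i, j)] for j in range(n_states)]
        total = sum(row) or 1       # avoid division by zero for unvisited states
        T.append([c / total for c in row])
    return T

traj = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]   # toy discretized trajectory
print(msm_transition_matrix(traj, 2, lag=1))
```

    Metastable states and relaxation timescales then follow from the eigenvalues and eigenvectors of this matrix.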

  4. Markov state modeling of sliding friction

    NASA Astrophysics Data System (ADS)

    Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.

    2016-11-01

    Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.

  5. Clustering metagenomic sequences with interpolated Markov models

    PubMed Central

    2010-01-01

    Background Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output from metagenomic sequencing is a large set of reads of unknown origin, clustering reads together that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of environments interesting to many metagenomics projects. Results We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and supervised learning method Phymm called PHYSCIMM that performs better when evolutionarily close training genomes are available. Conclusions SCIMM and PHYSCIMM are highly accurate methods to cluster metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm. PMID:21044341

  6. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    SciTech Connect

    Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.; Grama, Ananth

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function as a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms the state-of-art alignment methods both in terms of biological and topological quality of the alignments.

  7. First passage time Markov chain analysis of rare events for kinetic Monte Carlo: double kink nucleation during dislocation glide

    NASA Astrophysics Data System (ADS)

    Deo, C. S.; Srolovitz, D. J.

    2002-09-01

    We describe a first passage time Markov chain analysis of rare events in kinetic Monte Carlo (kMC) simulations and demonstrate how this analysis may be used to enhance kMC simulations of dislocation glide. Dislocation glide is described by the kink mechanism, which involves double kink nucleation, kink migration and kink-kink annihilation. Double kinks that nucleate on straight dislocations are unstable at small kink separations and tend to recombine immediately following nucleation. A very small fraction (<0.001) of nucleating double kinks survive to grow to a stable kink separation. The present approach replaces all of the events that lead up to the formation of a stable kink with a simple numerical calculation of the time required for stable kink formation. In this paper, we treat the double kink nucleation process as a temporally homogeneous birth-death Markov process and present a first passage time analysis of the Markov process in order to calculate the nucleation rate of a double kink with a stable kink separation. We discuss two methods to calculate the first passage time; one computes the distribution and the average of the first passage time, while the other uses a recursive relation to calculate the average first passage time. The average first passage times calculated by both approaches are shown to be in excellent agreement with direct Monte Carlo simulations for four idealized cases of double kink nucleation. Finally, we apply this approach to double kink nucleation on a screw dislocation in molybdenum and obtain the rates for formation of stable double kinks as a function of applied stress and temperature. Equivalent kMC simulations are too inefficient to be performed using commonly available computational resources.
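    The recursive relation has a particularly compact form for a discrete-time birth-death chain: the mean time t_i to first move from state i to i+1 satisfies t_i = (1 + d_i t_{i-1}) / b_i, and the mean first passage time from 0 to n is the sum of the t_i. A sketch with hypothetical probabilities (not the molybdenum kink parameters of the paper):

```python
def mean_first_passage(b, d):
    """Mean first passage time from state 0 to state len(b) for a discrete-time
    birth-death chain: birth probs b[i], death probs d[i] (d[0] must be 0).
    Recursion: t_i = (1 + d_i * t_{i-1}) / b_i; the answer is sum(t_i)."""
    total, t_prev = 0.0, 0.0
    for bi, di in zip(b, d):
        t_prev = (1.0 + di * t_prev) / bi    # mean time to advance i -> i+1
        total += t_prev
    return total

# Hypothetical walk on {0, 1, 2, 3}: advance or retreat with prob 1/2,
# holding at 0; "stable separation" reached at state 3.
print(mean_first_passage([0.5, 0.5, 0.5], [0.0, 0.5, 0.5]))  # 12.0
```

    In a kMC setting, this number replaces the many explicit nucleation/recombination events with a single waiting time for stable double-kink formation.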

  8. Entropy and long-range memory in random symbolic additive Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2016-06-01

    The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
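    A common way to estimate the conditional entropy of a finite symbolic sequence (a plain block-entropy estimator, simpler than the pair-correlation expression developed in the paper) is as a difference of block entropies, H(X | past k) = H_{k+1} - H_k; a sketch:

```python
import math
from collections import Counter

def conditional_entropy(seq, k):
    """Estimate the order-k conditional entropy (bits per symbol) as the
    difference of empirical block entropies: H_{k+1} - H_k."""
    def block_entropy(m):
        if m == 0:
            return 0.0
        counts = Counter(tuple(seq[i:i + m]) for i in range(len(seq) - m + 1))
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    return block_entropy(k + 1) - block_entropy(k)

seq = "01" * 500  # periodic toy sequence: one symbol of memory removes all uncertainty
print(conditional_entropy(seq, 0), conditional_entropy(seq, 1))
```

    Note that for long-range memory the block estimator becomes data-hungry as k grows, which is precisely why the paper works with pair correlations instead.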

  9. Incorporating teleconnection information into reservoir operating policies using Stochastic Dynamic Programming and a Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Turner, Sean; Galelli, Stefano; Wilcox, Karen

    2015-04-01

    Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating
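    The backward recursion underlying such a climate-conditioned SDP can be sketched as follows; every input (inflows, climate transition matrix, reward function, grid sizes) is an illustrative placeholder, and inflow is taken deterministic per climate state for brevity rather than autoregressive as in the study:

```python
def sdp_policy(n_storage, climates, inflow, P_climate, reward, horizon):
    """Backward recursion of Stochastic Dynamic Programming over a joint
    (storage, climate-state) grid.  For each stage and state, pick the
    release maximizing immediate reward plus expected future value."""
    V = {(s, c): 0.0 for s in range(n_storage) for c in climates}
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in range(n_storage):
            for c in climates:
                best, best_r = None, 0
                for r in range(s + 1):  # feasible releases given current storage
                    s2 = min(s - r + inflow[c], n_storage - 1)  # spill capped
                    val = reward(s, r) + sum(
                        P_climate[c][c2] * V[(s2, c2)] for c2 in climates)
                    if best is None or val > best:
                        best, best_r = val, r
                policy[(s, c)] = best_r
                newV[(s, c)] = best
        V = newV
    return policy, V

# Illustrative placeholders: 2 climate states (0 = dry, 1 = wet).
policy, value = sdp_policy(
    n_storage=4, climates=[0, 1], inflow=[0, 1],
    P_climate=[[0.7, 0.3], [0.4, 0.6]],
    reward=lambda s, r: r ** 0.5, horizon=12)
print(policy[(3, 0)])
```

    Conditioning on the climate state only adds one dimension to the state space, which is why the approach stays tractable.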

  10. Continuous-Time Semi-Markov Models in Health Economic Decision Making: An Illustrative Example in Heart Failure Disease Management.

    PubMed

    Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe

    2016-01-01

    Continuous-time state transition models may end up having large unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time elapsed since entry into the present state. Although these so-called semi-Markov models remain relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-day decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity.
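The distinguishing feature of a semi-Markov model is that sojourn times in a state need not be memoryless. A minimal simulation sketch, with invented state names, Weibull sojourn distributions, and made-up parameters (not the article's heart-failure model):

```python
import random

# Toy continuous-time semi-Markov illness model; all numbers are illustrative.
NEXT = {  # embedded jump-chain transition probabilities, given a jump occurs
    "stable": [("worse", 0.7), ("dead", 0.3)],
    "worse":  [("stable", 0.4), ("dead", 0.6)],
}
SHAPE = {"stable": 1.5, "worse": 0.8}   # Weibull shape: hazard varies with time in state
SCALE = {"stable": 12.0, "worse": 4.0}  # Weibull scale (e.g., months)

def simulate(seed=0, horizon=120.0):
    """Simulate one patient trajectory until death or the time horizon."""
    rng = random.Random(seed)
    t, state, path = 0.0, "stable", []
    while state != "dead" and t < horizon:
        # Non-exponential sojourn time: this is exactly what makes the
        # process semi-Markov rather than Markov.
        dwell = rng.weibullvariate(SCALE[state], SHAPE[state])
        path.append((state, dwell))
        t += dwell
        targets, probs = zip(*NEXT[state])
        state = rng.choices(targets, weights=probs)[0]
    return path, state, t

path, final_state, t_end = simulate()
```

A standard Markov model would force the `dwell` draw to be exponential (constant hazard); the Weibull shape parameter relaxes that restriction.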

  11. LTI system order reduction approach based on asymptotical equivalence and the Co-operation of biology-related algorithms

    NASA Astrophysics Data System (ADS)

    Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.

    2017-02-01

    A novel order reduction method for linear time-invariant systems is described. The method reduces the initial problem to an optimization problem, using the proposed model representation, and solves it with an efficient optimization algorithm. The proposed method of determining the model allows all parameters of the lower-order model to be identified and, by definition, provides the model with the required steady state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results showed that the proposed approach outperforms other approaches and that the reduced-order model achieves a high level of accuracy.

  12. Markov decision processes in natural resources management: observability and uncertainty

    USGS Publications Warehouse

    Williams, Byron K.

    2015-01-01

    The breadth and complexity of stochastic decision processes in natural resources present a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with them are discussed.
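A Markov decision process of the kind discussed above can be solved by standard value iteration. The following sketch uses a hypothetical 3-state, 2-action resource model (states = population low/medium/high; actions = rest/harvest); every transition probability and reward is invented for illustration.

```python
import numpy as np

# Toy MDP: P[a, s, s'] transition probabilities, R[a, s] rewards (all invented).
P = np.array([
    [[0.6, 0.4, 0.0],   # action 0 (rest): population tends to grow
     [0.1, 0.6, 0.3],
     [0.0, 0.3, 0.7]],
    [[0.9, 0.1, 0.0],   # action 1 (harvest): population tends to shrink
     [0.5, 0.5, 0.0],
     [0.1, 0.6, 0.3]],
])
R = np.array([[0.0, 0.0, 0.0],   # resting yields nothing
              [1.0, 2.0, 4.0]])  # harvesting pays more at higher population
gamma = 0.95                     # discount factor

V = np.zeros(3)
for _ in range(500):             # value iteration to near convergence
    Q = R + gamma * (P @ V)      # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)        # greedy action in each state
```

Partial observability (the POMDP case highlighted in the abstract) would replace the discrete state with a belief distribution, at substantially greater computational cost.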

  13. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.

  14. Markov decision processes in natural resources management: Observability and uncertainty

    USGS Publications Warehouse

    Williams, B.K.

    2009-01-01

    The breadth and complexity of stochastic decision processes in natural resources present a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with them are discussed.

  15. Regional land salinization assessment and simulation through cellular automaton-Markov modeling and spatial pattern analysis.

    PubMed

    Zhou, De; Lin, Zhulu; Liu, Liming

    2012-11-15

    Land salinization and desalinization are complex processes affected by both biophysical and human-induced driving factors. Conventional approaches of land salinization assessment and simulation are either too time consuming or focus only on biophysical factors. The cellular automaton (CA)-Markov model, when coupled with spatial pattern analysis, is well suited for regional assessments and simulations of salt-affected landscapes since both biophysical and socioeconomic data can be efficiently incorporated into a geographic information system framework. We hypothesized that the CA-Markov model can serve as an alternative tool for regional assessment and simulation of land salinization or desalinization. Our results suggest that the CA-Markov model, when incorporating biophysical and human-induced factors, performs better than the model which did not account for these factors when simulating the salt-affected landscape of the Yinchuan Plain (China) in 2009. In general, the CA-Markov model is best suited for short-term simulations and the performance of the CA-Markov model is largely determined by the availability of high-quality, high-resolution socioeconomic data. The coupling of the CA-Markov model with spatial pattern analysis provides an improved understanding of spatial and temporal variations of salt-affected landscape changes and an option to test different soil management scenarios for salinity management.
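The CA-Markov coupling can be illustrated with a toy binary salinity map: a Markov matrix supplies baseline per-cell transition probabilities, and a cellular-automaton rule modulates them by the local neighbourhood. The transition matrix and the modulation rule below are invented for illustration, not the study's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Baseline Markov transitions for classes 0 = non-saline, 1 = saline (invented).
P = np.array([[0.9, 0.1],   # row: current class, column: next class
              [0.2, 0.8]])

def step(grid):
    """Apply one combined Markov/CA transition to every cell."""
    padded = np.pad(grid, 1, mode="edge")
    new = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            neigh = padded[i:i + 3, j:j + 3]
            frac_saline = (neigh.sum() - grid[i, j]) / 8.0  # 8-neighbour fraction
            p_saline = P[grid[i, j], 1]
            # CA modulation (hypothetical rule): salinization is more
            # likely next to already-saline cells.
            p_saline = min(1.0, p_saline * (0.5 + frac_saline))
            new[i, j] = 1 if rng.random() < p_saline else 0
    return new

grid = np.zeros((20, 20), dtype=int)
grid[8:12, 8:12] = 1          # seed a saline patch
grid = step(grid)
```

In a real application the modulation factor would come from biophysical and socioeconomic suitability layers held in a GIS rather than a hand-written rule.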

  16. Observation uncertainty in reversible Markov chains.

    PubMed

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).

  17. Application of Markov Graphs in Marketing

    NASA Astrophysics Data System (ADS)

    Bešić, C.; Sajfert, Z.; Đorđević, D.; Sajfert, V.

    2007-04-01

    The applications of Markov process theory in marketing are discussed. Markov processes turn out to have a wide field of applications. The advancement of marketing through the convolution of stationary Markov distributions is analysed. The convolution distribution yields an average net profit roughly twice as high as that obtained with the usual Markov distribution; this can be achieved if a single selling chain is divided into two parts with different ratios of output and input frequencies. The stability of the marketing system was examined by means of conforming coefficients. Using the Jensen inequality, it was shown that the system remains stable if the initial capital is higher than the averaged losses.

  18. A Markov chain representation of the multiple testing problem.

    PubMed

    Cabras, Stefano

    2016-03-16

    The problem of multiple hypothesis testing can be represented as a Markov process where a new alternative hypothesis is accepted in accordance with its relative evidence to the currently accepted one. This virtual and not formally observed process provides the most probable set of non-null hypotheses given the data; it plays the same role as Markov Chain Monte Carlo in approximating a posterior distribution. To apply this representation and obtain the posterior probabilities over all alternative hypotheses, it is enough to have, for each test, Bayes Factors defined only up to an unknown constant. Such Bayes Factors may either arise from using default and improper priors or from calibrating p-values with respect to their corresponding Bayes Factor lower bound. Both sources of evidence are used to form a Markov transition kernel on the space of hypotheses. The approach leads to easily interpretable results and involves very simple formulas suitable for analyzing large datasets such as those arising from gene expression data (microarray or RNA-seq experiments).
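The key trick in this abstract, that an unknown multiplicative constant in the Bayes factors cancels in the transition kernel, can be sketched as a Metropolis-style random walk over hypotheses. The Bayes factor values below are made up, and this is an illustration of the cancellation idea rather than the paper's exact kernel.

```python
import random

# Hypothetical Bayes factors of five alternative hypotheses, known only
# up to a common unknown constant (the constant cancels in ratios).
bf = [3.0, 0.5, 8.0, 1.2, 0.9]

def visit_frequencies(n_iter=50_000, seed=0):
    """Random walk over hypotheses; moves accepted by Bayes factor ratios."""
    rng = random.Random(seed)
    current = 0
    counts = [0] * len(bf)
    for _ in range(n_iter):
        proposal = rng.randrange(len(bf))            # uniform (symmetric) proposal
        # Acceptance ratio bf[proposal] / bf[current]: any common unknown
        # constant multiplying all Bayes factors cancels here.
        if rng.random() < min(1.0, bf[proposal] / bf[current]):
            current = proposal
        counts[current] += 1
    return [c / n_iter for c in counts]

freqs = visit_frequencies()
```

With a symmetric proposal, the long-run visit frequencies are proportional to the Bayes factors, so hypotheses with more relative evidence are visited more often.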

  19. Towards automatic Markov reliability modeling of computer architectures

    NASA Technical Reports Server (NTRS)

    Liceaga, C. A.; Siewiorek, D. P.

    1986-01-01

    The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.

  20. Markov Boundary Discovery with Ridge Regularized Linear Models

    PubMed Central

    Visweswaran, Shyam

    2016-01-01

    Ridge regularized linear models (RRLMs), such as ridge regression and the SVM, are a popular group of methods that are used in conjunction with coefficient hypothesis testing to discover explanatory variables with a significant multivariate association to a response. However, many investigators are reluctant to draw causal interpretations of the selected variables due to the incomplete knowledge of the capabilities of RRLMs in causal inference. Under reasonable assumptions, we show that a modified form of RRLMs can get “very close” to identifying a subset of the Markov boundary by providing a worst-case bound on the space of possible solutions. The results hold for any convex loss, even when the underlying functional relationship is nonlinear, and the solution is not unique. Our approach combines ideas in Markov boundary and sufficient dimension reduction theory. Experimental results show that the modified RRLMs are competitive against state-of-the-art algorithms in discovering part of the Markov boundary from gene expression data. PMID:27170915

  1. Mesoscale Approach to Feldspar Dissolution: Quantification of Dissolution Incongruency Based on Al/Si Ordering State

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Min, Y.; Jun, Y.

    2012-12-01

    structural components (e.g., Al-O-Si and Si-O-Si linkages) may serve as better base units than minerals (i.e., the pH dependence of Al-O-Si breakdown may be more useful than the pH dependence of Si release rate from any specific mineral). Second, Al/Si ordering is expected to show effects on the structure of interfacial layers formed during water-rock interactions, because from the mass-balance perspective, the interfacial layer is inherently related to dissolution incongruency due to the elemental reactivity differences. The fact that the incongruency is quantifiable using crystallographic parameters may suggest that the formation of the interfacial layer should at least be partially attributable to intrinsic non-stoichiometric dissolution (instead of secondary phase formation). Our approach provides a new means to connect atomic scale structural properties of a mineral to its macroscale dissolution behaviors.

  2. Efficient Markov Network Structure Discovery Using Independence Tests

    PubMed Central

    Bromberg, Facundo; Margaritis, Dimitris; Honavar, Vasant

    2011-01-01

    We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be NP-hard for Markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. The independence-based approach does not require the computation of the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl’s well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the performance of statistical tests to estimate them. To accomplish this efficiently GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show GSIMN can yield significant savings with respect to GSMN*, while generating a Markov network with comparable or in some cases improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, that produces all possible conditional independences resulting from repeatedly applying Pearl’s theorems on the known conditional independence tests. The results of this comparison show that GSIMN, by the sole use of the Triangle theorem, is nearly optimal in terms of the set of independences tests that it infers. PMID:22822297

  3. Semi-Markov Unreliability Range Evaluator

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    The Semi-Markov Unreliability Range Evaluator (SURE) computer program is a software tool for the analysis of reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means of calculating accurate upper and lower bounds on the probabilities of death states for a large class of semi-Markov mathematical models, not merely those reduced to critical-pair architectures.

  4. Algorithms for Discovery of Multiple Markov Boundaries

    PubMed Central

    Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.

    2013-01-01

    Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052

  5. Relativized hierarchical decomposition of Markov decision processes.

    PubMed

    Ravindran, B

    2013-01-01

    Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).

  6. Causal Latent Markov Model for the Comparison of Multiple Treatments in Observational Longitudinal Studies

    ERIC Educational Resources Information Center

    Bartolucci, Francesco; Pennoni, Fulvia; Vittadini, Giorgio

    2016-01-01

    We extend to the longitudinal setting a latent class approach that was recently introduced by Lanza, Coffman, and Xu to estimate the causal effect of a treatment. The proposed approach enables an evaluation of multiple treatment effects on subpopulations of individuals from a dynamic perspective, as it relies on a latent Markov (LM) model that is…

  7. Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.

    PubMed

    de Pasquale, F; Del Gratta, C; Romani, G L

    2008-08-01

    In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.

  8. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of using CAME to generate appropriate semi-Markov models of fault-handling processes.

  9. Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach

    DOEpatents

    Gin, D.L.; Fischer, W.M.; Gray, D.H.; Smith, R.C.

    1998-12-15

    A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material. 13 figs.

  10. Higher Order Modeling in Hybrid Approaches to the Computation of Electromagnetic Fields

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Fink, Patrick W.; Graglia, Roberto D.

    2000-01-01

    Higher order geometry representations and interpolatory basis functions for computational electromagnetics are reviewed. Two types of vector-valued basis functions are described: curl-conforming bases, used primarily in finite element solutions, and divergence-conforming bases used primarily in integral equation formulations. Both sets satisfy Nedelec constraints, which optimally reduce the number of degrees of freedom required for a given order. Results are presented illustrating the improved accuracy and convergence properties of higher order representations for hybrid integral equation and finite element methods.

  11. New General Approach for Normally Ordering Coordinate-Momentum Operator Functions

    NASA Astrophysics Data System (ADS)

    Xu, Shi-Min; Xu, Xing-Lei; Li, Hong-Qi; Fan, Hong-Yi

    2016-12-01

    By virtue of the technique of integration within an ordered product of operators and Dirac's representation theory, we find a new general formula for normally ordering coordinate-momentum operator functions, namely $f(g\hat{Q}+h\hat{P}) = {:}\exp\!\left[\frac{g^{2}+h^{2}}{4}\,\frac{\partial^{2}}{\partial(g\hat{Q}+h\hat{P})^{2}}\right] f(g\hat{Q}+h\hat{P}){:}$, where $\hat{Q}$ and $\hat{P}$ are the coordinate and momentum operators, respectively, and the symbol ${:}\,{:}$ denotes normal ordering. Using this formula we derive a series of new relations for the Hermite and Laguerre polynomials, as well as some new differential relations.

  12. Highly ordered nanocomposites via a monomer self-assembly in situ condensation approach

    DOEpatents

    Gin, Douglas L.; Fischer, Walter M.; Gray, David H.; Smith, Ryan C.

    1998-01-01

    A method for synthesizing composites with architectural control on the nanometer scale is described. A polymerizable lyotropic liquid-crystalline monomer is used to form an inverse hexagonal phase in the presence of a second polymer precursor solution. The monomer system acts as an organic template, providing the underlying matrix and order of the composite system. Polymerization of the template in the presence of an optional cross-linking agent with retention of the liquid-crystalline order is carried out followed by a second polymerization of the second polymer precursor within the channels of the polymer template to provide an ordered nanocomposite material.

  13. Stratification of the phase clouds and statistical effects of the non-Markovity in chaotic time series of human gait for healthy people and Parkinson patients

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Demin, Sergey; Emelyanova, Natalya; Gafarov, Fail; Hänggi, Peter

    2003-03-01

    In this work we develop a new method of diagnosing diseases of the nervous system and a new approach to studying human gait dynamics with the help of the theory of discrete non-Markov random processes (Phys. Rev. E 62 (5) (2000) 6178, Phys. Rev. E 64 (2001) 066132, Phys. Rev. E 65 (2002) 046107, Physica A 303 (2002) 427). The stratification of the phase clouds and the statistical non-Markov effects in the time series of human gait dynamics are considered. We carried out a comparative analysis of data from four age groups of healthy people -- children (3 to 10 years old), teenagers (11 to 14 years old), young people (21 to 29 years old), and elderly persons (71 to 77 years old) -- and Parkinson patients. The full data set is analyzed with the help of the phase portraits of the four dynamic variables, the power spectra of the initial time correlation function and the memory functions of junior orders, and the first three points in the spectra of the statistical non-Markov parameter. The results allow us to determine the subjects' predisposition to disorders of the central nervous system caused by Parkinson's disease. We found distinct differences among the five groups. On this basis we propose a new method for diagnosing and forecasting Parkinson's disease.

  14. A novel approach toward fuzzy generalized bi-ideals in ordered semigroups.

    PubMed

    Khan, Faiz Muhammad; Sarmin, Nor Haniza; Khan, Hidayat Ullah

    2014-01-01

    In several advanced fields, such as control engineering, computer science, fuzzy automata, finite state machines, and error-correcting codes, the use of fuzzified algebraic structures, especially ordered semigroups, plays a central role. In this paper, we introduce a new and advanced generalization of fuzzy generalized bi-ideals of ordered semigroups. These new concepts are supported by suitable examples and generalize ordinary fuzzy generalized bi-ideals of ordered semigroups. Several fundamental theorems of ordered semigroups are investigated through the properties of these newly defined fuzzy generalized bi-ideals. Further, using level sets, ordinary fuzzy generalized bi-ideals are linked with the newly defined ideals, which is the most significant part of this paper.

  15. Verbal Working Memory and Language Production: Common Approaches to the Serial Ordering of Verbal Information

    PubMed Central

    Acheson, Daniel J.; MacDonald, Maryellen C.

    2010-01-01

    Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information. PMID:19210053

  16. New approach for anti-normally and normally ordering bosonic-operator functions in quantum optics

    NASA Astrophysics Data System (ADS)

    Xu, Shi-Min; Zhang, Yun-Hai; Xu, Xing-Lei; Li, Hong-Qi; Wang, Ji-Suo

    2016-12-01

    In this paper, we provide a new kind of operator formula for anti-normally and normally ordering bosonic-operator functions in quantum optics, which can help us arrange a bosonic-operator function f(λQ̂ + νP̂) in its anti-normal and normal ordering conveniently. Furthermore, mutual transformation formulas between anti-normal ordering and normal ordering, which have good universality, are derived too. Based on these operator formulas, some new differential relations and some useful mathematical integral formulas are easily derived without really performing these integrations. Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2015AM025) and the Natural Science Foundation of Heze University, China (Grant No. XY14PY02).

  17. Prescribed performance synchronization controller design of fractional-order chaotic systems: An adaptive neural network control approach

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Lv, Hui; Jiao, Dongxiu

    2017-03-01

    In this study, an adaptive neural network synchronization (NNS) approach, capable of guaranteeing prescribed performance (PP), is designed for non-identical fractional-order chaotic systems (FOCSs). By PP synchronization, we mean that the synchronization error converges to an arbitrarily small region of the origin with a convergence rate greater than some function given in advance. Neural networks are utilized to estimate unknown nonlinear functions in the closed-loop system. Based on the integer-order Lyapunov stability theorem, a fractional-order adaptive NNS controller is designed, and the PP can be guaranteed. Finally, simulation results are presented to confirm our results.

  18. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Both theoretical and numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial, and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt), implementing all these methods, is then used to compare these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
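As a minimal illustration of the kind of quantity such packages compute (not the SPatt implementation itself), the expected number of occurrences of a pattern in text from a stationary first-order Markov source follows directly from the chain's stationary distribution and transition matrix; the two-symbol chain below is hypothetical:

```python
def expected_pattern_count(pattern, pi, P, n):
    """Expected number of occurrences of `pattern` in a length-n text
    generated by a stationary first-order Markov chain with stationary
    distribution `pi` and transition probabilities `P` (dicts by symbol)."""
    # Probability that the pattern occupies any one fixed window.
    p = pi[pattern[0]]
    for a, b in zip(pattern, pattern[1:]):
        p *= P[a][b]
    # There are n - len(pattern) + 1 candidate windows; expectation is linear.
    return (n - len(pattern) + 1) * p

# Hypothetical two-symbol source with slight persistence.
P = {"A": {"A": 0.6, "B": 0.4}, "B": {"A": 0.4, "B": 0.6}}
pi = {"A": 0.5, "B": 0.5}
count = expected_pattern_count("AAB", pi, P, 1000)  # 998 windows, each 0.5*0.6*0.4
```

Exact distributions (rather than just the mean) require the more elaborate machinery the abstract surveys, since overlapping windows are not independent.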

  19. Closed-form solution for loop transfer recovery via reduced-order observers

    NASA Technical Reports Server (NTRS)

    Bacon, Barton J.

    1989-01-01

    A well-known property of the reduced-order observer is exploited to obtain the controller solution of the loop transfer recovery problem. In that problem, the controller is sought that generates some desired loop shape at the plant's input or output channels. Past approaches to this problem have typically yielded controllers generating loop shapes that only converge pointwise to the desired loop shape. In the proposed approach, however, the solution (at the input) is obtained directly when the plant's first Markov parameter is full rank. In the more general case when the plant's first Markov parameter is not full rank, the solution is obtained in an analogous manner by appending a special set of input and output signals to the original set. A dual form of the reduced-order observer is shown to yield the LTR solution at the output channel.

  20. Handling target obscuration through Markov chain observations

    NASA Astrophysics Data System (ADS)

    Kouritzin, Michael A.; Wu, Biao

    2008-04-01

Target obscuration, including foliage or building obscuration of ground targets and landscape or horizon obscuration of airborne targets, plagues many real-world filtering problems. In particular, ground moving target identification Doppler radar, mounted on a surveillance aircraft or unattended airborne vehicle, is used to detect motion consistent with targets of interest. However, these targets try to obscure themselves (at least partially) by, for example, traveling along the edge of a forest or around buildings. This has the effect of creating random blockages in the Doppler radar image that move dynamically and somewhat randomly through the image. Herein, we address tracking problems with target obscuration by building memory into the observations, eschewing the usual corrupted, distorted partial measurement assumptions of filtering in favor of dynamic Markov chain assumptions. In particular, we assume the observations are a Markov chain whose transition probabilities depend upon the signal. The state of the observation Markov chain attempts to depict the current obscuration, and the Markov chain dynamics are used to handle the evolution of the partially obscured radar image. Modifications of the classical filtering equations that allow observation memory (in the form of a Markov chain) are given. We use particle filters to estimate the position of the moving targets. Moreover, positive proof-of-concept simulations are included.
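A bootstrap-filter sketch of this idea under assumed, hypothetical dynamics (not the paper's model): the target performs a random walk on a line of cells, and the binary observation chain ("clear"/"blocked") transitions with probabilities that depend on whether the target sits in an obstructed region. Particle weights come from the observation-chain transition probability rather than a conventional measurement likelihood:

```python
import random

random.seed(0)

OBSTRUCTED = set(range(40, 60))   # hypothetical region where targets hide

def obs_transition(y_prev, x):
    """P(y_t = 'blocked' | y_{t-1}, x): the observation chain blocks more
    often when the signal (target position) x is in the obstructed region,
    with a little stickiness from the previous observation state."""
    base = 0.7 if x in OBSTRUCTED else 0.1
    return 0.9 * base + (0.1 if y_prev == "blocked" else 0.0)

def step_signal(x):
    """Signal model: simple random walk on a line of 100 cells."""
    return max(0, min(99, x + random.choice([-1, 0, 1])))

def particle_filter(ys, n_particles=500):
    particles = [random.randrange(100) for _ in range(n_particles)]
    y_prev = ys[0]
    for y in ys[1:]:
        particles = [step_signal(x) for x in particles]
        # Weight by the observation-chain transition probability, since the
        # observations themselves carry the memory in this formulation.
        p_block = [obs_transition(y_prev, x) for x in particles]
        weights = [p if y == "blocked" else 1.0 - p for p in p_block]
        particles = random.choices(particles, weights=weights, k=n_particles)
        y_prev = y
    return sum(particles) / len(particles)

ys = ["clear"] + ["blocked"] * 30
estimate = particle_filter(ys)    # posterior mean position
```

With mostly "blocked" observations, the particle cloud drifts into the obstructed region, which is the behavior an obscuration-aware observation model is meant to capture.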

  1. Extremely efficient and deterministic approach to generating optimal ordering of diffusion MRI measurements

    PubMed Central

    Koay, Cheng Guan; Hurley, Samuel A.; Meyerand, M. Elizabeth

    2011-01-01

Purpose: Diffusion MRI measurements are typically acquired sequentially with unit gradient directions that are distributed uniformly on the unit sphere. The ordering of the gradient directions has a significant effect on the quality of dMRI-derived quantities. Even though several methods have been proposed to generate optimal orderings of gradient directions, these methods are not widely used in clinical studies because of two major problems. The first problem is that the existing methods for generating highly uniform and antipodally symmetric gradient directions are inefficient. The second problem is that the existing methods for generating optimal orderings of gradient directions are also highly inefficient. In this work, the authors propose two extremely efficient and deterministic methods to solve these two problems. Methods: The method for generating a nearly uniform point set on the unit sphere (with antipodal symmetry) is based upon the notion that the spacing between two consecutive points on the same latitude should be equal to the spacing between two consecutive latitudes. The method for generating an optimal ordering of diffusion gradient directions is based on the idea that each subset of incremental sample size, derived from the prescribed and full set of gradient directions, must be as uniform as possible in terms of the modified electrostatic energy designed for antipodally symmetric point sets. Results: The proposed method outperformed the state-of-the-art method in terms of computational efficiency by about six orders of magnitude. Conclusions: Two extremely efficient and deterministic methods have been developed for solving the problem of optimal ordering of diffusion gradient directions. The proposed strategy is also applicable to optimal view-ordering in three-dimensional radial MRI. PMID:21928652
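The latitude-spacing idea can be sketched as follows (a simplified reconstruction, not the authors' exact construction): choose equally spaced latitudes on the upper hemisphere, and on each latitude place as many points as fit with along-latitude arc spacing equal to the latitude spacing; each point stands for an antipodal pair. The row stagger is an added detail of this sketch:

```python
import math

def hemisphere_points(n_lat):
    """Nearly uniform directions on the upper unit hemisphere; each point
    stands for an antipodal +/- pair.  The along-latitude spacing is set
    equal to the spacing between consecutive latitudes."""
    d_theta = (math.pi / 2) / n_lat          # latitude spacing
    pts = [(0.0, 0.0, 1.0)]                  # pole
    for k in range(1, n_lat + 1):
        theta = k * d_theta                  # polar angle of latitude k
        # Fit points so their arc spacing on this latitude is ~ d_theta.
        n_k = max(1, round(2 * math.pi * math.sin(theta) / d_theta))
        offset = (k % 2) * math.pi / n_k     # stagger alternate rows
        for j in range(n_k):
            phi = 2 * math.pi * j / n_k + offset
            pts.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
    return pts

points = hemisphere_points(8)
```

The paper's second contribution, the incremental-subset ordering, would then reorder such a point set so every prefix stays near-uniform under the modified electrostatic energy.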

  2. A multilevel approach to the relationship between birth order and intelligence.

    PubMed

    Wichman, Aaron L; Rodgers, Joseph Lee; MacCallum, Robert C

    2006-01-01

Many studies show relationships between birth order and intelligence but use cross-sectional designs or manifest other threats to internal validity. Multilevel analyses with a control variable show that when these threats are removed, two major results emerge: (a) birth order has no significant influence on children's intelligence and (b) earlier reported birth order effects on intelligence are attributable to factors that vary between, not within, families. Analyses on 7- to 8- and 13- to 14-year-old children from the National Longitudinal Survey of Youth support these conclusions. When hierarchical data structures, age variance of children, and within-family versus between-family variance sources are taken into account, previous research is seen in a new light.

  3. A First and Second Order Moment Approach to Probabilistic Control Synthesis

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

This paper presents a robust control design methodology based on the estimation of the first two moments of the random variables and processes that describe the controlled response. Synthesis is performed by solving a multi-objective optimization problem in which stability and performance requirements in the time and frequency domains are integrated. The use of the first two moments allows for efficient estimation of the cost function and thus a faster synthesis algorithm. While reliability requirements are taken into account by using bounds on failure probabilities, requirements related to undesirable variability are implemented by quantifying the concentration of the random outcome about a deterministic target. Hammersley Sequence Sampling and First- and Second-Moment Second-Order approximations are used to estimate the moments, whose accuracy and associated computational complexity are compared numerically. Examples using output feedback and full-state feedback with state estimation demonstrate the proposed ideas.

  4. Approach to first-order exact solutions of the Ablowitz-Ladik equation.

    PubMed

    Ankiewicz, Adrian; Akhmediev, Nail; Lederer, Falk

    2011-05-01

    We derive exact solutions of the Ablowitz-Ladik (A-L) equation using a special ansatz that linearly relates the real and imaginary parts of the complex function. This ansatz allows us to derive a family of first-order solutions of the A-L equation with two independent parameters. This novel technique shows that every exact solution of the A-L equation has a direct analog among first-order solutions of the nonlinear Schrödinger equation (NLSE).

  5. Analytical and numerical studies on differences between Lagrangian and Hamiltonian approaches at the same post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Mei, Lijie; Huang, Guoqing; Liu, Sanqiu

    2015-01-01

In general, there are differences between Lagrangian and Hamiltonian approaches at the same post-Newtonian (PN) order in a coordinate system under a coordinate gauge. They arise from the truncation of higher-order PN terms. They do not affect the qualitative or quantitative results of the two approaches for a weak gravitational system such as the Solar System. Nevertheless, they may give the two approaches somewhat or completely different qualitative dynamical features of integrability and nonintegrability (or order and chaos) for a strong gravitational field. Even if the two approaches have the same qualitative features, they yield different quantitative results when the distances among compact objects are appropriately small. For a relativistic circular restricted three-body problem with the 1PN contribution from the circular motion of the primaries, although the 1PN Lagrangian and Hamiltonian approaches are both nonintegrable, their dynamics are somewhat nonequivalent for small separations between the primaries when the initial conditions and other parameters are given. Particularly for comparable-mass compact binaries with two arbitrary spins and spin effects restricted to the leading-order spin-orbit interaction, as an important example of extremely strong gravitational fields, the 2PN Arnowitt-Deser-Misner Lagrangian formulation is always nonintegrable and can be chaotic under some appropriate conditions, because its equivalent higher-order PN canonical Hamiltonian includes many spin-spin couplings, resulting in the absence of a fifth integral in the ten-dimensional phase space. However, the 2PN Arnowitt-Deser-Misner Hamiltonian is integrable and nonchaotic due to the presence of five constants of motion in the ten-dimensional phase space.

  6. Operations and Maintenance Task Order (OMTO)/Southern Border Initiative (SBInet) Supply Chain Approach

    DTIC Science & Technology

    2012-01-01

    Initiative Network (SBInet) Supply Chain approach in the areas of lead times between repairs, spares inventory, and the identification of failure trends...The availability rate of the platform(s) needed to be improved because the current supply chain process enacted by the government did not work in a

  7. The interacting gaps model: reconciling theoretical and numerical approaches to limit-order models

    NASA Astrophysics Data System (ADS)

    Muchnik, Lev; Slanina, Frantisek; Solomon, Sorin

    2003-12-01

    We consider the emergence of power-law tails in the returns distribution of limit-order driven markets. We explain a previously observed clash between the theoretical and numerical studies of such models. We introduce a solvable model that interpolates between the previous studies and agrees with each of them in the relevant limit.

  8. Does Higher-Order Thinking Impinge on Learner-Centric Digital Approach?

    ERIC Educational Resources Information Center

    Mathew, Bincy; Raja, B. William Dharma

    2015-01-01

Humans are social beings, and social cognition focuses on how one forms impressions of other people, how one interprets the meaning of other people's behaviour, and how one's behaviour is affected by attitudes. The school provides complex social situations, and in order to thrive, students must possess social cognition, the process of thinking about…

  9. A Model-Based Approach for Visualizing the Dimensional Structure of Ordered Successive Categories Preference Data

    ERIC Educational Resources Information Center

    DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.

    2008-01-01

    A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…

  10. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  11. An ordered-patch-based image classification approach on the image Grassmannian manifold.

    PubMed

    Xu, Chunyan; Wang, Tianjiang; Gao, Junbin; Cao, Shougang; Tao, Wenbing; Liu, Fang

    2014-04-01

This paper presents an ordered-patch-based image classification framework integrating the image Grassmannian manifold to address handwritten digit recognition, face recognition, and scene recognition problems. Typical image classification methods explore image appearances without considering the spatial causality among distinctive domains in an image. To address the issue, we introduce an ordered-patch-based image representation and use the autoregressive moving average (ARMA) model to characterize the representation. First, each image is encoded as a sequence of ordered patches, integrating both the local appearance information and spatial relationships of the image. Second, the sequence of these ordered patches is described by an ARMA model, which can be further identified as a point on the image Grassmannian manifold. Image classification can then be conducted on such a manifold under this representation. Furthermore, an appropriate Grassmannian kernel for support vector machine classification is developed based on a distance metric of the image Grassmannian manifold. Finally, the experiments are conducted on several image data sets to demonstrate that the proposed algorithm outperforms other existing image classification methods.

  12. Verbal Working Memory and Language Production: Common Approaches to the Serial Ordering of Verbal Information

    ERIC Educational Resources Information Center

    Acheson, Daniel J.; MacDonald, Maryellen C.

    2009-01-01

    Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by…

  13. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple-version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple-version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
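The transition-probability estimation step described above amounts to counting observed transitions between error states and normalizing each row; a minimal maximum-likelihood sketch with hypothetical state labels (the confidence-interval machinery of the paper is omitted):

```python
from collections import Counter, defaultdict

def estimate_transitions(sequences):
    """Maximum-likelihood Markov transition probabilities from observed
    state sequences, e.g. error states logged during testing runs."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {state: {nxt: c / sum(row.values()) for nxt, c in row.items()}
            for state, row in counts.items()}

# Hypothetical error-state traces from two simulated test runs.
runs = [["ok", "ok", "error", "ok"],
        ["ok", "error", "error", "ok"]]
P = estimate_transitions(runs)
```

Each estimated row is a probability distribution over successor states; interval estimates on these entries are what the reliability model then propagates.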

  14. A conformal mapping based fractional order approach for sub-optimal tuning of PID controllers with guaranteed dominant pole placement

    NASA Astrophysics Data System (ADS)

    Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava

    2012-09-01

A novel conformal mapping based fractional-order (FO) methodology is developed in this paper for tuning existing classical (integer-order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second-order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open-loop oscillatory systems as well. The locations of the open-loop zeros of a fractional-order PID (FOPID or PIλDμ) controller are approximated in this paper vis-à-vis an LQR-tuned conventional integer-order PID controller, to achieve an equivalent integer-order PID control system. This approach eases the analog/digital realization of a FOPID controller via its integer-order counterpart while preserving the advantages of the fractional-order controller. It is shown here that a decrease in the integro-differential operators of the FOPID/PIλDμ controller pushes the open-loop zeros of the equivalent PID controller towards regions of greater damping, which gives a trajectory of the controller zeros and dominant closed-loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the existing PID controller's effort compared with a single-stage LQR-based pole placement method at a desired closed-loop damping and frequency.

  15. On Markov Earth Mover’s Distance

    PubMed Central

    Wei, Jie

    2015-01-01

In statistics, pattern recognition, and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions or sequences. In statistics this is referred to as the goodness-of-fit problem. Two leading goodness-of-fit methods are the chi-square and Kolmogorov–Smirnov distances. The strictly localized nature of these two measures hinders their practical utility in patterns and signals where the sample size is usually small. In view of this problem, Rubner and colleagues developed the earth mover's distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which has found a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD into a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, as in EMD-hat, the earth is only moved locally as dictated by the degree d of the neighborhood system. Nodes that cannot be matched locally are handled by dummy source and destination nodes. By use of this localized network structure, a greedy algorithm that is linear in the degree d and the number of nodes is then developed to evaluate the MEMD. Empirical studies on the use of MEMD on deterministic and statistical synthetic sequences and on SIFT-based image retrieval suggested encouraging performance. PMID:25983362
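For intuition about cross-bin earth moving, a classical special case (not the MEMD algorithm itself): with unit ground distance |i - j| on a one-dimensional histogram, the EMD reduces to the L1 distance between running sums:

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal total
    mass with |i - j| as the ground distance: the cost equals the L1 norm
    of the difference of the running sums."""
    assert abs(sum(p) - sum(q)) < 1e-12, "histograms must have equal mass"
    cost, carried = 0.0, 0.0
    for pi, qi in zip(p, q):
        carried += pi - qi      # earth carried past this bin boundary
        cost += abs(carried)
    return cost

d = emd_1d([1, 0, 0], [0, 0, 1])   # one unit of mass moved two bins
```

Unlike a bin-by-bin chi-square distance, shifting a spike by one bin changes this value only slightly, which is exactly the robustness the cross-bin family of distances trades extra computation for.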

  16. Protein family classification using sparse Markov transducers.

    PubMed

    Eskin, E; Grundy, W N; Singer, Y

    2000-01-01

In this paper we present a method for classifying proteins into families using sparse Markov transducers (SMTs). Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wildcards in the conditioning sequences. Because substitutions of amino acids are common in protein families, incorporating wildcards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. We also present efficient data structures to improve the memory usage of the models. We evaluate SMTs by building protein family classifiers using the Pfam database and compare our results to previously published results.

  17. Entropy production fluctuations of finite Markov chains

    NASA Astrophysics Data System (ADS)

    Jiang, Da-Quan; Qian, Min; Zhang, Fu-Xi

    2003-09-01

    For almost every trajectory segment over a finite time span of a finite Markov chain with any given initial distribution, the logarithm of the ratio of its probability to that of its time-reversal converges exponentially to the entropy production rate of the Markov chain. The large deviation rate function has a symmetry of Gallavotti-Cohen type, which is called the fluctuation theorem. Moreover, similar symmetries also hold for the rate functions of the joint distributions of general observables and the logarithmic probability ratio.
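The entropy production rate referred to here has a standard closed form for a stationary finite chain, which can be evaluated directly; the 3-state cycle below is a hypothetical example of an irreversible chain:

```python
import math

def entropy_production_rate(pi, P):
    """e_p = 1/2 * sum_ij (pi_i P_ij - pi_j P_ji) * log(pi_i P_ij / (pi_j P_ji)).
    Zero exactly when detailed balance (reversibility) holds; pairs where a
    transition is forbidden in either direction are skipped here."""
    ep = 0.0
    for i in range(len(pi)):
        for j in range(len(pi)):
            fwd, bwd = pi[i] * P[i][j], pi[j] * P[j][i]
            if fwd > 0.0 and bwd > 0.0:
                ep += 0.5 * (fwd - bwd) * math.log(fwd / bwd)
    return ep

# Hypothetical irreversible 3-cycle: the chain prefers 0 -> 1 -> 2 -> 0.
P = [[0.0, 0.8, 0.2],
     [0.2, 0.0, 0.8],
     [0.8, 0.2, 0.0]]
pi = [1 / 3, 1 / 3, 1 / 3]   # doubly stochastic -> uniform stationary law
ep = entropy_production_rate(pi, P)
```

The fluctuation-theorem statement in the abstract concerns the large-deviation behavior of the trajectory-wise log probability ratio; this closed form is only its mean rate.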

  18. Parallel Markov chain Monte Carlo simulations.

    PubMed

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  19. Parallel Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Ren, Ruichao; Orkoulas, G.

    2007-06-01

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  20. Dynamic order reduction of thin-film deposition kinetics models: A reaction factorization approach

    SciTech Connect

    Adomaitis, Raymond A.

    2016-01-15

A set of numerical tools for the analysis and dynamic dimension reduction of chemical vapor deposition and atomic layer deposition (ALD) surface reaction models is developed in this work. The approach is based on a two-step process: in the first, the chemical species surface balance dynamic equations are factored to effectively decouple the (nonlinear) reaction rates, a process that eliminates redundant dynamic modes and identifies conserved quantities. If successful, the second phase is implemented to factor out further redundant dynamic modes when species relatively minor in concentration are omitted; if unsuccessful, the technique points to potential structural problems in the model. An alumina ALD process consisting of 19 reactions and 23 surface and gas-phase species is used as an example. Using the approach developed, the model is reduced by nineteen modes to a four-dimensional dynamic system without any knowledge of the reaction rate values. Results are interpreted in the context of potential model validation studies.
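The conserved quantities found in such a factorization are left null vectors of the stoichiometric matrix, i.e. vectors c with cᵀS = 0, and they can be found without any rate values. A self-contained sketch using exact rational elimination on a hypothetical two-reaction surface mechanism (not the paper's alumina ALD network):

```python
from fractions import Fraction

def null_space(A):
    """Rational basis for the null space of a matrix (list of rows),
    via Gauss-Jordan elimination with exact Fraction arithmetic."""
    m = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(m), len(m[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for f in free:
        v = [Fraction(0)] * cols
        v[f] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -m[i][f]
        basis.append(v)
    return basis

# Hypothetical mechanism: A(g) + * -> A*,  A* -> B(g) + *
# Species order [A, *, A*, B]; each column of S is one reaction.
S = [[-1, 0], [-1, 1], [1, -1], [0, 1]]
# Conserved combinations satisfy c^T S = 0, i.e. c in null(S^T).
St = [list(col) for col in zip(*S)]
conserved = null_space(St)
```

For this toy network the basis contains the surface-site balance (* + A* constant), the kind of invariant the factorization removes as a redundant dynamic mode.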

  1. New approach to the first-order phase transition of Lennard-Jones fluids.

    PubMed

    Muguruma, Chizuru; Okamoto, Yuko; Mikami, Masuhiro

    2004-04-22

    The multicanonical Monte Carlo method is applied to a bulk Lennard-Jones fluid system to investigate the liquid-solid phase transition. We take the example of a system of 108 argon particles. The multicanonical weight factor we determined turned out to be reliable for the energy range between -7.0 and -4.0 kJ/mol, which corresponds to the temperature range between 60 and 250 K. The expectation values of the thermodynamic quantities obtained from the multicanonical production run by the reweighting techniques exhibit the characteristics of first-order phase transitions between liquid and solid states around 150 K. The present study reveals that the multicanonical algorithm is particularly suitable for analyzing the transition state of the first-order phase transition in detail.
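A flat-histogram sketch in the spirit of the multicanonical method, using a Wang-Landau-style weight update (a closely related flat-histogram algorithm, not the authors' weight-determination procedure) on a toy 1-D Ising ring rather than a Lennard-Jones fluid:

```python
import math
import random

random.seed(2)

N = 10                                   # spins on a 1-D Ising ring

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

def flat_histogram_sample(sweeps, f=1.0):
    """Flat-histogram sampling over energy: accept a spin flip with
    probability min(1, g(E_old)/g(E_new)), learning log g(E) on the fly.
    g is the density of states; a flat energy histogram lets the walk
    cross barriers such as those at a first-order transition."""
    s = [random.choice((-1, 1)) for _ in range(N)]
    log_g, hist = {}, {}
    for _ in range(sweeps):
        for _ in range(N):
            i = random.randrange(N)
            e_old = energy(s)
            s[i] = -s[i]
            e_new = energy(s)
            delta = log_g.get(e_old, 0.0) - log_g.get(e_new, 0.0)
            if delta < 0.0 and random.random() >= math.exp(delta):
                s[i] = -s[i]             # reject: undo the flip
                e_new = e_old
            log_g[e_new] = log_g.get(e_new, 0.0) + f   # raise visited level
            hist[e_new] = hist.get(e_new, 0) + 1
    return hist, log_g

hist, log_g = flat_histogram_sample(3000)
```

Once reliable weights (log g) are in hand, canonical averages at any temperature in the covered energy range follow by reweighting, which is the step the abstract exploits around the transition.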

  2. Birth order effects on the separation process in young adults: an evolutionary and dynamic approach.

    PubMed

    Ziv, Ido; Hermel, Orly

    2011-01-01

    The present study analyzes the differential contribution of a familial or social focus in imaginative ideation (the personal fable and imagined audience mental constructs) to the separation-individuation process of firstborn, middleborn, and lastborn children. A total of 160 young adults were divided into 3 groups by birth order. Participants' separation-individuation process was evaluated by the Psychological Separation Inventory, and results were cross-validated by the Pathology of Separation-Individuation Inventory. The Imaginative Ideation Inventory tested the relative dominance of the familial and social environments in participants' mental constructs. The findings showed that middleborn children had attained more advanced separation and were lower in family-focused ideation and higher in nonfamilial social ideation. However, the familial and not the social ideation explained the variance in the separation process in all the groups. The findings offer new insights into the effects of birth order on separation and individuation in adolescents and young adults.

  3. An overset mesh approach for 3D mixed element high-order discretizations

    NASA Astrophysics Data System (ADS)

    Brazell, Michael J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2016-10-01

    A parallel high-order Discontinuous Galerkin (DG) method is used to solve the compressible Navier-Stokes equations in an overset mesh framework. The DG solver has many capabilities including: hp-adaption, curved cells, support for hybrid, mixed-element meshes, and moving meshes. Combining these capabilities with overset grids allows the DG solver to be used in problems with bodies in relative motion and in a near-body off-body solver strategy. The overset implementation is constructed to preserve the design accuracy of the baseline DG discretization. Multiple simulations are carried out to validate the accuracy and performance of the overset DG solver. These simulations demonstrate the capability of the high-order DG solver to handle complex geometry and large scale parallel simulations in an overset framework.

  4. Universal order parameters and quantum phase transitions: a finite-size approach.

    PubMed

    Shi, Qian-Qian; Zhou, Huan-Qiang; Batchelor, Murray T

    2015-01-08

    We propose a method to construct universal order parameters for quantum phase transitions in many-body lattice systems. The method exploits the H-orthogonality of a few near-degenerate lowest states of the Hamiltonian describing a given finite-size system, which makes it possible to perform finite-size scaling and take full advantage of currently available numerical algorithms. An explicit connection is established between the fidelity per site between two H-orthogonal states and the energy gap between the ground state and low-lying excited states in the finite-size system. The physical information encoded in this gap arising from finite-size fluctuations clarifies the origin of the universal order parameter. We demonstrate the procedure for the one-dimensional quantum formulation of the q-state Potts model, for q = 2, 3, 4 and 5, as prototypical examples, using finite-size data obtained from the density matrix renormalization group algorithm.

  5. A High Order Multi-Scale Numerical Approach for Kinetic Simulations

    DTIC Science & Technology

    2015-08-27

Truly multi-scale and multi-dimensional approaches have been developed for the kinetic simulations with potential applications to plasma physics and...eight papers published in Journal of Computational Physics, two papers published in Journal of Scientific Computing, one paper published in SIAM Journal...been developed for the kinetic simulations with potential applications to plasma physics and rarefied gas dynamics. During 2012-2015, the PI and

  6. Multi-omics approach for estimating metabolic networks using low-order partial correlations.

    PubMed

    Kayano, Mitsunori; Imoto, Seiya; Yamaguchi, Rui; Miyano, Satoru

    2013-08-01

Two typical purposes of metabolome analysis are to estimate metabolic pathways and to understand the regulatory systems underlying the metabolism. A powerful source of information for these analyses is a set of multi-omics data for RNA, proteins, and metabolites. However, integrated methods that analyze multi-omics data simultaneously and unravel the systems behind metabolism have not been well established. We developed a statistical method based on low-order partial correlations with a robust correlation coefficient for estimating metabolic networks from metabolome, proteome, and transcriptome data. Our method is defined by the maximum of low-order, particularly first-order, partial correlations (MF-PCor) in order to assign a correct edge with the highest correlation and to detect the factors that strongly affect the correlation coefficient. First, through numerical experiments with real and synthetic data, we showed that the use of protein and transcript data of enzymes improved the accuracy of the estimated metabolic networks in MF-PCor. In these experiments, the effectiveness of the proposed method was also demonstrated by comparison with a correlation network (Cor) and a Gaussian graphical model (GGM). Our theoretical investigation confirmed that the performance of MF-PCor could be superior to that of the competing methods. In addition, in the real data analysis, we investigated the role of metabolites, enzymes, and enzyme genes that were identified as important factors in the network established by MF-PCor. We then found that some of them corresponded to specific reactions between metabolites mediated by catalytic enzymes that were difficult to identify by analysis based on metabolite data alone.
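The first-order partial correlation at the heart of such methods has a closed form in the pairwise Pearson correlations; a small sketch on synthetic data (hypothetical, not the paper's dataset) where a shared driver z induces a spurious x-y correlation:

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# x and y are both driven by z; controlling for z removes most of the link.
z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x = [v + d for v, d in zip(z, [0.1, -0.2, 0.1, 0.0, -0.1, 0.1])]
y = [2 * v + d for v, d in zip(z, [-0.1, 0.1, 0.0, 0.2, -0.1, -0.1])]
```

MF-PCor, as described in the abstract, takes the maximum of such low-order partial correlations over candidate controlling factors (enzymes, transcripts) rather than a single fixed z.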

  7. Make-to-order manufacturing - new approach to management of manufacturing processes

    NASA Astrophysics Data System (ADS)

    Saniuk, A.; Waszkowski, R.

    2016-08-01

Strategic management must now be closely linked to management at the operational level, because only then can a company be flexible, respond quickly to emerging opportunities, and pursue ever-changing strategic objectives. Under these conditions industrial enterprises constantly seek new methods, tools and solutions that help to achieve competitive advantage. They are beginning to pay more attention to cost management, economic effectiveness and the performance of business processes. In the article, characteristics of make-to-order (MTO) systems and the needs associated with managing such systems are identified based on an analysis of the literature. The main aim of this article is to present the results of research related to the development of a new solution dedicated to small and medium enterprises that manufacture products solely on the basis of production orders (make-to-order systems). A set of indicators to enable continuous monitoring and control of key strategic areas of this type of company is proposed. The presented solution incorporates the main assumptions of the following concepts: Performance Management (PM), the Balanced Scorecard (BSC) and a combination of strategic management with the implementation of operational management. The main benefit of the proposed solution is increased effectiveness of management in MTO manufacturing companies.

  8. MARKOV Model Application to Proliferation Risk Reduction of an Advanced Nuclear System

    SciTech Connect

Bari, R. A.

    2008-07-13

The Generation IV International Forum (GIF) emphasizes proliferation resistance and physical protection (PR&PP) as a main goal for future nuclear energy systems. The GIF PR&PP Working Group has developed a methodology for the evaluation of these systems. As an application of the methodology, a Markov model has been developed for the evaluation of proliferation resistance and is demonstrated for a hypothetical Example Sodium Fast Reactor (ESFR) system. This paper presents the case of diversion by the facility owner/operator to obtain material that could be used in a nuclear weapon. The Markov model is applied to evaluate material diversion strategies. The following features of the Markov model are presented here: (1) an effective detection rate has been introduced to account for the implementation of multiple safeguards approaches at a given strategic point; (2) technical failure to divert material is modeled as intrinsic barriers related to the design of the facility or the properties of the material in the facility; and (3) concealment to defeat or degrade the performance of safeguards is recognized in the Markov model. Three proliferation risk measures are calculated directly by the Markov model: the detection probability, the technical failure probability, and the proliferation time. The material type is indicated by an index that is based on the quality of the material diverted. Sensitivity cases have been run to demonstrate the effects of different modeling features on the measures of proliferation resistance.
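The three risk measures can be illustrated on the smallest possible Markov model of this kind: a single active diversion state with competing exponential transitions to three absorbing outcomes (detection, technical failure, completion). All rates below are invented for illustration and are not taken from the ESFR study:

```python
# Hypothetical single-pathway diversion model: from the active diversion
# state the attempt is detected (rate d), fails technically (rate f),
# or succeeds (rate c); rates are per month and purely illustrative.
d, f, c = 0.6, 0.1, 0.3
total = d + f + c

# For competing exponential transitions out of one state, the absorption
# probabilities are proportional to the rates, and the mean dwell time
# (here, the proliferation time) is the inverse of the total outflow rate.
p_detect = d / total
p_fail = f / total
p_success = c / total
mean_time = 1.0 / total
```

With several strategic points in series, the same bookkeeping is repeated per state, which is where a full continuous-time Markov solver takes over.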

  9. Prediction of User's Web-Browsing Behavior: Application of Markov Model.

    PubMed

Awad, M. A.; Khalil, I.

    2012-08-01

Web prediction is a classification problem in which we attempt to predict the next set of Web pages that a user may visit based on knowledge of the previously visited pages. Predicting users' behavior while surfing the Internet can be applied effectively in various critical applications. Such applications involve a traditional tradeoff between modeling complexity and prediction accuracy. In this paper, we analyze and study the Markov model and the all-Kth Markov model in Web prediction. We propose a new modified Markov model to alleviate the issue of scalability in the number of paths. In addition, we present a new two-tier prediction framework that creates an example classifier EC, based on the training examples and the generated classifiers. We show that such a framework can improve the prediction time without compromising prediction accuracy. We have used standard benchmark data sets to analyze, compare, and demonstrate the effectiveness of our techniques using variations of Markov models and association rule mining. Our experiments show the effectiveness of our modified Markov model in reducing the number of paths without compromising accuracy. Additionally, the results support our analysis conclusions that accuracy improves with higher orders of the all-Kth model.
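A first-order version of such a Web-prediction model is just a table of page-to-page transition counts, with the most frequent successor returned as the prediction. A minimal sketch (the session data are invented, and this is the plain Markov baseline, not the authors' modified model):

```python
from collections import Counter, defaultdict

def train_markov(sessions):
    """Count page-to-page transitions across user sessions."""
    trans = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, page):
    """Most frequent successor of `page`, or None if the page was never seen."""
    if page not in trans:
        return None
    return trans[page].most_common(1)[0][0]
```

An all-Kth model keeps one such table per context length K and backs off from the longest matching context, which is exactly where the path-scalability issue the abstract mentions comes from.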

  10. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    PubMed

    Zhao, Zhibiao

    2011-06-01

We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.

  11. A robust hidden semi-Markov model with application to aCGH data processing.

    PubMed

    Ding, Jiarui; Shah, Sohrab

    2013-01-01

Hidden semi-Markov models are effective at modelling sequences with a succession of homogeneous zones by choosing appropriate state duration distributions. To compensate for model mis-specification and provide protection against outliers, we design a robust hidden semi-Markov model with Student's t mixture models as the emission distributions. The proposed approach is used to model array-based comparative genomic hybridization data. Experiments conducted on benchmark data from the Coriell cell lines and on glioblastoma multiforme data illustrate the reliability of the technique.

  12. Nonparametric model validations for hidden Markov models with applications in financial econometrics

    PubMed Central

    Zhao, Zhibiao

    2011-01-01

We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for the transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable to continuous-time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise. PMID:21750601

  13. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. 
The standard distribution medium for the Sun version of ASSIST is a

  14. ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Johnson, S. C.

    1994-01-01

    for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. 
The standard distribution medium for the Sun version of ASSIST is a

  15. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  16. Incorporation of Markov reliability models for digital instrumentation and control systems into existing PRAs

    SciTech Connect

    Bucci, P.; Mangan, L. A.; Kirschenbaum, J.; Mandelli, D.; Aldemir, T.; Arndt, S. A.

    2006-07-01

    Markov models have the ability to capture the statistical dependence between failure events that can arise in the presence of complex dynamic interactions between components of digital instrumentation and control systems. One obstacle to the use of such models in an existing probabilistic risk assessment (PRA) is that most of the currently available PRA software is based on the static event-tree/fault-tree methodology which often cannot represent such interactions. We present an approach to the integration of Markov reliability models into existing PRAs by describing the Markov model of a digital steam generator feedwater level control system, how dynamic event trees (DETs) can be generated from the model, and how the DETs can be incorporated into an existing PRA with the SAPHIRE software. (authors)
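As a toy stand-in for such a reliability model, a two-state Markov model (component up/down, with assumed failure and repair rates) already shows the basic machinery of solving for steady-state probabilities; real digital I&C models add many more states and the statistical dependencies the abstract describes:

```python
import numpy as np

# Hypothetical two-state availability model: component Up <-> Down.
lam = 1e-3   # failure rate per hour (assumed)
mu = 1e-1    # repair rate per hour (assumed)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])   # CTMC generator matrix

# Steady state: solve pi Q = 0 subject to the probabilities summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0]   # long-run fraction of time in the Up state
```

The closed form for this two-state chain is availability = mu / (lam + mu), which the linear solve reproduces; larger models only change the size of Q.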

  17. Modeling the atmospheric convective boundary layer within a zero-order jump approach: An extended theoretical framework

    SciTech Connect

    Fedorovich, E.

    1995-09-01

    The paper presents an extended theoretical background for applied modeling of the atmospheric convective boundary layer within the so-called zero-order jump approach, which implies vertical homogeneity of meteorological fields in the bulk of convective boundary layer (CBL) and zero-order discontinuities of variables at the interfaces of the layer. The zero-order jump model equations for the most typical cases of CBL are derived. The models of nonsteady, horizontally homogeneous CBL with and without shear, extensively studied in the past with the aid of zero-order jump models, are shown to be particular cases of the general zero-order jump theoretical framework. The integral budgets of momentum and heat are considered for different types of dry CBL. The profiles of vertical turbulent fluxes are presented and analyzed. The general version of the equation of CBL depth growth rate (entrainment rate equation) is obtained by the integration of the turbulence kinetic energy balance equation, invoking basic assumptions of the zero-order parameterizations of the CBL vertical structure. The problems of parameterizing the turbulence vertical structure and closure of the entrainment rate equation for specific cases of CBL are discussed. A parameterization scheme for the horizontal turbulent exchange in zero-order jump models of CBL is proposed. The developed theory is generalized for the case of CBL over irregular terrain. 28 refs., 2 figs.

  18. Markov chains at the interface of combinatorics, computing, and statistical physics

    NASA Astrophysics Data System (ADS)

    Streib, Amanda Pascoe

The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is

  19. Obesity status transitions across the elementary years: Use of Markov chain modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Overweight and obesity status transition probabilities using first-order Markov transition models applied to elementary school children were assessed. Complete longitudinal data across eleven assessments were available from 1,494 elementary school children (from 7,599 students in 41 out of 45 school...
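The first-order Markov transition models mentioned above reduce, for complete data, to row-normalized transition counts (the maximum-likelihood estimate). A sketch with invented weight-status codes (0 = normal, 1 = overweight):

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Row-normalized transition counts: the MLE of a first-order chain."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0          # leave never-visited rows as zeros
    return counts / totals
```

Row i of the result gives the probability of each status at the next assessment, conditional on status i now; repeated assessments per child simply contribute more transition pairs.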

  20. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
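A minimal illustration of the idea, assuming a single component whose health degrades one level per step with a probability that grows with the demanded output effort (all numbers invented): the mean time to the absorbing failed state follows from the fundamental matrix of the transient states.

```python
import numpy as np

def expected_life(effort, n_health=4):
    """Mean steps until failure for a degradation chain whose per-step
    degradation probability scales with demanded output effort.
    All coefficients are illustrative, not from the paper."""
    p = 0.05 + 0.10 * effort          # degradation probability per step
    n = n_health - 1                  # transient states; health 0 is absorbing
    P = np.zeros((n, n))
    for s in range(n):
        P[s, s] = 1 - p               # stay at current health level
        if s > 0:
            P[s, s - 1] = p           # degrade one level within the transient set
    # Expected absorption times: t = (I - P)^-1 * 1
    t = np.linalg.solve(np.eye(n) - P, np.ones(n))
    return t[-1]                      # starting from full health
```

Backing off the control effort trades output performance for component life, which is precisely the coupling a prognostics-based controller exploits.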

  1. General approach for studying first-order phase transitions at low temperatures.

    PubMed

    Fiore, C E; da Luz, M G E

    2011-12-02

By combining different ideas, a general and efficient protocol to deal with discontinuous phase transitions at low temperatures is proposed. For small T's, it is possible to derive a generic analytic expression for appropriate order parameters, whose coefficients are obtained from simple simulations. Since in such regimes simulations by standard algorithms are not reliable, an enhanced tempering method, parallel tempering, which is accurate for small and intermediate system sizes at rather low computational cost, is used. Finally, from finite-size analysis, one can obtain the thermodynamic limit. The procedure is illustrated for four distinct models, demonstrating its power, e.g., to locate coexistence lines and the phase density at coexistence.
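A bare-bones sketch of parallel tempering on a toy asymmetric double-well energy (not the lattice models of the paper): each replica performs Metropolis moves at its own temperature, and neighboring replicas periodically attempt configuration swaps with the standard acceptance rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy asymmetric double-well potential standing in for a real model."""
    return (x**2 - 1.0)**2 + 0.1 * x

def parallel_tempering(temps, n_sweeps=2000, step=0.5):
    """One scalar degree of freedom per replica; temps sorted cold to hot."""
    x = np.zeros(len(temps))
    for _ in range(n_sweeps):
        # Metropolis update within each replica at its own temperature.
        for i, T in enumerate(temps):
            prop = x[i] + rng.normal(0.0, step)
            dE = energy(prop) - energy(x[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i] = prop
        # Swap attempts between neighboring temperatures:
        # accept with prob min(1, exp((b_i - b_j)(E_i - E_j))).
        for i in range(len(temps) - 1):
            d = (1/temps[i] - 1/temps[i+1]) * (energy(x[i+1]) - energy(x[i]))
            if d <= 0 or rng.random() < np.exp(-d):
                x[i], x[i+1] = x[i+1], x[i]
    return x
```

The swaps let the cold replica escape metastable wells through the hot replicas, which is what restores reliability near a discontinuous transition at low T.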

  2. A Type-Theoretic Approach to Higher-Order Modules with Sharing

    DTIC Science & Technology

    1993-10-01

be easily restricted to "second-class" modules found in ML-like languages. If run-time selection is not used, modules behave exactly as they would in a more familiar "second-class" module system such as is found in SML. ... on Girard's Fω [14] in much the same way that many systems are based on the second-order lambda calculus (F2). That is to say, our system can be

  3. Markov Jump Linear Systems-Based Position Estimation for Lower Limb Exoskeletons

    PubMed Central

    Nogueira, Samuel L.; Siqueira, Adriano A. G.; Inoue, Roberto S.; Terra, Marco H.

    2014-01-01

In this paper, we deal with Markov Jump Linear Systems-based filtering applied to robotic rehabilitation. The angular positions of an impedance-controlled exoskeleton, designed to help stroke and spinal cord injured patients during walking rehabilitation, are estimated. Standard position estimation approaches adopt Kalman filters (KF) to improve the performance of inertial measurement units (IMUs) based on individual link configurations. Consequently, for a multi-body system, like a lower limb exoskeleton, the inertial measurements of one link (e.g., the shank) are not taken into account in the position estimation of other links (e.g., the foot). In this paper, we propose a collective modeling of all inertial sensors attached to the exoskeleton, combining them in a Markovian estimation model in order to get the best information from each sensor. In order to demonstrate the effectiveness of our approach, simulation results regarding a set of human footsteps, with four IMUs and three encoders attached to the lower limb exoskeleton, are presented. A comparative study between the Markovian estimation system and the standard one is performed considering a wide range of parametric uncertainties. PMID:24451469
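For reference, the per-link baseline that the Markovian scheme is compared against reduces, in its simplest scalar form, to a random-walk Kalman filter on each angle; the noise parameters below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=1e-1):
    """Scalar random-walk Kalman filter over a sequence of measurements zs.
    q: assumed process-noise variance, r: assumed measurement-noise variance."""
    x, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for z in zs:
        p += q               # predict: random-walk state, variance grows
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return np.array(out)
```

The Markov Jump Linear Systems filter generalizes this by stacking all link states into one model and letting the filter parameters switch with a Markov chain, so each sensor informs every link's estimate.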

  4. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution

    PubMed Central

    Djordjevic, Ivan B.

    2015-01-01

Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the processes of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually

  5. Flight deck human-automation mode confusion detection using a generalized fuzzy hidden Markov model

    NASA Astrophysics Data System (ADS)

Lyu, Hao

Due to the need for aviation safety, convenience, and efficiency, the autopilot has been introduced into the cockpit. The fast development of the autopilot has brought great benefits to the aviation industry. On the human side, the flight deck has been designed to be a complex, tightly-coupled, and spatially distributed system. The problem of dysfunctional interaction between the pilot and the automation (the human-automation interaction issue) has become more and more visible. Thus, timely detection of a mismatch between the pilot's expectation and the automation's behavior is required. In order to solve this challenging problem, separate modeling of the pilot and the automation is necessary. In this thesis, an intent-based framework is introduced to detect human-automation interaction issues. Under this framework, the pilot's expectation of the aircraft is modeled by pilot intent, while the behavior of the automation system is modeled by automation intent. Mode confusion is detected when the automation intent differs from the pilot intent. The pilot intent is inferred by comparing the target value set by the pilot with the aircraft's current state. Meanwhile, the automation intent is inferred through the Generalized Fuzzy Hidden Markov Model (GFHMM), which is an extension of the classical Hidden Markov Model. The stochastic characteristic of the "hidden" intents is considered by introducing fuzzy logic. Different from previous approaches to inferring automation intent, GFHMM does not require a probabilistic model for certain flight modes as prior knowledge. The parameters of GFHMM (the initial fuzzy density of the intent, the fuzzy transition density, and the fuzzy emission density) are determined from flight data by using a machine learning technique, the Fuzzy C-Means clustering algorithm (FCM). Lastly, both the pilot and automation intent inference algorithms and the mode confusion detection method are validated through flight data.

  6. Markov Chain-Like Quantum Biological Modeling of Mutations, Aging, and Evolution.

    PubMed

    Djordjevic, Ivan B

    2015-08-24

Recent evidence suggests that quantum mechanics is relevant in photosynthesis, magnetoreception, enzymatic catalytic reactions, olfactory reception, photoreception, genetics, electron transfer in proteins, and evolution, to mention a few. In our recent paper published in Life, we derived the operator-sum representation of a biological channel based on codon basekets and determined the quantum channel model suitable for the study of the quantum biological channel capacity. However, this model is essentially memoryless and is not able to properly model the propagation of mutation errors in time, the process of aging, and the evolution of genetic information through generations. To solve these problems, we propose novel quantum mechanical models to accurately describe the processes of creation of spontaneous, induced, and adaptive mutations and their propagation in time. Different biological channel models with memory, proposed in this paper, include: (i) a Markovian classical model, (ii) a Markovian-like quantum model, and (iii) a hybrid quantum-classical model. We then apply these models in a study of aging and evolution of quantum biological channel capacity through generations. We also discuss key differences of these models with respect to a multilevel symmetric channel-based Markovian model and a Kimura model-based Markovian process. These models are quite general and applicable to many open problems in biology, not only biological channel capacity, which is the main focus of the paper. We will show that the famous quantum master equation approach, commonly used to describe different biological processes, is just the first-order approximation of the proposed quantum Markov chain-like model when the observation interval tends to zero. One of the important implications of this model is that the aging phenotype becomes determined by different underlying transition probabilities in both programmed and random (damage) Markov chain-like models of aging, which are mutually

  7. Effects of higher order control systems on aircraft approach and landing longitudinal handling qualities

    NASA Technical Reports Server (NTRS)

    Pasha, M. A.; Dazzo, J. J.; Silverthorn, J. T.

    1982-01-01

    An investigation of approach and landing longitudinal flying qualities, based on data generated using a variable stability NT-33 aircraft combined with significant control system dynamics is described. An optimum pilot lead time for pitch tracking, flight path angle tracking, and combined pitch and flight path angle tracking tasks is determined from a closed loop simulation using integral squared error (ISE) as a performance measure. Pilot gain and lead time were varied in the closed loop simulation of the pilot and aircraft to obtain the best performance for different control system configurations. The results lead to the selection of an optimum lead time using ISE as a performance criterion. Using this value of optimum lead time, a correlation is then found between pilot rating and performance with changes in the control system and in the aircraft dynamics. It is also shown that pilot rating is closely related to pilot workload which, in turn, is related to the amount of lead which the pilot must generate to obtain satisfactory response. The results also indicate that the pilot may use pitch angle tracking for the approach task and then add flight path angle tracking for the flare and touchdown.

  8. Combinatorial approach to generalized Bell and Stirling numbers and boson normal ordering problem

    SciTech Connect

    Mendez, M.A.; Blasiak, P.; Penson, K.A.

    2005-08-01

We consider the numbers arising in the problem of normal ordering of expressions in boson creation a† and annihilation a operators ([a, a†] = 1). We treat a general form of a boson string (a†)^{r_n} a^{s_n} ... (a†)^{r_2} a^{s_2} (a†)^{r_1} a^{s_1}, which is shown to be associated with generalizations of Stirling and Bell numbers. The recurrence relations and closed-form expressions (Dobinski-type formulas) are obtained for these quantities by both algebraic and combinatorial methods. By extensive use of methods of combinatorial analysis we prove the equivalence of the aforementioned problem to the enumeration of special families of graphs. This link provides a combinatorial interpretation of the numbers arising in this normal ordering problem.
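For the simplest string, (a†a)^n = Σ_k S(n,k) (a†)^k a^k, the normal-ordering coefficients are the classical Stirling numbers of the second kind, and their row sums are the Bell numbers. Both follow from the standard recurrence; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell numbers as row sums of the Stirling triangle."""
    return sum(stirling2(n, k) for k in range(n + 1))
```

The generalized Stirling numbers of the paper play the same coefficient role for arbitrary boson strings; the classical case above is the r_i = s_i = 1 special case.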

  9. Second-order corrections to neutrino two-flavor oscillation parameters in the wave packet approach

    NASA Astrophysics Data System (ADS)

    Bernardini, A. E.; Guzzo, M. M.; Torres, F. R.

    2006-11-01

We report about an analytic study involving the intermediate wave packet formalism for quantifying the physically relevant information which appears in the neutrino two-flavor conversion formula, helping us to obtain more precise limits and ranges for neutrino flavor oscillation. By following the sequence of analytic approximations where we assume a strictly peaked momentum distribution and consider the second-order corrections in a power series expansion of the energy, we point out a residual time-dependent phase which, coupled with the spreading/slippage effects, can subtly modify the neutrino-oscillation parameters and limits. Such second-order effects are usually ignored in the relativistic wave packet treatment, but they present an evident dependence on the propagation regime so that some small modifications to the oscillation pattern, even in the ultra-relativistic limit, can be quantified. These modifications are implemented in the confrontation with the neutrino-oscillation parameter range (mass-squared difference Δm² and mixing angle θ), where we assume the same wave packet parameters previously noticed in the literature in a kind of toy model for some reactor experiments. Generically speaking, our analysis parallels the recent experimental purposes which are concerned with higher-precision parameter measurements. To summarize, we show that the effectiveness of a more accurate determination of Δm² and θ depends on the wave packet width a and on the averaged propagating energy flux Ē, which still correspond to open variables for some classes of experiments.
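For context, the plane-wave two-flavor formula that these wave-packet corrections modify is, in the usual units (L in km, E in GeV, Δm² in eV²), P = sin²(2θ) sin²(1.27 Δm² L/E). A minimal sketch (parameter values in the test are illustrative, not the paper's fit):

```python
import numpy as np

def p_oscillation(L_km, E_GeV, dm2_eV2, theta):
    """Two-flavor vacuum oscillation probability in the plane-wave limit,
    i.e. without the wave-packet corrections discussed above."""
    return np.sin(2.0 * theta) ** 2 * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2
```

The second-order wave-packet effects enter as small L/E-dependent distortions of this pattern, which is why they feed directly into the extracted Δm² and θ ranges.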

  10. Constraint-preserving boundary conditions in the 3+1 first-order approach

    SciTech Connect

    Bona, C.; Bona-Casas, C.

    2010-09-15

    A set of energy-momentum constraint-preserving boundary conditions is proposed for the first-order Z4 case. The stability of a simple numerical implementation is tested in the linear regime (robust stability test), both with the standard corner and vertex treatment and with a modified finite-differences stencil for boundary points which avoids corners and vertices even in Cartesian-like grids. Moreover, the proposed boundary conditions are tested in a strong-field scenario, the Gowdy waves metric, showing the expected rate of convergence. The accumulated amount of energy-momentum constraint violations is similar or even smaller than the one generated by either periodic or reflection conditions, which are exact in the Gowdy waves case. As a side theoretical result, a new symmetrizer is explicitly given, which extends the parametric domain of symmetric hyperbolicity for the Z4 formalism. The application of these results to first-order Baumgarte-Shapiro-Shibata-Nakamura-like formalisms is also considered.

  11. A first-order time-domain Green's function approach to supersonic unsteady flow

    NASA Technical Reports Server (NTRS)

    Freedman, M. I.; Tseng, K.

    1985-01-01

    A time-domain Green's Function Method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range wherein the linear potential flow assumption is valid. The Green's function method is employed in order to convert the potential-flow differential equation into an integral one. This integral equation is then discretized in space through a standard finite-element technique, and in time through a finite-difference scheme, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. Consistent with the spatial linear (first-order) finite-element approximations, the potential and co-normalwash are assumed to vary linearly in time. The long-range goal of our research is to develop a comprehensive theory for unsteady supersonic potential aerodynamics which is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.

  12. Higher-order effects on the properties of the optical compact bright pulse: Collective variable approach

    NASA Astrophysics Data System (ADS)

    Pokam Nguewawe, Chancelor; Fewo, Serge I.; Yemélé, David

    2017-01-01

    The effects of higher-order (HO) terms on the properties of the compact bright (CB) pulse described by the dispersionless nonlocal nonlinear Schrödinger (DNNLS) equation are investigated. These effects include third-order dispersion (TOD), the Raman term, and the time derivative of the pulse envelope. By means of the collective variable method, the dynamical behavior of the pulse amplitude, width, frequency, velocity, phase, and chirp during propagation is pointed out. The results indicate that the CB pulse experiences a self-frequency shift and self-steepening, respectively, in the presence of an isolated Raman term and the time derivative of the pulse envelope and acquires a velocity as the result of the TOD effect. In addition, TOD may also induce the breathing mode inside the variation of the pulse parameters when the width of the input pulse is slightly less than that of the unperturbed CB pulse. The combination of these terms, indispensable for describing ultrashort pulses, reproduces all these phenomena in the CB pulse behavior. Further, other properties are observed, namely, the pulse decay, the breathing mode even when the unperturbed CB pulse is taken as the input signal, and the attenuated pulse. These results are in good agreement with the results of the direct numerical simulations of the DNNLS equation with HO terms.

  13. Hidden Markov models for evolution and comparative genomics analysis.

    PubMed

    Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A

    2013-01-01

    The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the evolution of discrete states is generally used for the reconstruction of ancestral states. We modify this model to account for the case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is the formulation of the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. the prediction score) is more accurate than reconstruction from the discrete state assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.

  14. Searching for convergence in phylogenetic Markov chain Monte Carlo.

    PubMed

    Beiko, Robert G; Keith, Jonathan M; Harlow, Timothy J; Ragan, Mark A

    2006-08-01

    Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.

  15. Measuring the Edwards-Anderson order parameter of the Bose glass: A quantum gas microscope approach

    NASA Astrophysics Data System (ADS)

    Thomson, S. J.; Walker, L. S.; Harte, T. L.; Bruce, G. D.

    2016-11-01

    With the advent of spatially resolved fluorescence imaging in quantum gas microscopes, it is now possible to directly image glassy phases and probe the local effects of disorder in a highly controllable setup. Here we present numerical calculations using a spatially resolved local mean-field theory, show that it captures the essential physics of the disordered system, and use it to simulate the density distributions seen in single-shot fluorescence microscopy. From these simulated images we extract local properties of the phases which are measurable by a quantum gas microscope and show that unambiguous detection of the Bose glass is possible. In particular, we show that experimental determination of the Edwards-Anderson order parameter is possible in a strongly correlated quantum system using existing experiments. We also suggest modifications to the experiments which will allow further properties of the Bose glass to be measured.
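
    As a rough illustration, an Edwards-Anderson-style order parameter can be estimated from repeated single-shot density images by averaging each lattice site over shots and then taking the spatial variance of those local means. The estimator and data layout below (one list of per-site densities per shot) are a generic sketch and may differ from the precise observable the authors construct:

```python
def ea_order_parameter(shots):
    """Generic EA-style estimator from repeated single-shot density images.
    shots: list of shots, each a list of per-site densities (hypothetical
    data format). Average each site over shots, then take the spatial
    variance of the resulting local means."""
    n_shots, n_sites = len(shots), len(shots[0])
    site_means = [sum(shot[i] for shot in shots) / n_shots for i in range(n_sites)]
    grand_mean = sum(site_means) / n_sites
    return sum((m - grand_mean) ** 2 for m in site_means) / n_sites
```

    A spatially homogeneous phase gives q ≈ 0 even with shot-to-shot number fluctuations, while a disorder-frozen density pattern that repeats across shots gives q > 0, which is the glassy signature.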

  16. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.

    PubMed

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs for this NP-complete problem. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSM is missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were conducted on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.
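
    The core idea — each row of a numeric matrix induces an ordering of the columns, and an OPSM is a set of rows whose values rise along a common column pattern — can be sketched as follows. The toy matrix and pattern are hypothetical data, and distinct values per row are assumed:

```python
def row_ordering(row):
    """Column indices of one row sorted by increasing value
    (the row's induced sequence; assumes distinct values)."""
    return sorted(range(len(row)), key=lambda j: row[j])

def supports(row, cols):
    """True if the row's values strictly increase along the candidate
    column pattern, i.e. the pattern is a subsequence of the row's
    induced ordering."""
    return all(row[a] < row[b] for a, b in zip(cols, cols[1:]))

matrix = [
    [1.0, 3.0, 2.0, 4.0],
    [2.0, 9.0, 4.0, 7.0],
    [5.0, 1.0, 8.0, 6.0],
]
pattern = [0, 2, 3]  # columns expected in increasing order
print([supports(r, pattern) for r in matrix])  # [True, True, False]
```

    Rows 0 and 1 together with columns (0, 2, 3) form an OPSM; mining all such (rows, pattern) pairs exhaustively is what the ACS-based method above makes tractable.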

  17. Finite Markov Chains and Random Discrete Structures

    DTIC Science & Technology

    1994-07-26

    arrays with fixed margins 4. Persi Diaconis and Susan Holmes, Three Examples of Monte-Carlo Markov Chains: at the Interface between Statistical Computing...solutions for a mathematical model of thermomechanical phase transitions in shape memory materials with Landau-Ginzburg free energy 1168 Angelo Favini

  18. Semi-Markov Unreliability Range Evaluator (SURE)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1989-01-01

    An analysis tool for reconfigurable, fault-tolerant systems, SURE provides an efficient way to calculate accurate upper and lower bounds on death-state probabilities for a large class of semi-Markov models. The calculated bounds are close enough for use in reliability studies of ultrareliable computer systems. Written in PASCAL for interactive execution; runs on DEC VAX computers under VMS.

  19. Markov Chain Estimation of Avian Seasonal Fecundity

    EPA Science Inventory

    To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...

  20. Multiscale Representations of Markov Random Fields

    DTIC Science & Technology

    1992-09-08

    modeling a wide variety of biological, chemical, electrical, mechanical and economic phenomena, [10]. Moreover, the Markov structure makes the models...Transactions on Information Theory, 18:232-240, March 1972. [65] J. WOODS AND C. RADEWAN, "Kalman Filtering in Two Dimensions," IEEE Transactions on

  1. Approaches to adaptive digital control focusing on the second order modal descriptions of large, flexible spacecraft dynamics

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.

    1979-01-01

    The widespread modal analysis of flexible spacecraft, and the recognition that only poor a priori parameterization of the modal descriptions of individual structures is possible, have prompted the consideration of adaptive modal control strategies for distributed-parameter systems. The current major approaches to computationally efficient adaptive digital control useful in these endeavors are explained in an original, lucid manner, using modal second-order structural dynamics for algorithm explication. Difficulties in extending these lumped-parameter techniques to distributed-parameter system expansion control are cited.

  2. Hidden Markov modeling for single channel kinetics with filtering and correlated noise.

    PubMed Central

    Qin, F; Auerbach, A; Sachs, F

    2000-01-01

    Hidden Markov modeling (HMM) can be applied to extract single channel kinetics at signal-to-noise ratios that are too low for conventional analysis. There are two general HMM approaches: traditional Baum's reestimation and direct optimization. The optimization approach has the advantage that it optimizes the rate constants directly. This allows setting constraints on the rate constants, fitting multiple data sets across different experimental conditions, and handling nonstationary channels where the starting probability of the channel depends on the unknown kinetics. We present here an extension of this approach that addresses the additional issues of low-pass filtering and correlated noise. The filtering is modeled using a finite impulse response (FIR) filter applied to the underlying signal, and the noise correlation is accounted for using an autoregressive (AR) process. In addition to correlated background noise, the algorithm allows for excess open channel noise that can be white or correlated. To maximize the efficiency of the algorithm, we derive the analytical derivatives of the likelihood function with respect to all unknown model parameters. The search of the likelihood space is performed using a variable metric method. Extension of the algorithm to data containing multiple channels is described. Examples are presented that demonstrate the applicability and effectiveness of the algorithm. Practical issues such as the selection of appropriate noise AR orders are also discussed through examples. PMID:11023898
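
    At the heart of the direct-optimization approach is the likelihood of the observed record given the model, computed with the scaled forward algorithm. The sketch below shows only this core recursion, without the FIR filtering or AR noise extensions the record adds:

```python
import math

def hmm_log_likelihood(obs, pi, A, emit):
    """Scaled forward algorithm: log P(obs | model).
    pi: initial state probabilities, A[i][j]: transition probabilities,
    emit(state, y): emission probability/density of observation y.
    (The record's method layers FIR filtering and AR noise onto this core.)"""
    n = len(pi)
    alpha = [pi[i] * emit(i, obs[0]) for i in range(n)]
    c = sum(alpha)                      # scaling factor avoids underflow
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for y in obs[1:]:
        alpha = [emit(j, y) * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik
```

    Direct optimization then maximizes this log-likelihood over the rate constants (through the transition probabilities), which is what allows constraints and multi-dataset fits.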

  3. Dynamic Context-Aware Event Recognition Based on Markov Logic Networks

    PubMed Central

    Liu, Fagui; Deng, Dacheng; Li, Ping

    2017-01-01

    Event recognition in smart spaces is an important and challenging task. Most existing approaches for event recognition purely employ either logical methods that do not handle uncertainty, or probabilistic methods that can hardly manage the representation of structured information. To overcome these limitations, especially in the situation where the uncertainty of sensing data is dynamically changing over the time, we propose a multi-level information fusion model for sensing data and contextual information, and also present a corresponding method to handle uncertainty for event recognition based on Markov logic networks (MLNs) which combine the expressivity of first order logic (FOL) and the uncertainty disposal of probabilistic graphical models (PGMs). Then we put forward an algorithm for updating formula weights in MLNs to deal with data dynamics. Experiments on two datasets from different scenarios are conducted to evaluate the proposed approach. The results show that our approach (i) provides an effective way to recognize events by using the fusion of uncertain data and contextual information based on MLNs and (ii) outperforms the original MLNs-based method in dealing with dynamic data. PMID:28257113

  4. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. 
The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem.

  5. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. 
The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem.

  6. A bottom-up approach for the synthesis of highly ordered fullerene-intercalated graphene hybrids

    NASA Astrophysics Data System (ADS)

    Gournis, Dimitrios; Kouloumpis, Antonios; Dimos, Konstantinos; Spyrou, Konstantinos; Georgakilas, Vasilios; Rudolf, Petra

    2015-02-01

    Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and properties suitable for applications such as gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing and biomedicine. Towards this aim, here we describe a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer deposition technique to synthesize graphene-based layered hybrid materials hosting fullerene molecules within the interlayer space. Our film preparation consists of a bottom-up layer-by-layer process that proceeds via the formation of a hybrid organo-graphene oxide Langmuir film. The structure and composition of these hybrid fullerene-containing thin multilayers deposited on hydrophobic substrates were characterized by a combination of X-ray diffraction, Raman and X-ray photoelectron spectroscopies, atomic force microscopy and conductivity measurements. The latter revealed that the presence of C60 within the interlayer spacing leads to an increase in electrical conductivity of the hybrid material as compared to the organo-graphene matrix alone.

  7. The algebra of the general Markov model on phylogenetic trees and networks.

    PubMed

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.
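
    In the two-state case the continuous-time machinery reduces to exponentiating a 2×2 rate generator. A minimal sketch, with hypothetical rates and a truncated Taylor series standing in for a library matrix exponential:

```python
import math

def expm_2x2(Q, t, terms=40):
    """Truncated Taylor series for exp(Q*t) of a 2x2 rate matrix
    (rows sum to zero), giving the finite-time substitution matrix P(t).
    A library expm would normally be used; this is a self-contained sketch."""
    P = [[1.0, 0.0], [0.0, 1.0]]       # running sum, starts at identity
    term = [[1.0, 0.0], [0.0, 1.0]]    # current Taylor term (Q*t)^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][m] * Q[m][j] * t / k for m in range(2))
                 for j in range(2)] for i in range(2)]
        P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return P

Q = [[-0.3, 0.3], [0.1, -0.1]]                   # hypothetical rates
P = expm_2x2(Q, 2.0)
exact_p00 = (0.1 + 0.3 * math.exp(-0.8)) / 0.4   # analytic two-state result
```

    Because each row of Q sums to zero, every Taylor term beyond the identity has zero row sums, so P(t) is automatically a stochastic matrix; the "splitting" operator in the record is what extends such generators from trees to networks.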

  8. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences

    PubMed Central

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs for this NP-complete problem. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSM is missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were conducted on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method. PMID:26161131

  9. Fractional-order elastic models of cartilage: A multi-scale approach

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Royston, Thomas J.

    2010-03-01

    The objective of this research is to develop new quantitative methods to describe the elastic properties (e.g., shear modulus, viscosity) of biological tissues such as cartilage. Cartilage is a connective tissue that provides the lining for most of the joints in the body. Tissue histology of cartilage reveals a multi-scale architecture that spans a wide range from individual collagen and proteoglycan molecules to families of twisted macromolecular fibers and fibrils, and finally to a network of cells and extracellular matrix that form layers in the connective tissue. The principal cells in cartilage are chondrocytes that function at the microscopic scale by creating nano-scale networks of proteins whose biomechanical properties are ultimately expressed at the macroscopic scale in the tissue's viscoelasticity. The challenge for the bioengineer is to develop multi-scale modeling tools that predict the three-dimensional macro-scale mechanical performance of cartilage from micro-scale models. Magnetic resonance imaging (MRI) and MR elastography (MRE) provide a basis for developing such models based on the nondestructive biomechanical assessment of cartilage in vitro and in vivo. This approach, for example, uses MRI to visualize developing proto-cartilage structure, MRE to characterize the shear modulus of such structures, and fractional calculus to describe the dynamic behavior. Such models can be extended using hysteresis modeling to account for the non-linear nature of the tissue. These techniques extend the existing computational methods to predict stiffness and strength, to assess short versus long term load response, and to measure static versus dynamic response to mechanical loads over a wide range of frequencies (50-1500 Hz). In the future, such methods can perhaps be used to help identify early changes in regenerative connective tissue at the microscopic scale and to enable more effective diagnostic monitoring of the onset of disease.
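
    One standard numerical route to the fractional-order derivatives such models rely on is the Grünwald-Letnikov scheme. The sketch below is a generic textbook discretization, not the authors' modeling pipeline, checked against the known half-derivative of t²:

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t.
    Weights follow the recurrence w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k,
    i.e. w_k = (-1)^k * binom(alpha, k)."""
    n = round(t / h)
    w, acc = 1.0, f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k
        acc += w * f(t - k * h)
    return acc / h ** alpha

# Half-derivative of f(t) = t^2 at t = 1; the exact value is 2 / Gamma(2.5).
approx = gl_fractional_derivative(lambda t: t * t, 0.5, 1.0)
exact = 2.0 / math.gamma(2.5)
```

    Fractional orders between 0 and 1 interpolate between elastic (order 0) and viscous (order 1) response, which is why such "springpot" terms fit the broadband 50-1500 Hz behavior described above.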

  10. Applying diffusion-based Markov chain Monte Carlo

    PubMed Central

    Paul, Rajib; Berliner, L. Mark

    2017-01-01

    We examine the performance of a strategy for Markov chain Monte Carlo (MCMC) developed by simulating a discrete approximation to a stochastic differential equation (SDE). We refer to the approach as diffusion MCMC. A variety of motivations for the approach are reviewed in the context of Bayesian analysis. In particular, implementation of diffusion MCMC is very simple to set up, even in the presence of nonlinear models and non-conjugate priors. Also, it requires comparatively little problem-specific tuning. We implement the algorithm and assess its performance for both a test case and a glaciological application. Our results demonstrate that in some settings, diffusion MCMC is a faster alternative to a general Metropolis-Hastings algorithm. PMID:28301529
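
    The idea of simulating a discretized SDE to sample a target can be sketched with the unadjusted Langevin algorithm below, a close relative of the approach described; the record's diffusion MCMC may differ in its correction step and problem-specific details:

```python
import math, random

def langevin_mcmc(grad_logpi, x0, step, n_samples, seed=0):
    """Euler-Maruyama discretization of the overdamped Langevin SDE
    dX = 0.5 * grad log pi(X) dt + dW, whose stationary law approximates pi.
    Unadjusted variant shown for brevity; a Metropolis-Hastings correction
    is often added in practice."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_samples):
        x = x + 0.5 * step * grad_logpi(x) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# Target: standard normal, for which grad log pi(x) = -x.
samples = langevin_mcmc(lambda x: -x, 0.0, 0.05, 50000)
mean = sum(samples) / len(samples)
```

    The only tuning knob is the step size, which illustrates the "comparatively little problem-specific tuning" point made above.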

  11. Hidden Markov latent variable models with multivariate longitudinal data.

    PubMed

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

    Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use.

  12. Markov models of molecular kinetics: generation and validation.

    PubMed

    Prinz, Jan-Hendrik; Wu, Hao; Sarich, Marco; Keller, Bettina; Senne, Martin; Held, Martin; Chodera, John D; Schütte, Christof; Noé, Frank

    2011-05-07

    Markov state models of molecular kinetics (MSMs), in which the long-time statistical dynamics of a molecule is approximated by a Markov chain on a discrete partition of configuration space, have seen widespread use in recent years. This approach has many appealing characteristics compared to straightforward molecular dynamics simulation and analysis, including the potential to mitigate the sampling problem by extracting long-time kinetic information from short trajectories and the ability to straightforwardly calculate expectation values and statistical uncertainties of various stationary and dynamical molecular observables. In this paper, we summarize the current state of the art in generation and validation of MSMs and give some important new results. We describe an upper bound for the approximation error made by modeling molecular dynamics with an MSM and we show that this error can be made arbitrarily small with surprisingly little effort. In contrast to previous practice, it becomes clear that the best MSM is not obtained by the most metastable discretization, but the MSM can be much improved if non-metastable states are introduced near the transition states. Moreover, we show that it is not necessary to resolve all slow processes by the state space partitioning, but individual dynamical processes of interest can be resolved separately. We also present an efficient estimator for reversible transition matrices and a robust test to validate that an MSM reproduces the kinetics of the molecular dynamics data.
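
    The baseline estimator behind any MSM — count transitions at a chosen lag time and row-normalize — can be sketched as follows; the record's contributions (reversible estimation, error bounds, discretization guidance) refine this starting point. The trajectory is toy data:

```python
def estimate_msm(traj, n_states, lag=1):
    """Maximum-likelihood (row-normalized count) transition matrix at the
    given lag time: the basic, non-reversible MSM estimator."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(traj, traj[lag:]):
        counts[a][b] += 1
    T = []
    for row in counts:
        s = sum(row)
        T.append([c / s if s else 0.0 for c in row])
    return T

traj = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]  # toy discrete trajectory
print(estimate_msm(traj, 2))
```

    Repeating the estimate at several lag times and checking that implied timescales are lag-independent is the usual Markovianity validation this record formalizes.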

  13. Modeling Driver Behavior near Intersections in Hidden Markov Model.

    PubMed

    Li, Juan; He, Qinglian; Zhou, Hang; Guan, Yunlin; Dai, Wei

    2016-12-21

    Intersections are one of the major locations where safety is a big concern to drivers. Inappropriate driver behaviors in response to frequent changes when approaching intersections often lead to intersection-related crashes or collisions. Thus to better understand driver behaviors at intersections, especially in the dilemma zone, a Hidden Markov Model (HMM) is utilized in this study. With the discrete data processing, the observed dynamic data of vehicles are used for the inference of the Hidden Markov Model. The Baum-Welch (B-W) estimation algorithm is applied to calculate the vehicle state transition probability matrix and the observation probability matrix. When combined with the Forward algorithm, the most likely state of the driver can be obtained. Thus the model can be used to measure the stability and risk of driver behavior. It is found that drivers' behaviors in the dilemma zone are of lower stability and higher risk compared with those in other regions around intersections. In addition to the B-W estimation algorithm, the Viterbi Algorithm is utilized to predict the potential dangers of vehicles. The results can be applied to driving assistance systems to warn drivers to avoid possible accidents.
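
    The Viterbi decoding step used here to infer the most likely driver state sequence is the standard dynamic program below; the transition and emission numbers are hypothetical stand-ins, not the paper's fitted values:

```python
def viterbi(obs, pi, A, B):
    """Most likely hidden state path (Viterbi algorithm).
    pi: initial probabilities, A: transition matrix,
    B[state][symbol]: emission probabilities."""
    n = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(n)]
    back = []
    for y in obs[1:]:
        prev = delta
        delta, ptr = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: prev[i] * A[i][j])
            ptr.append(best)
            delta.append(prev[best] * A[best][j] * B[j][y])
        back.append(ptr)
    path = [max(range(n), key=lambda i: delta[i])]
    for ptr in reversed(back):       # trace back the argmax pointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Hypothetical two-state example: state 0 "stable", state 1 "risky".
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # [0, 0, 1, 1, 1]
```

    In long sequences the products underflow, so log-probabilities would be used in practice; the short example keeps plain products for readability.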

  15. Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks

    NASA Astrophysics Data System (ADS)

    Johnson, Joseph

    2016-03-01

    We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called ``metanumbers'') support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a ``supernet'' of all numerical information supporting new initiatives in AI.
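
The eigenvector-based clustering idea can be sketched as follows: build a row-stochastic Markov matrix from a proximity matrix and split entities by the sign structure of the second (slowest non-trivial) eigenvector. The 5-entity proximity matrix is an invented toy example with two obvious groups.

```python
import numpy as np

# Hypothetical proximity matrix for five entities: two clear groups,
# {0, 1, 2} and {3, 4}, with weak cross-links between them.
W = np.array([
    [0.0, 5.0, 4.0, 0.1, 0.1],
    [5.0, 0.0, 6.0, 0.1, 0.1],
    [4.0, 6.0, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.0, 7.0],
    [0.1, 0.1, 0.1, 7.0, 0.0],
])

M = W / W.sum(axis=1, keepdims=True)   # row-stochastic Markov matrix
vals, vecs = np.linalg.eig(M)
order = np.argsort(-vals.real)         # eigenvalue 1 first, slow modes next
second = vecs[:, order[1]].real        # slowest non-trivial mode
labels = (second > 0).astype(int)      # sign structure -> cluster labels
print(labels)                          # entities 0-2 vs 3-4 separated
```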

  16. Quantification of heart rate variability by discrete nonstationary non-Markov stochastic processes

    NASA Astrophysics Data System (ADS)

    Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail

    2002-04-01

    We develop the statistical theory of discrete nonstationary non-Markov random processes in complex systems. The objective of this paper is to find the chain of finite-difference non-Markov kinetic equations for time correlation functions (TCF) in terms of nonstationary effects. The developed theory starts from a careful analysis of time correlation through the nonstationary dynamics of vectors of initial and final states and the nonstationary normalized TCF. Using the projection operator technique, we find the chain of finite-difference non-Markov kinetic equations for discrete nonstationary TCF and for the set of nonstationary discrete memory functions (MF's); the latter contains supplementary information about the nonstationary properties of the complex system as a whole. Another relevant result of our theory is the construction of the set of dynamic parameters of nonstationarity, which carries information about the nonstationarity effects. The full set of dynamic, spectral, and kinetic parameters and kinetic functions (TCF, short MF's, statistical spectra of the non-Markovity parameter, and statistical spectra of the nonstationarity parameter) makes it possible to acquire in-depth information about the discreteness, non-Markov effects, long-range memory, and nonstationarity of the underlying processes. The developed theory is applied to analyze long-time (Holter) series of RR intervals from human ECG's. We studied two groups: healthy subjects and patients after myocardial infarction. In both groups we observed effects of fractality, standard and restricted self-organized criticality, and also a certain specific arrangement of spectral lines. The results demonstrate that the power spectra of the memory functions M_n(t) of all orders (n=1,2,...) exhibit clearly expressed fractal features. We have found that the full sets of non-Markov, discrete, and nonstationary parameters can serve as reliable and powerful means of diagnosis of the states of the cardiovascular system.

  17. Minimizing costs while meeting safety requirements: Modeling deterministic (imperfect) staggered tests using standard Markov models for SIL calculations.

    PubMed

    Rouvroye, Jan L; Wiegerinck, Jan A M

    2006-10-01

    In industry, potentially hazardous (technical) structures are equipped with safety systems in order to protect people, the environment, and assets from the consequences of accidents by reducing the probability of incidents occurring. Not only companies but also society will want to know what the effect of these safety measures is: society in terms of "likelihood of undesired events," and companies in addition in terms of "value for money," the expected benefits per dollar or euro invested that these systems provide. As a compromise between the demands of society (the safer the better) and industry (but at what cost), governments in many countries have decided to impose standards on industry with respect to safety requirements. These standards use the average probability of failure on demand as the main performance indicator for these systems and require, for the societal reason given before, that this probability remain below a certain value depending on the given risk. The main factor commonly used in industry to "fine-tune" the average probability of failure on demand for a given system configuration, in order to comply with these standards at acceptable financial risk for the company, is "optimizing" the test strategy (interval, coverage, and procedure). In industry, meeting the criterion on the average probability of failure on demand is often demonstrated by using well-accepted mathematical models, such as Markov models from the literature, and adapting them to the actual situation. This paper shows the implications and potential pitfalls of using this common practical approach in a situation where the test strategy is changed. Adapting an existing Markov model can lead to unexpected results, and this paper will demonstrate that a different model has to be developed. In addition, the authors propose an approach that can be applied in industry without suffering from the problems mentioned above.
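
The role of imperfect proof tests in the average probability of failure on demand can be illustrated with a minimal two-state (working / failed-undetected) model, integrated numerically. All rates, the test interval, and the test coverage below are assumed illustrative values, not figures from the paper.

```python
# Minimal PFDavg sketch: dangerous undetected failures accumulate at rate
# lam; a periodic proof test with imperfect coverage repairs most of them.
lam = 1e-6             # dangerous undetected failure rate per hour (assumed)
test_interval = 8760.0 # proof-test interval: one year, in hours
coverage = 0.9         # imperfect test: detects only 90% of failures

dt = 1.0
pfd_sum, p_fail, t_total = 0.0, 0.0, 0.0
for interval in range(10):                  # simulate ten test intervals
    for _ in range(int(test_interval / dt)):
        p_fail += (1.0 - p_fail) * lam * dt # failure probability accumulates
        pfd_sum += p_fail * dt
        t_total += dt
    p_fail *= (1.0 - coverage)              # imperfect test repairs 90%

pfd_avg = pfd_sum / t_total
print(f"PFDavg = {pfd_avg:.2e}")  # above lam*T/2 due to undetected residue
```

With perfect coverage the familiar approximation PFDavg ≈ λT/2 would apply; the residual undetected failures raise the average, which is the kind of effect that changes when the test strategy changes.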

  18. A Preisach approach to modeling partial phase transitions in the first order magnetocaloric material MnFe(P,As)

    NASA Astrophysics Data System (ADS)

    von Moos, L.; Bahl, C. R. H.; Nielsen, K. K.; Engelbrecht, K.; Küpferling, M.; Basso, V.

    2014-02-01

    Magnetic refrigeration is an emerging technology that could provide energy efficient and environmentally friendly cooling. Magnetocaloric materials in which a structural phase transition is found concurrently with the magnetic phase transition are often termed first order magnetocaloric materials. Such materials are potential candidates for application in magnetic refrigeration devices. However, the first order materials often have adverse properties such as hysteresis, making actual performance troublesome to quantify, a subject not thoroughly studied within this field. Here we investigate the behavior of MnFe(P,As) under partial phase transitions, which is similar to what materials experience in actual magnetic refrigeration devices. Partial phase transition curves, in the absence of a magnetic field, are measured using calorimetry and the experimental results are compared to simulations of a Preisach-type model. We show that this approach is applicable and discuss what experimental data is required to obtain a satisfactory material model.

  19. Markov-CA model using analytical hierarchy process and multiregression technique

    NASA Astrophysics Data System (ADS)

    Omar, N. Q.; Sanusi, S. A. M.; Hussin, W. M. W.; Samat, N.; Mohammed, K. S.

    2014-06-01

    The unprecedented increase in population and the rapid rate of urbanisation have led to extensive land use changes. Cellular automata (CA) are increasingly used to simulate a variety of urban dynamics. This paper introduces a new CA based on an integration model built on multiple regression and multi-criteria evaluation to improve the representation of the CA transition rule. The multi-criteria evaluation is implemented by utilising data on the environmental and socioeconomic factors in the study area to produce suitability maps (SMs) using the analytical hierarchy process, a well-known method, before being integrated to generate suitability maps for the period from 1984 to 2010 under different decision-making scenarios, which condition the next step of CA generation. The suitability maps are compared in order to find the best maps based on their R2 values, a comparison that can help stakeholders make better decisions. The best resulting suitability map then defines the transition rule for the final step of the CA model. The approach used in this study provides a mechanism for monitoring and evaluating land-use and land-cover changes in Kirkuk city, Iraq, owing to changes in the structures of governments, wars, and an economic blockade over the past decades. The present study asserts the high applicability and flexibility of the Markov-CA model. The results have shown that the model and its interrelated concepts perform rather well.
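
The coupling of a Markov transition probability with a suitability map in a CA update can be sketched as below. The transition probability, grid size, and random suitability surface are invented stand-ins (a real application would derive them from historical land-cover maps and the AHP).

```python
import numpy as np

rng = np.random.default_rng(1)
# States: 0 = non-urban, 1 = urban.  A hypothetical Markov-derived
# per-step conversion probability, modulated by a suitability map.
p_urbanise = 0.1                                  # P(non-urban -> urban)
grid = (rng.random((50, 50)) < 0.1).astype(int)   # 10% urban seed
suit = rng.random((50, 50))                       # stand-in for the AHP map

for step in range(10):
    # Non-urban cells convert with probability p_urbanise * suitability.
    convert = (grid == 0) & (rng.random(grid.shape) < p_urbanise * suit)
    grid[convert] = 1          # urbanisation is irreversible in this sketch

print(grid.mean())             # urban fraction after ten steps
```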

  20. Stochastic Dynamics through Hierarchically Embedded Markov Chains

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.

    2017-02-01

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
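
For small chains, the stationary distribution that such hierarchies approximate can be computed directly as the left eigenvector of the transition matrix for eigenvalue 1. A minimal sketch on an assumed toy birth-death chain:

```python
import numpy as np

# Toy birth-death chain on {0, 1, 2} (transition probabilities assumed).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.4, 0.6]])

vals, vecs = np.linalg.eig(P.T)          # left eigenvectors of P
k = np.argmin(np.abs(vals - 1.0))        # eigenvalue 1 -> stationary mode
pi = np.abs(vecs[:, k].real)
pi /= pi.sum()                           # normalize to a distribution
print(pi)                                # → approximately [2/7, 4/7, 1/7]
```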

  2. A critical appraisal of Markov state models

    NASA Astrophysics Data System (ADS)

    Schütte, Ch.; Sarich, M.

    2015-09-01

    Markov State Modelling as a concept for a coarse-grained description of the essential kinetics of a molecular system in equilibrium has gained a lot of attention recently. The last 10 years have seen ever-increasing publication activity on how to construct Markov State Models (MSMs) for very different molecular systems, ranging from peptides to proteins, from RNA to DNA, and from molecular sensors to molecular aggregation. Simultaneously, the accompanying theory behind MSM building and approximation quality has been developed well beyond the concepts and ideas used in practical applications. This article reviews the main theoretical results, provides links to crucial new developments, outlines the full power of MSM building today, and discusses the essential limitations still to overcome.

  3. The cutoff phenomenon in finite Markov chains.

    PubMed Central

    Diaconis, P

    1996-01-01

    Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633
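
The cutoff in the Ehrenfest urn can be seen numerically by tracking the total-variation distance to the stationary (binomial) distribution; this sketch uses the standard lazy version of the chain so that it is aperiodic.

```python
import numpy as np
from math import comb

n = 100
# Lazy Ehrenfest urn on {0, ..., n}: with prob 1/2 stay put, otherwise
# move a uniformly chosen ball to the other urn.
P = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    P[k, k] += 0.5
    if k > 0:
        P[k, k - 1] += 0.5 * k / n
    if k < n:
        P[k, k + 1] += 0.5 * (n - k) / n

pi = np.array([comb(n, k) for k in range(n + 1)], float) / 2 ** n
mu = np.zeros(n + 1)
mu[0] = 1.0                      # start with all balls in one urn

tv = []
for t in range(400):
    mu = mu @ P
    tv.append(0.5 * np.abs(mu - pi).sum())   # total-variation distance

# Distance stays near 1 well before the cutoff (around (n/2) log n steps
# for this lazy chain) and is near 0 well after it.
print(round(tv[24], 3), round(tv[399], 3))
```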

  4. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
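
The simplest of the iterative methods alluded to here is power iteration: repeatedly multiplying a probability vector by the transition matrix converges to the eigenvector for the known eigenvalue 1. A sketch on an assumed small chain:

```python
import numpy as np

def stationary_power(P, tol=1e-12, max_iter=10000):
    """Power iteration for the stationary vector of a stochastic matrix P
    (the left eigenvector associated with the known eigenvalue 1)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # uniform starting vector
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# Small reversible random-walk-like chain (values assumed for illustration).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi = stationary_power(P)
print(pi)   # → approximately [0.25, 0.5, 0.25]
```

Krylov-subspace methods, as the abstract notes, accelerate this basic idea for large sparse chains.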

  5. Engineering the path to higher-order thinking in elementary education: A problem-based learning approach for STEM integration

    NASA Astrophysics Data System (ADS)

    Rehmat, Abeera Parvaiz

    As we progress into the 21st century, higher-order thinking skills and achievement in science and math are essential to meet the educational requirement of STEM careers. Educators need to think of innovative ways to engage and prepare students for current and future challenges while cultivating an interest among students in STEM disciplines. An instructional pedagogy that can capture students' attention, support interdisciplinary STEM practices, and foster higher-order thinking skills is problem-based learning. Problem-based learning embedded in the social constructivist view of teaching and learning (Savery & Duffy, 1995) promotes self-regulated learning that is enhanced through exploration, cooperative social activity, and discourse (Fosnot, 1996). This quasi-experimental mixed methods study was conducted with 98 fourth grade students. The study utilized STEM content assessments, a standardized critical thinking test, STEM attitude survey, PBL questionnaire, and field notes from classroom observations to investigate the impact of problem-based learning on students' content knowledge, critical thinking, and their attitude towards STEM. Subsequently, it explored students' experiences of STEM integration in a PBL environment. The quantitative results revealed a significant difference between groups in regards to their content knowledge, critical thinking skills, and STEM attitude. From the qualitative results, three themes emerged: learning approaches, increased interaction, and design and engineering implementation. From the overall data set, students described the PBL environment to be highly interactive that prompted them to employ multiple approaches, including design and engineering to solve the problem.

  6. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
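
Bayesian model-order selection of this kind can be sketched with the closed-form Dirichlet-multinomial evidence: score each candidate order k by its log marginal likelihood and pick the best. The sticky binary chain below is synthetic, and the symmetric Dirichlet prior is an assumption of this sketch.

```python
import math
import random
from collections import defaultdict

def log_evidence(seq, k, m, alpha=1.0):
    """Log marginal likelihood (Bayesian evidence) of an order-k Markov
    chain over an m-letter alphabet, with a symmetric Dirichlet(alpha)
    prior on each row of transition probabilities."""
    rows = defaultdict(lambda: [0] * m)
    for i in range(k, len(seq)):
        rows[tuple(seq[i - k:i])][seq[i]] += 1
    lg = math.lgamma
    total = 0.0
    for counts in rows.values():           # Dirichlet-multinomial per row
        n = sum(counts)
        total += lg(m * alpha) - lg(n + m * alpha)
        total += sum(lg(c + alpha) - lg(alpha) for c in counts)
    return total

# Data from a genuinely first-order binary chain (stays put with p = 0.9).
random.seed(3)
seq = [0]
for _ in range(2000):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])

best = max(range(4), key=lambda k: log_evidence(seq, k, 2))
print(best)   # the evidence trades fit against complexity
```

Higher orders fit the data no better but pay an automatic complexity penalty through the evidence, so the true order wins.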

  7. Separating method factors and higher order traits of the Big Five: a meta-analytic multitrait-multimethod approach.

    PubMed

    Chang, Luye; Connelly, Brian S; Geeza, Alexis A

    2012-02-01

    Though most personality researchers now recognize that ratings of the Big Five are not orthogonal, the field has been divided about whether these trait intercorrelations are substantive (i.e., driven by higher order factors) or artifactual (i.e., driven by correlated measurement error). We used a meta-analytic multitrait-multirater study to estimate trait correlations after common method variance was controlled. Our results indicated that common method variance substantially inflates trait correlations, and, once controlled, correlations among the Big Five became relatively modest. We then evaluated whether two different theories of higher order factors could account for the pattern of Big Five trait correlations. Our results did not support Rushton and colleagues' (Rushton & Irwing, 2008; Rushton et al., 2009) proposed general factor of personality, but Digman's (1997) α and β metatraits (relabeled by DeYoung, Peterson, and Higgins (2002) as Stability and Plasticity, respectively) produced viable fit. However, our models showed considerable overlap between Stability and Emotional Stability and between Plasticity and Extraversion, raising the question of whether these metatraits are redundant with their dominant Big Five traits. This pattern of findings was robust when we included only studies whose observers were intimately acquainted with targets. Our results underscore the importance of using a multirater approach to studying personality and the need to separate the causes and outcomes of higher order metatraits from those of the Big Five. We discussed the implications of these findings for the array of research fields in which personality is studied.

  8. Identifying residue–residue clashes in protein hybrids by using a second-order mean-field approach

    PubMed Central

    Moore, Gregory L.; Maranas, Costas D.

    2003-01-01

    In this article, a second-order mean-field-based approach is introduced for characterizing the complete set of residue–residue couplings consistent with a given protein structure. This information is subsequently used to classify protein hybrids with respect to their potential to be functional based on the presence/absence and severity of clashing residue–residue interactions. First, atomistic representations of both the native and denatured states are used to calculate rotamer–backbone, rotamer–intrinsic, and rotamer–rotamer conformational energies. Next, this complete conformational energy table is coupled with a second-order mean-field description to elucidate the probabilities of all possible rotamer–rotamer combinations in a minimum Helmholtz free-energy ensemble. Computational results for the dihydrofolate reductase family reveal correlation in substitution patterns between not only contacting but also distal second-order structural elements. Residue–residue clashes in hybrid proteins are quantified by contrasting the ensemble probabilities of protein hybrids against the ones of the original parental sequences. Good agreement with experimental data is demonstrated by superimposing these clashes against the functional crossover profiles of bidirectional incremental truncation libraries for Escherichia coli and human glycinamide ribonucleotide transformylases. PMID:12700353

  9. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridging between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
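
The time-homogeneous version of the generator test can be sketched as follows: take the principal matrix logarithm of the transition matrix and check the generator conditions (zero row sums, non-negative off-diagonals). The 3-state generator below is an invented embeddable example; the eigendecomposition-based logarithm assumes a diagonalizable matrix.

```python
import numpy as np

def generator_candidate(P):
    """Principal matrix logarithm of a diagonalizable stochastic matrix,
    computed via its eigendecomposition."""
    vals, V = np.linalg.eig(P)
    L = V @ np.diag(np.log(vals.astype(complex))) @ np.linalg.inv(V)
    return L.real

def is_generator(Q, tol=1e-9):
    """Generator conditions: rows sum to 0, off-diagonals non-negative."""
    n = Q.shape[0]
    off_ok = all(Q[i, j] >= -tol for i in range(n) for j in range(n) if i != j)
    return off_ok and np.allclose(Q.sum(axis=1), 0.0, atol=1e-8)

# Embeddable example: P = expm(Q_true) for a valid generator Q_true.
Q_true = np.array([[-0.30, 0.20, 0.10],
                   [0.10, -0.20, 0.10],
                   [0.05, 0.15, -0.20]])
P = np.eye(3)                     # matrix exponential via Taylor series
term = np.eye(3)
for k in range(1, 30):
    term = term @ Q_true / k
    P += term

Q = generator_candidate(P)
print(is_generator(Q), np.allclose(Q, Q_true, atol=1e-6))   # → True True
```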

  10. Sequence alignments and pair hidden Markov models using evolutionary history.

    PubMed

    Knudsen, Bjarne; Miyamoto, Michael M

    2003-10-17

    This work presents a novel pairwise statistical alignment method based on an explicit evolutionary model of insertions and deletions (indels). Indel events of any length are possible according to a geometric distribution. The geometric distribution parameter, the indel rate, and the evolutionary time are all maximum likelihood estimated from the sequences being aligned. Probability calculations are done using a pair hidden Markov model (HMM) with transition probabilities calculated from the indel parameters. Equations for the transition probabilities make the pair HMM closely approximate the specified indel model. The method provides an optimal alignment, its likelihood, the likelihood of all possible alignments, and the reliability of individual alignment regions. Human alpha and beta-hemoglobin sequences are aligned, as an illustration of the potential utility of this pair HMM approach.

  11. Inferring phenomenological models of Markov processes from data

    NASA Astrophysics Data System (ADS)

    Rivera, Catalina; Nemenman, Ilya

    Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the Kinetic Proofreading process (KPR), a common mechanism used by cells for increasing specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description had been noticed previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.

  12. Data Stream Prediction Using Incremental Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Wakabayashi, Kei; Miura, Takao

    In this paper, we propose a new technique for time-series prediction. Here we assume that time-series data occur depending on an event which is not observed directly, and we estimate future data as the output from the most likely event which will happen at the time. In this investigation we model time series based on event sequences by using a Hidden Markov Model (HMM), and extract time-series patterns as trained HMM parameters. However, we cannot apply the HMM approach to data stream prediction in a straightforward manner. This is because the Baum-Welch algorithm, the traditional unsupervised HMM training method, requires storing large amounts of historical data and scanning them many times. Here we apply the incremental Baum-Welch algorithm, an on-line HMM training method, and estimate HMM parameters dynamically to adapt to new time-series patterns. We present experimental results demonstrating the validity of our method.

  13. Uncovering mental representations with Markov chain Monte Carlo.

    PubMed

    Sanborn, Adam N; Griffiths, Thomas L; Shiffrin, Richard M

    2010-03-01

    A key challenge for cognitive psychology is the investigation of mental representations, such as object categories, subjective probabilities, choice utilities, and memory traces. In many cases, these representations can be expressed as a non-negative function defined over a set of objects. We present a behavioral method for estimating these functions. Our approach uses people as components of a Markov chain Monte Carlo (MCMC) algorithm, a sophisticated sampling method originally developed in statistical physics. Experiments 1 and 2 verified the MCMC method by training participants on various category structures and then recovering those structures. Experiment 3 demonstrated that the MCMC method can be used to estimate the structures of the real-world animal shape categories of giraffes, horses, dogs, and cats. Experiment 4 combined the MCMC method with multidimensional scaling to demonstrate how different accounts of the structure of categories, such as prototype and exemplar models, can be tested, producing samples from the categories of apples, oranges, and grapes.
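
The underlying MCMC machinery can be sketched with a Metropolis sampler over a one-dimensional stimulus axis, where the acceptance step stands in for a person's choice between the current and proposed stimulus. The "category" score function below is an invented triangular membership function, not one of the paper's empirical categories.

```python
import random

random.seed(7)

def category_score(x):
    """Hypothetical unnormalized category membership: a triangular
    function on the integers 0..20, peaked at 10."""
    return max(0.01, 1.0 - abs(x - 10) / 10.0)

def mcmc_samples(n, burn=500):
    """Metropolis random walk whose stationary distribution is
    proportional to category_score."""
    x, out = 0, []
    for t in range(n + burn):
        prop = min(20, max(0, x + random.choice([-1, 1])))
        # In the behavioral method, this acceptance step is implemented
        # as a participant's two-alternative choice.
        if random.random() < category_score(prop) / category_score(x):
            x = prop
        if t >= burn:
            out.append(x)
    return out

s = mcmc_samples(20000)
print(sum(s) / len(s))   # sample mean settles near the category peak at 10
```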

  14. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian-based learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.

  15. Understanding eye movements in face recognition using hidden Markov models.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone.

  16. Reduction Of Sizes Of Semi-Markov Reliability Models

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Dan L.

    1995-01-01

    Trimming technique reduces computational effort by order of magnitude while introducing negligible error. Error bound depends on only three parameters from semi-Markov model: maximum sum of rates for failure transitions leaving any state, maximum average holding time for recovery-mode state, and operating time for system. Error bound computed before any model generated, enabling modeler to decide immediately whether or not model can be trimmed. Trimming procedure specified by precise and easy description, making it easy to include trimming procedure in program generating mathematical models for use in assessing reliability. Typical application of technique in design of digital control systems required to be extremely reliable. In addition to aerospace applications, fault-tolerant design has growing importance in wide range of industrial applications.

  17. A hidden Markov model for space-time precipitation

    SciTech Connect

    Zucchini, W.; Guttorp, P.

    1991-08-01

    Stochastic models for precipitation events in space and time over mesoscale spatial areas have important applications in hydrology, both as input to runoff models and as parts of general circulation models (GCMs) of global climate. A family of multivariate models for the occurrence/nonoccurrence of precipitation at N sites is constructed by assuming a different probability of events at the sites for each of a number of unobservable climate states. The climate process is assumed to follow a Markov chain. Simple formulae for first- and second-order parameter functions are derived, and used to find starting values for a numerical maximization of the likelihood. The method is illustrated by applying it to data for one site in Washington and to data for a network in the Great Plains.

  18. Markov chains and semi-Markov models in time-to-event analysis

    PubMed Central

    Abner, Erin L.; Charnigo, Richard J.; Kryscio, Richard J.

    2014-01-01

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields. PMID:24818062
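
The multi-state idea can be sketched with a discrete-time illness-death chain: matrix powers of the transition matrix give state-occupation probabilities, naturally handling the competing risk of death with or without prior illness. The monthly transition probabilities below are hypothetical.

```python
import numpy as np

# States: 0 = healthy, 1 = ill, 2 = dead (absorbing).
# Hypothetical monthly transition probabilities.
P = np.array([[0.97, 0.02, 0.01],
              [0.00, 0.94, 0.06],
              [0.00, 0.00, 1.00]])

occ = np.zeros((121, 3))
occ[0] = [1.0, 0.0, 0.0]        # cohort starts healthy
for t in range(120):            # ten years in monthly steps
    occ[t + 1] = occ[t] @ P     # state-occupation probabilities

print(occ[120].round(3))        # P(healthy), P(ill), P(dead) at 10 years
```

The second column at any time is the prevalence of illness among the cohort, something a standard survival curve alone does not provide.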

  19. Quantitative safety assessment of computer based I and C systems via modular Markov analysis

    SciTech Connect

    Elks, C. R.; Yu, Y.; Johnson, B. W.

    2006-07-01

    This paper gives a brief overview of the methodology, based on quantitative metrics, for evaluating digital I and C systems that has been under development at the Univ. of Virginia for a number of years. Our quantitative assessment methodology is based on three well understood and extensively practiced disciplines in the dependability assessment field: (1) system level fault modeling and fault injection, (2) safety and coverage based dependability modeling methods, and (3) statistical estimation of model parameters used for safety prediction. There are two contributions of this paper: the first is related to incorporating design flaw information into homogeneous Markov models when such data are available. The second is to introduce a Markov modeling method for managing the modeling complexities of large distributed I and C systems for the prediction of safety and reliability. The method is called Modular Markov Chain analysis. This method allows Markov models of the system to be composed in a modular manner. In doing so, it addresses two important issues: (1) the models are more visually representative of the functional structure of the system; (2) important failure dependencies that naturally occur in complex systems are modeled accurately with our approach. (authors)

  20. Inferring species interactions from co-occurrence data with Markov networks.

    PubMed

    Harris, David J

    2016-12-01

    Inferring species interactions from co-occurrence data is one of the most controversial tasks in community ecology. One difficulty is that a single pairwise interaction can ripple through an ecological network and produce surprising indirect consequences. For example, the negative correlation between two competing species can be reversed in the presence of a third species that outcompetes both of them. Here, I apply models from statistical physics, called Markov networks or Markov random fields, that can predict the direct and indirect consequences of any possible species interaction matrix. Interactions in these models can be estimated from observed co-occurrence rates via maximum likelihood, controlling for indirect effects. Using simulated landscapes with known interactions, I evaluated Markov networks and six existing approaches. Markov networks consistently outperformed the other methods, correctly isolating direct interactions between species pairs even when indirect interactions or abiotic factors largely overpowered them. Two computationally efficient approximations, which controlled for indirect effects with partial correlations or generalized linear models, also performed well. Null models showed no evidence of being able to control for indirect effects, and reliably yielded incorrect inferences when such effects were present.
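    The model class named above can be written down directly for a toy case: for a handful of species, the joint distribution of a pairwise Markov network can be computed by brute-force enumeration. The parameters below are invented for illustration and are not the paper's fitted values.

```python
import math, itertools

# P(x) proportional to exp(sum_i a[i]*x[i] + sum_{i<j} b[i][j]*x[i]*x[j]),
# x in {0,1}^3: presence/absence of each species at a site.
a = [0.5, 0.5, 1.0]
b = {(0, 1): 0.2, (0, 2): -1.5, (1, 2): -1.5}  # species 2 suppresses 0 and 1

def prob_table(a, b):
    """Exact joint distribution by enumerating all presence/absence vectors."""
    weights = {}
    for x in itertools.product([0, 1], repeat=len(a)):
        e = sum(a[i] * x[i] for i in range(len(a)))
        e += sum(v * x[i] * x[j] for (i, j), v in b.items())
        weights[x] = math.exp(e)
    Z = sum(weights.values())   # normalising constant (partition function)
    return {x: w / Z for x, w in weights.items()}

P = prob_table(a, b)
# marginal co-occurrence probability of species 0 and 1
c01 = sum(p for x, p in P.items() if x[0] == 1 and x[1] == 1)
```

    Maximum-likelihood fitting then amounts to adjusting a and b until the model's co-occurrence rates match the observed ones; enumeration like this is only feasible for small species pools, which is why the paper's approximations matter.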

  1. Treatment-based Markov chain models clarify mechanisms of invasion in an invaded grassland community

    PubMed Central

    Nelis, Lisa Castillo; Wootton, J. Timothy

    2010-01-01

    What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit (Oryctolagus cuniculus) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years. We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities. PMID:19864293
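    The transition-probability estimation step described above reduces, in its simplest maximum-likelihood form, to counting observed state changes between censuses. The state labels and plot sequences below are illustrative, not the Robinson Crusoe Island data.

```python
from collections import Counter

def estimate_transitions(sequences, states):
    """Maximum-likelihood transition matrix from observed state sequences."""
    counts = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
    P = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        P[s] = {t: counts[(s, t)] / row_total if row_total else 0.0
                for t in states}
    return P

# hypothetical yearly censuses of three permanent plots
plots = [["native", "native", "exotic"],
         ["native", "exotic", "exotic"],
         ["exotic", "exotic", "native"]]
P = estimate_transitions(plots, ["native", "exotic"])
```

    Fitting a separate matrix per experimental treatment, as in the study, then allows the estimated probabilities to be compared across treatments.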

  2. Dynamic Bandwidth Provisioning Using Markov Chain Based on RSVP

    DTIC Science & Technology

    2013-09-01

    QualNet, a simulation platform for the wireless environment, is used to simulate the algorithm (the integration of a Markov chain with RSVP). References cited include P. Bremaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation and Queues (Springer, New York, NY).

  3. Threshold partitioning of sparse matrices and applications to Markov chains

    SciTech Connect

    Choi, Hwajeong; Szyld, D.B.

    1996-12-31

    It is well known that the order of the variables and equations of a large, sparse linear system influences the performance of classical iterative methods. In particular if, after a symmetric permutation, the blocks in the diagonal have more nonzeros, classical block methods have a faster asymptotic rate of convergence. In this paper, different ordering and partitioning algorithms for sparse matrices are presented. They are modifications of PABLO. In the new algorithms, in addition to the location of the nonzeros, the values of the entries are taken into account. The matrix resulting after the symmetric permutation has dense blocks along the diagonal, and small entries in the off-diagonal blocks. Parameters can be easily adjusted to obtain, for example, denser blocks, or blocks with elements of larger magnitude. In particular, when the matrices represent Markov chains, the permuted matrices are well suited for block iterative methods that find the corresponding probability distribution. Applications to three types of methods are explored: (1) Classical block methods, such as Block Gauss Seidel. (2) Preconditioned GMRES, where a block diagonal preconditioner is used. (3) Iterative aggregation method (also called aggregation/disaggregation) where the partition obtained from the ordering algorithm with certain parameters is used as an aggregation scheme. In all three cases, experiments are presented which illustrate the performance of the methods with the new orderings. The complexity of the new algorithms is linear in the number of nonzeros and the order of the matrix, and thus adding little computational effort to the overall solution.

  4. Synthesis of three-dimensionally ordered macro-/mesoporous Pt with high electrocatalytic activity by a dual-templating approach

    NASA Astrophysics Data System (ADS)

    Zhang, Chengwei; Yang, Hui; Sun, Tingting; Shan, Nannan; Chen, Jianfeng; Xu, Lianbin; Yan, Yushan

    2014-01-01

    Three dimensionally ordered macro-/mesoporous (3DOM/m) Pt catalysts are fabricated by chemical reduction employing a dual-templating synthesis approach combining both colloidal crystal (opal) templating (hard-templating) and lyotropic liquid crystal templating (soft-templating) techniques. The macropore walls of the prepared 3DOM/m Pt exhibit a uniform mesoporous structure composed of polycrystalline Pt nanoparticles. Both the size of the mesopores and Pt nanocrystallites are in the range of 3-5 nm. The 3DOM/m Pt catalyst shows a larger electrochemically active surface area (ECSA), and higher catalytic activity as well as better poisoning tolerance for methanol oxidation reaction (MOR) than the commercial Pt black catalyst.

  5. First order reversal curves and intrinsic parameter determination for magnetic materials; limitations of hysteron-based approaches in correlated systems

    NASA Astrophysics Data System (ADS)

    Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy

    2017-03-01

    The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods, were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach.

  6. Second-order perturbative corrections to the restricted active space configuration interaction with the hole and particle approach

    SciTech Connect

    Casanova, David

    2014-04-14

    Second-order corrections to the restricted active space configuration interaction (RASCI) with the hole and particle truncation of the excitation operator are developed. Theoretically, the computational cost of the implemented perturbative approach, abbreviated as RASCI(2), grows like its single reference counterpart in MP2. Two different forms of RASCI(2) have been explored, namely the generalized Davidson-Kapuy and the Epstein-Nesbet partitions of the Hamiltonian. The preliminary results indicate that the use of an energy level shift of a few tenths of a Hartree might systematically improve the accuracy of the RASCI(2) energies. The method has been tested in the computation of the ground state energy profiles along the dissociation of the hydrogen fluoride and N2 molecules, the computation of correlation energy in the G2/97 molecular test set, and in the computation of excitation energies to low-lying states in small organic molecules.

  7. First order reversal curves and intrinsic parameter determination for magnetic materials; limitations of hysteron-based approaches in correlated systems

    PubMed Central

    Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy

    2017-01-01

    The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods, were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach. PMID:28338056

  8. Highly Ordered Porous Anodic Alumina with Large Diameter Pores Fabricated by an Improved Two-Step Anodization Approach.

    PubMed

    Li, Xiaohong; Ni, Siyu; Zhou, Xingping

    2015-02-01

    The aim of this study is to prepare highly ordered porous anodic alumina (PAA) with large pore sizes (> 200 nm) by an improved two-step anodization approach which combines a first hard anodization in an oxalic acid-water-ethanol system and a second mild anodization in a phosphoric acid-water-ethanol system. The surface morphology and elemental composition of PAA are characterized by field emission scanning electron microscopy (FESEM) and energy-dispersive X-ray spectrometry (EDS). The effect of the matching of the two anodizing voltages on the regularity of pore arrangement is evaluated and discussed. Moreover, the pore formation mechanism is also discussed. The results show that the nanopore arrays on all the PAA samples are in a highly regular arrangement and the pore size is adjustable in the range of 200-300 nm. EDS analysis suggests that the main elements of the as-prepared PAA are oxygen, aluminum and a small amount of phosphorus. Furthermore, the voltage in the first anodization must match well with that in the second anodization, which has a significant influence on the PAA regularity. The addition of ethanol to the electrolytes effectively accelerates the diffusion of the heat that evolves from the sample, and decreases the steady current to keep the steady growth of the PAA film. The improved two-step anodization approach in this study breaks through the restriction of small pore size in oxalic acid and overcomes the drawbacks of irregular pore morphology in phosphoric acid, and is an efficient way to fabricate large diameter ordered PAA.

  9. Composition of web services using Markov decision processes and dynamic programming.

    PubMed

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
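    A minimal sketch of policy iteration on a toy composition problem may help fix ideas. The two-step "workflow" with hypothetical services svcA/svcB and QoS rewards below is invented for illustration and is far smaller than the benchmarks reported above.

```python
# model[s][a] = list of (probability, next_state, reward); rewards stand
# in for QoS attributes of candidate services (all values hypothetical).
model = {
    0: {"svcA": [(1.0, 1, 5.0)], "svcB": [(1.0, 1, 2.0)]},
    1: {"svcA": [(1.0, 2, 1.0)], "svcB": [(1.0, 2, 4.0)]},
    2: {},  # terminal: composition complete
}
gamma = 0.9

def policy_iteration(model, gamma):
    states = [s for s in model if model[s]]          # non-terminal states
    policy = {s: next(iter(model[s])) for s in states}
    V = {s: 0.0 for s in model}
    while True:
        # policy evaluation: iterative sweeps under the current policy
        for _ in range(100):
            for s in states:
                V[s] = sum(p * (r + gamma * V[s2])
                           for p, s2, r in model[s][policy[s]])
        # greedy policy improvement
        stable = True
        for s in states:
            best = max(model[s], key=lambda a: sum(
                p * (r + gamma * V[s2]) for p, s2, r in model[s][a]))
            if best != policy[s]:
                policy[s] = best
                stable = False
        if stable:
            return policy, V

policy, V = policy_iteration(model, gamma)
```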

  10. Extracting duration information in a picture category decoding task using hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
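    The duration information discussed above comes from how long the decoded state path dwells in a state. A minimal Viterbi decoder for a toy two-state HMM (state names, observations, and probabilities all hypothetical) can be sketched as:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev, score = max(
                ((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1])
            V[t][s] = score * emit_p[s][obs[t]]
            back[t][s] = prev
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # backtrack
        path.append(back[t][path[-1]])
    return path[::-1]

states = ["on", "off"]
path = viterbi(["hi", "hi", "lo", "lo"], states,
               {"on": 0.5, "off": 0.5},
               {"on": {"on": 0.9, "off": 0.1},
                "off": {"on": 0.1, "off": 0.9}},
               {"on": {"hi": 0.9, "lo": 0.1},
                "off": {"hi": 0.1, "lo": 0.9}})
```

    The run length of each state in the returned path is the dwell-time quantity that, in the study, correlated with picture duration.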

  11. Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    PubMed Central

    Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael

    2015-01-01

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247

  12. Markov state model of the two-state behaviour of water

    NASA Astrophysics Data System (ADS)

    Hamm, Peter

    2016-10-01

    With the help of a Markov State Model (MSM), two-state behaviour is resolved for two computer models of water in a temperature range from 255 K to room temperature (295 K). The method is first validated for ST2 water, for which the strongest evidence so far for a liquid-liquid phase transition exists. In that case, the results from the MSM can be cross-checked against the radial distribution function g5(r) of the 5th-closest water molecule around a given reference water molecule. The latter is a commonly used local order parameter, which exhibits a bimodal distribution just above the liquid-liquid critical point that represents the low-density liquid (LDL) and the high-density liquid (HDL). The correlation times and correlation lengths of the corresponding spatial domains are calculated, and it is shown that they are connected via a simple diffusion model. Once the approach is established, TIP4P/2005 is considered, which is a much more realistic representation of real water. The MSM can resolve two-state behaviour also in that case, albeit with significantly smaller correlation times and lengths. The population of LDL-like water increases with decreasing temperature, thereby explaining the density maximum at 4 °C along the lines of the two-state model of water.

  13. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    NASA Astrophysics Data System (ADS)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust features (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to a SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.

  14. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data

    PubMed Central

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing. PMID:26710073

  15. Extracting duration information in a picture category decoding task using hidden Markov Models

    PubMed Central

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y; Schoenfeld, Mircea A; Knight, Robert T; Rose, Georg

    2016-01-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations. PMID:26859831

  16. Detection of unusual optical flow patterns by multilevel hidden Markov models

    NASA Astrophysics Data System (ADS)

    Utasi, Ákos; Czúni, László

    2010-01-01

    The analysis of motion information is one of the main tools for the understanding of complex behaviors in video. However, due to the quality of the optical flow of low-cost surveillance camera systems and the complexity of motion, new robust image-processing methods are required to generate reliable higher-level information. In our novel approach there is no need for tracking objects (vehicles, pedestrians) in order to recognize anomalous motion; instead, dense optical flow information is used to construct mixtures of Gaussians, which are analyzed temporally. We create a multilevel model, where low-level states of non-overlapping image regions are modeled by continuous hidden Markov models (HMMs). From low-level HMMs we compose high-level HMMs to analyze the occurrence of the low-level states. The processing of large amounts of data in traditional HMMs can result in a precision problem due to the multiplication of low probability values. Thus, besides introducing new motion models, we incorporate a scaling technique into the mathematical model of HMMs to avoid precision problems and to obtain an effective tool for the analysis of large numbers of motion vectors. We illustrate the use of our models with real-life traffic videos.
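    The scaling technique mentioned above is standard for HMMs: normalise the forward variables at every time step and accumulate the logarithms of the scale factors, so the likelihood of a long sequence never underflows. A sketch with an invented two-state HMM (all names and numbers illustrative):

```python
import math

def scaled_forward_loglik(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm with per-step normalisation (scaling)."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s]
                                           for r in states)
                     for s in states}
        c = sum(alpha.values())              # scale factor at step t
        alpha = {s: a / c for s, a in alpha.items()}
        loglik += math.log(c)                # accumulate in log space
    return loglik

states = ["wet", "dry"]
start_p = {"wet": 0.6, "dry": 0.4}
trans_p = {"wet": {"wet": 0.7, "dry": 0.3},
           "dry": {"wet": 0.4, "dry": 0.6}}
emit_p = {"wet": {"rain": 0.8, "sun": 0.2},
          "dry": {"rain": 0.1, "sun": 0.9}}
ll = scaled_forward_loglik(["rain", "sun", "rain"], states,
                           start_p, trans_p, emit_p)
```

    The sum of the log scale factors equals the log-likelihood of the whole sequence, while every intermediate quantity stays of order one.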

  17. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    PubMed

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.

  18. Generalised nonlinear l2-l∞ filtering of discrete-time Markov jump descriptor systems

    NASA Astrophysics Data System (ADS)

    Li, Lin; Zhong, Lei

    2014-03-01

    This paper is devoted to the l2-l∞ filter design problem for nonlinear discrete-time Markov jump descriptor systems subject to partially unknown transition probabilities. The partially unknown transition probabilities are modelled via polytopic uncertainties. The objective is to propose a generalised nonlinear full-order filter design method, such that the resulting filtering error system is regular, causal, and stochastically stable, and a prescribed l2-l∞ attenuation level is satisfied. For the autonomous discrete-time descriptor system subject to a Lipschitz nonlinear condition, by introducing some slack matrix variables, a mode-dependent stability criterion is established. It not only ensures the regularity, causality, and stochastic stability of the system, but also guarantees that the considered system has a unique solution. Based on this criterion, a sufficient condition in terms of linear matrix inequalities (LMIs) is derived, such that the resulting filtering error system is regular, causal, and stochastically stable while satisfying a given l2-l∞ performance index. Further, the nonlinear mode-dependent l2-l∞ filter design method is proposed, and by solving a set of LMIs, the desired filter gain matrices are also explicitly given. Finally, a numerical example is included to illustrate the effectiveness of our proposed approach.

  19. Algorithms for the Markov entropy decomposition

    NASA Astrophysics Data System (ADS)

    Ferris, Andrew J.; Poulin, David

    2013-05-01

    The Markov entropy decomposition (MED) is a recently proposed, cluster-based simulation method for finite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for performing the required steps of the MED, principally solving a minimization problem with a preconditioned Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of critical points and details of each phase. Although the method shares some qualitative similarities with exact diagonalization, we show that the MED is both more accurate and significantly more flexible.

  20. Spectral Design in Markov Random Fields

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; Thibault, Jean-Baptiste; Yu, Zhou; Sauer, Ken; Bouman, Charles

    2011-03-01

    Markov random fields (MRFs) have been shown to be a powerful and relatively compact stochastic model for imagery in the context of Bayesian estimation. The simplicity of their conventional embodiment implies local computation in iterative processes and relatively noncommittal statistical descriptions of image ensembles, resulting in stable estimators, particularly under models with strictly convex potential functions. This simplicity may be a liability, however, when the inherent bias of minimum mean-squared error or maximum a posteriori probability (MAP) estimators attenuate all but the lowest spatial frequencies. In this paper we explore generalization of MRFs by considering frequency-domain design of weighting coefficients which describe strengths of interconnections between clique members.

  1. Hybrid Discrete-Continuous Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich

    2003-01-01

    This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.

  2. Markov Chain Analysis of Musical Dice Games

    NASA Astrophysics Data System (ADS)

    Volchenkov, D.; Dawin, J. R.

    2012-07-01

    A system for using dice to compose music randomly is known as the musical dice game. The discrete time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
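    First passage times of the kind used above satisfy a simple linear system, m_i = 1 + sum over k != j of P_ik * m_k for target note j, which can be solved by fixed-point iteration. The 3-note transition matrix below is illustrative, not one estimated from MIDI data.

```python
def mean_first_passage(P, target, sweeps=500):
    """Mean first passage time to `target` from every state.

    m[target] itself converges to the mean recurrence (return) time.
    """
    n = len(P)
    m = [0.0] * n
    for _ in range(sweeps):
        m = [1.0 + sum(P[i][k] * m[k] for k in range(n) if k != target)
             for i in range(n)]
    return m

# illustrative 3-note transition matrix
P = [[0.2, 0.5, 0.3],
     [0.4, 0.2, 0.4],
     [0.3, 0.3, 0.4]]
m = mean_first_passage(P, 2)   # expected steps before note 2 is first played
```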

  3. Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.

    2015-12-01

    This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow an exponential distribution. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted to enhance the performance of the urea synthesis system of the fertilizer plant.
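
The solution scheme used here (first-order Chapman-Kolmogorov equations integrated with fourth-order Runge-Kutta) can be sketched for a single repairable subsystem; the failure and repair rates below are assumed for illustration, not the plant's fitted values:

```python
# Two-state birth-death model: dP/dt = P(t) Q with Q = [[-lam, lam], [mu, -mu]],
# integrated by a classical RK4 step. lam and mu are hypothetical rates.
lam, mu = 0.02, 0.5              # assumed failure / repair rates (per hour)

def deriv(p):
    p_up, p_down = p
    return (-lam * p_up + mu * p_down, lam * p_up - mu * p_down)

def rk4_step(p, h):
    k1 = deriv(p)
    k2 = deriv(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k1)))
    k3 = deriv(tuple(pi + 0.5 * h * ki for pi, ki in zip(p, k2)))
    k4 = deriv(tuple(pi + h * ki for pi, ki in zip(p, k3)))
    return tuple(pi + h / 6 * (a + 2 * b + 2 * c + d)
                 for pi, a, b, c, d in zip(p, k1, k2, k3, k4))

p = (1.0, 0.0)                   # start in the working state
for _ in range(10000):           # integrate to t = 1000 h with h = 0.1
    p = rk4_step(p, 0.1)

long_run_availability = p[0]     # approaches mu / (lam + mu)
```

For this single-subsystem case the long-run availability has the closed form mu / (lam + mu), which the numerical solution reproduces.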

  4. Assistive system for people with Apraxia using a Markov decision process.

    PubMed

    Jean-Baptiste, Emilie M D; Russell, Martin; Rothstein, Pia

    2014-01-01

    CogWatch is an assistive system to re-train stroke survivors suffering from Apraxia or Action Disorganization Syndrome (AADS) to complete activities of daily living (ADLs). This paper describes an approach to real-time planning based on a Markov decision process (MDP), and demonstrates its ability to improve task performance via user simulation. The paper concludes with a discussion of the remaining challenges and future enhancements.

  5. Markov Model of Accident Progression at Fukushima Daiichi

    SciTech Connect

    Cuadra A.; Bari R.; Cheng, L-Y; Ginsberg, T.; Lehner, J.; Martinez-Guridi, G.; Mubayi, V.; Pratt, T.; Yue, M.

    2012-11-11

    On March 11, 2011, a magnitude 9.0 earthquake followed by a tsunami caused loss of offsite power and disabled the emergency diesel generators, leading to a prolonged station blackout at the Fukushima Daiichi site. After successful reactor trip for all operating reactors, the inability to remove decay heat over an extended period led to boil-off of the water inventory and fuel uncovery in Units 1-3. A significant amount of metal-water reaction occurred, as evidenced by the quantities of hydrogen generated that led to hydrogen explosions in the auxiliary buildings of Units 1 and 3, and in the de-fuelled Unit 4. Although it was assumed that extensive fuel damage, including fuel melting, slumping, and relocation was likely to have occurred in the core of the affected reactors, the status of the fuel, vessel, and drywell was uncertain. To understand the possible evolution of the accident conditions at Fukushima Daiichi, a Markov model of the likely state of one of the reactors was constructed and executed under different assumptions regarding system performance and reliability. The Markov approach was selected for several reasons: It is a probabilistic model that provides flexibility in scenario construction and incorporates time dependence of different model states. It also readily allows for sensitivity and uncertainty analyses of different failure and repair rates of cooling systems. While the analysis was motivated by a need to gain insight on the course of events for the damaged units at Fukushima Daiichi, the work reported here provides a more general analytical basis for studying and evaluating severe accident evolution over extended periods of time. This work was performed at the request of the U.S. Department of Energy to explore 'what-if' scenarios in the immediate aftermath of the accidents.

  6. Markov Mixed Effects Modeling Using Electronic Adherence Monitoring Records Identifies Influential Covariates to HIV Preexposure Prophylaxis.

    PubMed

    Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W

    2016-12-06

    Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study with a total of 1147 subjects were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM® 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models better described the data than one-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors.

  7. Metagenomic Classification Using an Abstraction Augmented Markov Model

    PubMed Central

    Zhu, Xiujun (Sylvia)

    2016-01-01

    The abstraction augmented Markov model (AAMM) is an extension of a Markov model that can be used for the analysis of genetic sequences. It is developed using the frequencies of all possible consecutive words of the same length (p-mers). This article reviews the theory behind AAMM and applies it to metagenomic classification. PMID:26618474
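
The p-mer idea underlying the model can be sketched as a plain order-(p-1) Markov classifier (a hedged toy without the abstraction layer; the genomes and the read below are synthetic):

```python
# Score a DNA fragment under order-(p-1) Markov models trained on p-mer
# counts, and assign it to the higher-scoring source. Add-alpha smoothing
# handles p-mers unseen in training.
import math
from collections import Counter

def train(seq, p):
    kmers = Counter(seq[i:i + p] for i in range(len(seq) - p + 1))
    prefix = Counter(seq[i:i + p - 1] for i in range(len(seq) - p + 2))
    return kmers, prefix

def log_score(seq, model, p, alpha=1.0):
    kmers, prefix = model
    score = 0.0
    for i in range(len(seq) - p + 1):
        w = seq[i:i + p]
        # smoothed conditional probability over the 4-letter alphabet
        score += math.log((kmers[w] + alpha) / (prefix[w[:-1]] + 4 * alpha))
    return score

genome_a = "ATATATATATATATATATAT" * 5    # synthetic AT-rich source
genome_b = "GCGCGCGCGCGCGCGCGCGC" * 5    # synthetic GC-rich source
models = {"A": train(genome_a, 3), "B": train(genome_b, 3)}
read = "ATATATAT"
label = max(models, key=lambda k: log_score(read, models[k], 3))
```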

  8. Lifting—A nonreversible Markov chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Vucelja, Marija

    2016-12-01

    Markov chain Monte Carlo algorithms are invaluable tools for exploring stationary properties of physical systems, especially in situations where direct sampling is unfeasible. Common implementations of Monte Carlo algorithms employ reversible Markov chains. Reversible chains obey detailed balance and thus ensure that the system will eventually relax to equilibrium, though detailed balance is not necessary for convergence to equilibrium. We review nonreversible Markov chains, which violate detailed balance and yet still relax to a given target stationary distribution. In particular cases, nonreversible Markov chains are substantially better at sampling than the conventional reversible Markov chains, with up to a square-root improvement in the convergence time to the steady state. One kind of nonreversible Markov chain is constructed from the reversible ones by enlarging the state space and by modifying and adding extra transition rates to create nonreversible moves. Because of the augmentation of the state space, such chains are often referred to as lifted Markov chains. We illustrate the use of lifted Markov chains for efficient sampling on several examples. The examples include sampling on a ring, sampling on a torus, the Ising model on a complete graph, and the one-dimensional Ising model. We also provide a pseudocode implementation, review related work, and discuss the applicability of such methods.
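
The ring example mentioned above can be sketched as follows (a minimal illustration under assumed parameters, not the paper's pseudocode): the lifted state carries a direction that reverses only rarely, so moves are persistent and the chain violates detailed balance while keeping the uniform distribution on the ring stationary.

```python
# Lifted walk on a ring of n sites: state = (site, direction). The direction
# flips with small probability, giving persistent nonreversible motion.
import random

def lifted_ring_walk(n, steps, flip=None, seed=0):
    rng = random.Random(seed)
    flip = flip if flip is not None else 1.0 / n   # a common scaling choice
    site, direction = 0, 1
    visits = [0] * n
    for _ in range(steps):
        if rng.random() < flip:
            direction = -direction                  # rare direction reversal
        site = (site + direction) % n
        visits[site] += 1
    return visits

counts = lifted_ring_walk(n=10, steps=100000)       # roughly uniform visits
```

A reversible random walk would need O(n^2) steps to cross the ring; the lifted walk traverses it ballistically in O(n), which is the source of the square-root speedup on this example.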

  9. Limit measures for affine cellular automata on topological Markov subgroups

    NASA Astrophysics Data System (ADS)

    Maass, Alejandro; Martínez, Servet; Sobottka, Marcelo

    2006-09-01

    Consider a topological Markov subgroup which is p^s-torsion (with p prime) and an affine cellular automaton defined on it. We show that the Cesàro mean of the iterates, by the automaton, of a probability measure with complete connections and summable memory decay that is compatible with the topological Markov subgroup converges to the Haar measure.

  10. Protein family classification using sparse markov transducers.

    PubMed

    Eskin, Eleazar; Noble, William Stafford; Singer, Yoram

    2003-01-01

    We present a method for classifying proteins into families based on short subsequences of amino acids using a new probabilistic model called sparse Markov transducers (SMT). We classify a protein by estimating probability distributions over subsequences of amino acids from the protein. Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wild-cards in the conditioning sequences. Since substitutions of amino acids are common in protein families, incorporating wild-cards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. As protein databases become larger, data driven learning algorithms for probabilistic models such as SMTs will require vast amounts of memory. We therefore describe and use efficient data structures to improve the memory usage of SMTs. We evaluate SMTs by building protein family classifiers using the Pfam and SCOP databases and compare our results to previously published results and state-of-the-art protein homology detection methods. SMTs outperform previous probabilistic suffix tree methods and under certain conditions perform comparably to state-of-the-art protein homology methods.

  11. Unmixing hyperspectral images using Markov random fields

    SciTech Connect

    Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2011-03-14

    This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled using Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.

  12. Non-Markov effects in intersecting sprays

    NASA Astrophysics Data System (ADS)

    Panchagnula, Mahesh; Kumaran, Dhivyaraja; Deevi, Sri Vallabha; Tangirala, Arun

    2016-11-01

    Sprays have been assumed to follow a Markov process. In this study, we revisit that assumption relying on experimental data from intersecting and non-intersecting sprays. A phase Doppler Particle Analyzer (PDPA) is used to measure particle diameter and velocity at various axial locations in the intersection region of two sprays. Measurements of single sprays, with one nozzle turned off alternately, are also obtained at the same locations. This data, treated as an unstructured time series, is classified into three bins each for diameter (small, medium, large) and velocity (slow, medium, fast). Conditional probability analysis on this binned data showed a higher static correlation between droplet velocities, while the diameter correlation is significantly reduced in intersecting sprays compared to single sprays. Further analysis using serial correlation measures, the auto-correlation function (ACF) and the partial auto-correlation function (PACF), shows that the lagged correlations in droplet velocity are enhanced while those in droplet diameter are significantly weakened in intersecting sprays. We show that sprays are not necessarily Markov processes and that memory persists, curtailed to fewer lags in the case of size and enhanced in the case of droplet velocity.
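
The serial-correlation diagnostic used in the study can be sketched with a plain sample autocorrelation function (the binned series below is a hypothetical stand-in for the PDPA data):

```python
# Sample ACF of a discretised time series: r[k] measures how much memory
# persists at lag k; r[0] is 1 by construction.
def acf(x, max_lag):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    out = []
    for k in range(max_lag + 1):
        cov = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
        out.append(cov / var)
    return out

# Toy binned droplet-size series (values 0/1/2 = small/medium/large).
series = [0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 0, 1, 1, 2, 0]
r = acf(series, 5)
```

A Markov (memoryless beyond one step) series would show ACF values decaying geometrically; persistent correlation at higher lags is the kind of signature the authors use to question the Markov assumption.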

  13. Sunspots and ENSO relationship using Markov method

    NASA Astrophysics Data System (ADS)

    Hassan, Danish; Iqbal, Asif; Ahmad Hassan, Syed; Abbas, Shaheen; Ansari, Muhammad Rashid Kamal

    2016-01-01

    Various techniques have been used to establish the existence of significant relations between the number of Sunspots and different terrestrial climate parameters such as rainfall, temperature, dewdrops, aerosol and ENSO. Improved understanding and modelling of Sunspot variations can help to explore information about the related variables. This study uses a Markov chain method to find the relations between monthly Sunspot and ENSO data of two epochs (1996-2009 and 1950-2014). The corresponding transition matrices of both data sets appear similar, which is qualitatively supported by the high values of the 2-dimensional correlation found between the transition matrices of ENSO and Sunspots. The associated transition diagrams show that each state communicates with the others. The presence of stronger self-communication (between the same states) confirms periodic behaviour among the states. Moreover, the closeness found in the expected number of visits from one state to another shows the existence of a possible relation between the Sunspot and ENSO data. Perfect validation of the dependency and stationarity tests endorses the applicability of Markov chain analyses to Sunspot and ENSO data. This shows that a significant relation between Sunspot and ENSO data exists. This study can be useful for exploring the influence of ENSO-related local climatic variability.
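
The comparison step, building transition matrices from two discretised series and correlating them, can be sketched as follows (the state sequences are toy stand-ins for the classified Sunspot and ENSO data):

```python
# Build row-normalised transition matrices from two state sequences and
# compare them with a 2-D (Pearson) correlation over all entries.
import math

def transition_matrix(seq, n_states):
    m = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(seq, seq[1:]):
        m[a][b] += 1.0
    for row in m:
        s = sum(row)
        if s:
            for j in range(n_states):
                row[j] /= s
    return m

def corr2d(A, B):
    a = [x for row in A for x in row]
    b = [x for row in B for x in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

sunspot_states = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0]   # hypothetical discretised series
enso_states    = [0, 1, 2, 1, 0, 1, 2, 1, 1, 0]
r = corr2d(transition_matrix(sunspot_states, 3),
           transition_matrix(enso_states, 3))
```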

  14. Neyman, Markov processes and survival analysis.

    PubMed

    Yang, Grace

    2013-07-01

    J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.

  15. Analysis and comparison of CVS-ADC approaches up to third order for the calculation of core-excited states

    NASA Astrophysics Data System (ADS)

    Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas

    2015-06-01

    The extended second order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator in combination with core-valence separation (CVS) approximation is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third order approach CVS-ADC(3) are reported. Therefore, the CVS approximation has been applied to the ADC(3) working equations and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately from each other, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated with respect to a set of small and middle-sized organic molecules. Therefore, the results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete basis set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of cartesian basis functions and the treatment of orbital relaxation effects are analyzed in this work as well as computational timings. It turns out that at the CVS-ADC(3) level, the results are not further improved compared to CVS-ADC(2)-x and experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken. While CVS-ADC(3) overestimates the core excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an averaged underestimation of -0.22% ± 0.12%. Eventually, the best agreement with experiments can be achieved using the CVS-ADC(2)-x method in combination with a diffuse cartesian basis set at least at the triple-ζ level.

  16. Analysis and comparison of CVS-ADC approaches up to third order for the calculation of core-excited states.

    PubMed

    Wenzel, Jan; Holzer, Andre; Wormit, Michael; Dreuw, Andreas

    2015-06-07

    The extended second order algebraic-diagrammatic construction (ADC(2)-x) scheme for the polarization operator in combination with core-valence separation (CVS) approximation is well known to be a powerful quantum chemical method for the calculation of core-excited states and the description of X-ray absorption spectra. For the first time, the implementation and results of the third order approach CVS-ADC(3) are reported. Therefore, the CVS approximation has been applied to the ADC(3) working equations and the resulting terms have been implemented efficiently in the adcman program. By treating the α and β spins separately from each other, the unrestricted variant CVS-UADC(3) for the treatment of open-shell systems has been implemented as well. The performance and accuracy of the CVS-ADC(3) method are demonstrated with respect to a set of small and middle-sized organic molecules. Therefore, the results obtained at the CVS-ADC(3) level are compared with CVS-ADC(2)-x values as well as experimental data by calculating complete basis set limits. The influence of basis sets is further investigated by employing a large set of different basis sets. Besides the accuracy of core-excitation energies and oscillator strengths, the importance of cartesian basis functions and the treatment of orbital relaxation effects are analyzed in this work as well as computational timings. It turns out that at the CVS-ADC(3) level, the results are not further improved compared to CVS-ADC(2)-x and experimental data, because the fortuitous error compensation inherent in the CVS-ADC(2)-x approach is broken. While CVS-ADC(3) overestimates the core excitation energies on average by 0.61% ± 0.31%, CVS-ADC(2)-x provides an averaged underestimation of -0.22% ± 0.12%. Eventually, the best agreement with experiments can be achieved using the CVS-ADC(2)-x method in combination with a diffuse cartesian basis set at least at the triple-ζ level.

  17. An optimal control approach to probabilistic Boolean networks

    NASA Astrophysics Data System (ADS)

    Liu, Qiuli

    2012-12-01

    External control of some genes in a genetic regulatory network is useful for avoiding undesirable states associated with some diseases. For this purpose, a number of stochastic optimal control approaches have been proposed. Probabilistic Boolean networks (PBNs), as powerful tools for modeling gene regulatory systems, have attracted considerable attention in systems biology. In this paper, we deal with a problem of optimal intervention in a PBN with the help of the theory of discrete-time Markov decision processes. Specifically, we first formulate a control model for a PBN as a first passage model for discrete-time Markov decision processes and then find, using a value iteration algorithm, optimal effective treatments with the minimal expected first passage time over the space of all possible treatments. In order to demonstrate the feasibility of our approach, an example is also presented.
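
The first-passage formulation can be illustrated on a miniature two-action example (the transition probabilities are invented; a real PBN has a much larger state space): value iteration computes the minimal expected number of steps to reach the desirable state.

```python
# States 0 and 1 are undesirable; state 2 is the desirable target (absorbing).
# P[action][state] gives the next-state distribution under that action.
P = {
    "no_treatment": [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.0, 1.0]],
    "treatment":    [[0.5, 0.3, 0.2], [0.1, 0.5, 0.4], [0.0, 0.0, 1.0]],
}

def first_passage_values(P, target=2, iters=500):
    v = [0.0, 0.0, 0.0]              # v[s] = minimal expected steps to target
    for _ in range(iters):
        new = list(v)
        for s in range(3):
            if s == target:
                continue
            # Bellman update: one step plus the best expected continuation.
            new[s] = 1.0 + min(sum(p * v[t] for t, p in enumerate(P[a][s]))
                               for a in P)
        v = new
    return v

v = first_passage_values(P)          # here "treatment" is optimal in both states
```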

  18. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  19. Predictive glycoengineering of biosimilars using a Markov chain glycosylation model.

    PubMed

    Spahn, Philipp N; Hansen, Anders H; Kol, Stefan; Voldborg, Bjørn G; Lewis, Nathan E

    2017-02-01

    Biosimilar drugs must closely resemble the pharmacological attributes of innovator products to ensure safety and efficacy and to obtain regulatory approval. Glycosylation is one critical quality attribute that must be matched, but it is inherently difficult to control due to the complexity of its biogenesis. This usually implies that costly and time-consuming experimentation is required for clone identification and optimization of biosimilar glycosylation. Here, a computational method that utilizes a Markov model of glycosylation to predict optimal glycoengineering strategies to obtain a specific glycosylation profile with desired properties is described. The approach uses a genetic algorithm to find the perturbations to glycosylation reaction rates that lead to the best possible match with a given glycosylation profile. Furthermore, the approach can be used to identify cell lines and clones that will require minimal intervention while achieving a glycoprofile that is most similar to the desired profile. Thus, this approach can facilitate biosimilar design by providing computational glycoengineering guidelines that can be generated with minimal time and cost.
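
The genetic-algorithm search can be sketched on a drastically simplified stand-in for the glycosylation model (the "profile" here is just a normalised rate vector, and every number is invented for illustration):

```python
# A tiny GA: evolve reaction-rate vectors so the resulting (toy) profile
# matches a target profile. Selection keeps the fittest, children are
# averaged crossovers with small multiplicative mutation.
import random

rng = random.Random(7)
target = [0.2, 0.5, 0.3]                    # hypothetical desired glycoprofile

def profile(rates):
    s = sum(rates)
    return [r / s for r in rates]           # stand-in for the Markov model

def fitness(rates):
    return -sum((p - t) ** 2 for p, t in zip(profile(rates), target))

pop = [[rng.uniform(0.1, 2.0) for _ in range(3)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist selection
    children = []
    for _ in range(30):
        a, b = rng.sample(parents, 2)
        child = [(x + y) / 2 * rng.uniform(0.95, 1.05) for x, y in zip(a, b)]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)                # rates whose profile matches target
```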

  20. Markov chain Monte Carlo: an introduction for epidemiologists.

    PubMed

    Hamra, Ghassan; MacLehose, Richard; Richardson, David

    2013-04-01

    Markov chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit.
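
The article's central observation, that MCMC reproduces MLE results under a flat prior, can be illustrated with a toy random-walk Metropolis sampler (the data and tuning constants are made up, not taken from the article):

```python
# Estimate the mean of normal data (known sd = 2) two ways: the MLE (sample
# mean) and the posterior mean from a Metropolis sampler with a flat prior.
import math, random

rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(200)]
mle = sum(data) / len(data)

def log_lik(mu):
    return -sum((x - mu) ** 2 for x in data) / (2 * 2.0 ** 2)

mu, draws = 0.0, []
for i in range(20000):
    prop = mu + rng.gauss(0.0, 0.5)          # random-walk proposal
    if math.log(rng.random()) < log_lik(prop) - log_lik(mu):
        mu = prop                             # Metropolis accept
    if i >= 2000:                             # discard burn-in
        draws.append(mu)

posterior_mean = sum(draws) / len(draws)      # close to the MLE
```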

  1. Harnessing graphical structure in Markov chain Monte Carlo learning

    SciTech Connect

    Stolorz, P.E.; Chew P.C.

    1996-12-31

    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data mining problems. Generalized hidden Markov models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean-field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  2. Multidimensional Latent Markov Models in a Developmental Study of Inhibitory Control and Attentional Flexibility in Early Childhood

    ERIC Educational Resources Information Center

    Bartolucci, Francesco; Solis-Trapala, Ivonne L.

    2010-01-01

    We demonstrate the use of a multidimensional extension of the latent Markov model to analyse data from studies with repeated binary responses in developmental psychology. In particular, we consider an experiment based on a battery of tests which was administered to pre-school children, at three time periods, in order to measure their inhibitory…

  3. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    PubMed

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
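
The estimator's core idea can be sketched on a small discrete space, where the empirical time-t distribution from many replicate runs is compared with the known stationary distribution (a toy three-state chain, not the phylogenetic application):

```python
# Monte Carlo estimate of the total variation distance between a chain's
# time-t distribution (started from state 0) and its stationary distribution.
import random

P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]                 # a small reversible toy chain
stationary = [0.25, 0.5, 0.25]

def empirical_dist(t, reps, seed=0):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    for _ in range(reps):
        s = 0                         # every replicate starts in state 0
        for _ in range(t):
            u, acc = rng.random(), 0.0
            for nxt, p in enumerate(P[s]):
                acc += p
                if u < acc:
                    s = nxt
                    break
        counts[s] += 1
    return [c / reps for c in counts]

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

d1 = tv(empirical_dist(1, 20000), stationary)    # far from stationarity
d20 = tv(empirical_dist(20, 20000), stationary)  # essentially mixed
```

For large state spaces such as phylogenetic tree space, the exact time-t distribution is unavailable, which is why the paper replaces it with replicate-based estimates (and accelerates them on GPUs).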

  4. Preserving spatial linear correlations between neighboring stations in simulating daily precipitation using extended Markov models

    NASA Astrophysics Data System (ADS)

    Ababaei, Behnam; Sohrabi, Teymour; Mirzaei, Farhad

    2014-10-01

    Most stochastic weather generators focus on precipitation because it is the most important variable affecting environmental processes. One of the methods to reproduce the precipitation occurrence time series is to use a Markov process. But, in addition to simulating short-term autocorrelations at one station, it is sometimes important to preserve the spatial linear correlations (SLC) between neighboring stations as well. In this research, an extension of one-site Markov models was proposed to preserve the SLC between neighboring stations. Qazvin station was utilized as the reference station and Takestan (TK), Magsal, Nirougah, and Taleghan stations were used as the target stations. The performances of different models were assessed in relation to the simulation of dry and wet spells and short-term dependencies in precipitation time series. The results revealed that in TK station, a Markov model with a first-order spatial model could be selected as the best model, while in the other stations, a model with an order of two or three could be selected. The selected (i.e., best) models were assessed in relation to preserving the SLC between neighboring stations. The results showed that these models were very capable of preserving the SLC between the reference station and any of the target stations. But their performances were weaker when the SLC between the other stations were compared. In order to resolve this issue, spatially correlated random numbers were utilized instead of independent random numbers while generating synthetic time series using the Markov models. Although this method slightly reduced the model performances in relation to dry and wet spells and short-term dependencies, the improvements related to the simulation of the SLC between the other stations were substantial.
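
The paper's key device, driving one-site Markov occurrence models with spatially correlated rather than independent random numbers, can be sketched with a Gaussian copula for two stations (the transition probabilities and the correlation are illustrative values, not the fitted ones):

```python
# Two first-order wet/dry Markov chains driven by correlated uniforms: the
# latent normals z1, z2 share correlation rho, so wet days co-occur across
# the two hypothetical stations while each station keeps its own dynamics.
import math, random

p_wet = {0: 0.25, 1: 0.65}           # P(wet today | yesterday dry/wet), assumed
rho = 0.8                            # assumed inter-station correlation

def phi(z):                          # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate(n_days, seed=0):
    rng = random.Random(seed)
    s1 = s2 = 0
    out1, out2 = [], []
    for _ in range(n_days):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        s1 = 1 if phi(z1) < p_wet[s1] else 0   # correlated uniform via copula
        s2 = 1 if phi(z2) < p_wet[s2] else 0
        out1.append(s1)
        out2.append(s2)
    return out1, out2

a, b = simulate(50000)
```

Each station's marginal chain is unchanged (its stationary wet fraction is 0.25 / 0.6 ≈ 0.42 here), while the shared latent noise induces the desired positive cross-station correlation.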

  5. Accelerating Information Retrieval from Profile Hidden Markov Model Databases

    PubMed Central

    Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases. PMID:27875548

  6. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    PubMed

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

    Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to representing protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is increasing interest in improving the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than by focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities, and a representative was assigned to each cluster. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in search time of 41% and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
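
    The two-stage cluster search described in the abstract can be sketched as follows. This is a hypothetical stand-in: here "profiles" are plain vectors and the score function is a dot product rather than a real profile-HMM alignment, but the cluster-then-refine retrieval logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in database: each "profile" is a random vector; the score of a
# query against a profile is a dot product (hypothetical; the paper uses
# real profile-HMM alignment scores).
profiles = rng.normal(size=(400, 16))

def kmeans(X, k, iters=30):
    # Minimal k-means to group similar profiles.
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

labels, centers = kmeans(profiles, k=20)

def cluster_search(query, top_clusters=3):
    # Stage 1: score the query against the cluster representatives only.
    best = np.argsort(centers @ query)[::-1][:top_clusters]
    # Stage 2: full scoring restricted to members of the selected clusters.
    cand = np.flatnonzero(np.isin(labels, best))
    if cand.size == 0:          # fall back to a full scan (empty clusters)
        cand = np.arange(len(profiles))
    return int(cand[np.argmax(profiles[cand] @ query)])

query = rng.normal(size=16)
hit = cluster_search(query)
```

    Only the representatives are scored in stage 1, so the expensive full scoring runs over a fraction of the database.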

  7. Markov and non-Markov processes in complex systems by the dynamical information entropy

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

    We consider the Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation) have been discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-term numeral and pattern human memory and the effect of stress on the dynamical tapping test); the random dynamics of RR intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and the chaotic dynamics of the parameters of financial markets and ecological systems.

  8. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.

  9. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
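
    The core numerical object behind a CT-HMM is the transition probability matrix P(t) = exp(Qt), evaluated separately for each irregular inter-visit interval. A minimal sketch (the 3-state rate matrix Q is invented for illustration; the paper's models have over 100 states):

```python
import numpy as np

# Invented CT-HMM-style rate matrix Q over 3 disease states; rows sum to
# zero, and the last state is absorbing (e.g. an end stage).
Q = np.array([[-0.3,  0.2, 0.1],
              [0.05, -0.15, 0.1],
              [0.0,   0.0,  0.0]])

def expm(A, terms=40):
    # Taylor-series matrix exponential (adequate for small, well-scaled Qt;
    # production code would use scipy.linalg.expm).
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Observations arriving irregularly in time: each visit gap gets its own
# transition probability matrix P(t) = exp(Q t).
P_half_year = expm(Q * 0.5)
P_two_years = expm(Q * 2.0)
```

    By the semigroup property, exp(Q·2) equals exp(Q·0.5) applied four times, which is what lets the model stitch together visits at arbitrary spacings.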

  10. Is anoxic depolarisation associated with an ADC threshold? A Markov chain Monte Carlo analysis.

    PubMed

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2005-12-01

    A Bayesian nonlinear hierarchical random coefficients model was used in a reanalysis of a previously published longitudinal study of the extracellular direct current (DC)-potential and apparent diffusion coefficient (ADC) responses to focal ischaemia. The main purpose was to examine the data for evidence of an ADC threshold for anoxic depolarisation. A Markov chain Monte Carlo simulation approach was adopted. The Metropolis algorithm was used to generate three parallel Markov chains and thus obtain a sampled posterior probability distribution for each of the DC-potential and ADC model parameters, together with a number of derived parameters. The latter were used in a subsequent threshold analysis. The analysis provided no evidence indicating a consistent and reproducible ADC threshold for anoxic depolarisation.
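
    A minimal sketch of the sampling scheme described above: the Metropolis algorithm run as three parallel chains from dispersed starting points, pooled after burn-in. The standard normal log-density here is a stand-in for the posterior of a single model parameter; the actual hierarchical DC-potential/ADC model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior: standard normal log-density for one parameter.
def log_post(x):
    return -0.5 * x * x

def metropolis(n, start, step=1.0):
    x, out = start, []
    for _ in range(n):
        prop = x + step * rng.normal()
        # Metropolis accept/reject on the log scale
        if np.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        out.append(x)
    return np.array(out)

# Three parallel chains from dispersed starting points, as in the paper.
chains = [metropolis(6000, s) for s in (-5.0, 0.0, 5.0)]
samples = np.concatenate([c[1000:] for c in chains])   # drop burn-in
```

    The pooled samples approximate the posterior, from which derived quantities (such as threshold estimates) can be computed.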

  11. Modeling and computing of stock index forecasting based on neural network and Markov chain.

    PubMed

    Dai, Yonghui; Han, Dongmei; Dai, Weihui

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been a great deal of research on stock index forecasting. However, traditional methods are limited in achieving ideal precision in a dynamic market due to the influence of many factors, such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecasting method combining an improved back-propagation (BP) neural network with a Markov chain, together with its modeling and computing technology. The method comprises initial forecasting by the improved BP neural network, division of the Markov state region, computation of the state transition probability matrix, and prediction adjustment. Results of the empirical study show that this method achieves high accuracy in stock index prediction and could provide a good reference for investment in the stock market.
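
    The Markov-chain correction step can be sketched independently of the neural network: divide the relative forecast error into state bands, estimate the state transition probability matrix, and shift the next forecast by the expected error under the predicted state distribution. The series below are synthetic stand-ins for the index and the BP network's initial forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: "forecast" plays the role of the improved BP
# neural network's initial prediction of the index level.
actual = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 200))
forecast = actual + rng.normal(0.0, 2.0, 200)

# Divide the relative forecast error into three Markov state bands.
rel_err = (actual - forecast) / actual
edges = np.quantile(rel_err, [1 / 3, 2 / 3])
states = np.digitize(rel_err, edges)              # state index 0, 1, or 2

# State transition probability matrix estimated from the state sequence.
P = np.zeros((3, 3))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Prediction adjustment: shift the newest forecast by the expected
# relative error under the predicted next-state distribution.
band_mean = np.array([rel_err[states == s].mean() for s in range(3)])
adjusted = forecast[-1] * (1.0 + P[states[-1]] @ band_mean)
```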

  12. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo

    PubMed Central

    Golightly, Andrew; Wilkinson, Darren J.

    2011-01-01

    Computational systems biology is concerned with the development of detailed mechanistic models of biological processes. Such models are often stochastic and analytically intractable, containing uncertain parameters that must be estimated from time course data. In this article, we consider the task of inferring the parameters of a stochastic kinetic model defined as a Markov (jump) process. Inference for the parameters of complex nonlinear multivariate stochastic process models is a challenging problem, but we find here that algorithms based on particle Markov chain Monte Carlo turn out to be a very effective computationally intensive approach to the problem. Approximations to the inferential model based on stochastic differential equations (SDEs) are considered, as well as improvements to the inference scheme that exploit the SDE structure. We apply the methodology to a Lotka–Volterra system and a prokaryotic auto-regulatory network. PMID:23226583

  13. Finite-time H∞ synchronization for complex networks with semi-Markov jump topology

    NASA Astrophysics Data System (ADS)

    Shen, Hao; Park, Ju H.; Wu, Zheng-Guang; Zhang, Zhengqiang

    2015-07-01

    This paper investigates the problem of finite-time H∞ synchronization for complex networks with time-varying delays and semi-Markov jump topology. The network topologies are assumed to switch from one to another at different instants. Such switching is governed by a semi-Markov process that is time-varying and dependent on the sojourn time h. Attention is focused on proposing synchronization criteria guaranteeing that the underlying network is stochastically finite-time H∞ synchronized. By using the properties of the Kronecker product combined with the Lyapunov-Krasovskii method, the solutions to the finite-time H∞ synchronization problem are formulated in the form of low-dimensional linear matrix inequalities. Finally, a numerical example is given to demonstrate the effectiveness of our proposed approach.

  14. Markov Chain analysis of turbiditic facies and flow dynamics (Magura Zone, Outer Western Carpathians, NW Slovakia)

    NASA Astrophysics Data System (ADS)

    Staňová, Sidónia; Soták, Ján; Hudec, Norbert

    2009-08-01

    Methods based on Markov chains can easily be applied to evaluate ordering in sedimentary sequences. In this contribution, Markov chain analysis was applied to a turbiditic formation of the Outer Western Carpathians in NW Slovakia, though the approach also has broader utility in the interpretation of sedimentary sequences from other depositional environments. Non-random facies transitions were determined in the investigated strata and compared to standard deep-water facies models to provide statistical evidence for the sedimentological interpretation of depositional processes. As a result, six genetic facies types, interpreted in terms of depositional processes, were identified. They comprise deposits of density flows, turbidity flows, and suspension fallout, as well as units which resulted from syn- or post-depositional deformation.
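
    The counting step behind such an analysis is compact enough to sketch. Using an invented facies sequence, build the upward transition count matrix and compare the observed transition probabilities with those expected under random ordering; positive differences flag non-random (preferred) transitions.

```python
import numpy as np

# Invented facies sequence; letters stand for genetic facies types
# (the real input is the logged succession of turbidite beds).
seq = list("ABCABCABDABCADBCABCABC")
facies = sorted(set(seq))
idx = {f: i for i, f in enumerate(facies)}
n = len(facies)

# Upward transition count matrix.
C = np.zeros((n, n))
for a, b in zip(seq[:-1], seq[1:]):
    C[idx[a], idx[b]] += 1

# Observed transition probabilities versus those expected if facies
# followed each other at random (in proportion to their abundance).
P_obs = C / C.sum(axis=1, keepdims=True)
p_rand = C.sum(axis=0) / C.sum()
D = P_obs - p_rand   # positive entries flag preferred (non-random) transitions
```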

  15. Identification of observer/Kalman filter Markov parameters: Theory and experiments

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.

    1991-01-01

    An algorithm to compute Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used for identification of a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes. The relationship between these other algorithms is developed. The new matrix formulation here gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise, and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.
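
    The first step of such an algorithm can be sketched as an ordinary least-squares problem: regress the output on the current input and a window of lagged inputs and outputs, and read the observer Markov parameters off the ARX coefficients. The SISO system below is an invented stand-in for the experimental truss data; the identification step sees only u and y.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented SISO system y_k = 0.8 y_{k-1} + u_{k-1} + noise.
T = 2000
u = rng.normal(size=T)
y = np.zeros(T)
for k in range(1, T):
    y[k] = 0.8 * y[k - 1] + u[k - 1] + 0.1 * rng.normal()

# Least-squares solve of the ARX form
#   y_k = D u_k + sum_{i=1..p} Ybar_i [u_{k-i}; y_{k-i}],
# whose coefficients are interpreted as observer Markov parameters.
p = 4
rows = []
for k in range(p, T):
    reg = [u[k]]
    for i in range(1, p + 1):
        reg += [u[k - i], y[k - i]]
    rows.append(reg)
V = np.array(rows)
theta, *_ = np.linalg.lstsq(V, y[p:], rcond=None)

D = theta[0]            # direct-feedthrough term
markov_u = theta[1::2]  # input-related observer Markov parameters
markov_y = theta[2::2]  # output-related observer Markov parameters
```

    For this toy system the fit recovers the true coefficients (D near 0, first input parameter near 1, first output parameter near 0.8).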

  16. Identification of observer/Kalman filter Markov parameters - Theory and experiments

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh; Horta, Lucas G.; Longman, Richard W.

    1991-01-01

    An algorithm to compute Markov parameters of an observer or Kalman filter from experimental input and output data is discussed. The Markov parameters can then be used for identification of a state space representation, with associated Kalman gain or observer gain, for the purpose of controller design. The algorithm is a non-recursive matrix version of two recursive algorithms developed in previous works for different purposes. The relationship between these other algorithms is developed. The new matrix formulation here gives insight into the existence and uniqueness of solutions of certain equations and gives bounds on the proper choice of observer order. It is shown that if one uses data containing noise, and seeks the fastest possible deterministic observer, the deadbeat observer, one instead obtains the Kalman filter, which is the fastest possible observer in the stochastic environment. Results are demonstrated in numerical studies and in experiments on a ten-bay truss structure.

  17. Transition-Independent Decentralized Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)

    2003-01-01

    There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to optimally solve a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.

  18. Probabilistic Resilience in Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi

    2016-05-01

    Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotic systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two important new features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for the evaluation of probabilities associated with generic properties, describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.

  19. Markov state models and molecular alchemy

    NASA Astrophysics Data System (ADS)

    Schütte, Christof; Nielsen, Adam; Weber, Marcus

    2015-01-01

    In recent years, Markov state models (MSMs) have attracted a considerable amount of attention with regard to modelling conformation changes and the associated function of biomolecular systems. They have been used successfully, e.g. for peptides, including time-resolved spectroscopic experiments, protein function and protein folding, DNA and RNA, and ligand-receptor interaction in drug design and more complicated multivalent scenarios. In this article, a novel reweighting scheme is introduced that allows one to construct an MSM for a certain molecular system out of an MSM for a similar system. This permits studying how molecular properties on long timescales differ between similar molecular systems without performing full molecular dynamics simulations for each system under consideration. The performance of the reweighting scheme is illustrated for simple test cases, including one where the main wells of the respective energy landscapes are located differently, and an alchemical transformation of butane to pentane where the dimension of the state space is changed.

  20. Multivariate Markov chain modeling for stock markets

    NASA Astrophysics Data System (ADS)

    Maskawa, Jun-ichi

    2003-06-01

    We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean-field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion of small portfolios consisting of two stock issues. The generalization of our model to a portfolio of arbitrary size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
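
    The coding step is easy to sketch: price changes become up/down spins via their signs, and synchronization is the probability that two issues carry the same spin. The correlated return series below are synthetic stand-ins for two stock issues.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic correlated return series standing in for two stock issues.
common = rng.normal(size=1000)
r1 = common + 0.5 * rng.normal(size=1000)
r2 = common + 0.5 * rng.normal(size=1000)

# Code the price changes into up/down spins according to their signs.
s1, s2 = np.sign(r1), np.sign(r2)

# Empirical joint probability of each spin configuration.
probs = {(a, b): float(np.mean((s1 == a) & (s2 == b)))
         for a in (-1, 1) for b in (-1, 1)}

# Synchronization: probability that the two issues move the same way.
sync = probs[(1, 1)] + probs[(-1, -1)]
```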

  1. Estimation and uncertainty of reversible Markov models

    NASA Astrophysics Data System (ADS)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-01

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
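
    A common closed-form starting point for reversible estimation (not the full maximum likelihood or Bayesian schemes implemented in PyEMMA) is to symmetrize the transition counts and row-normalize; detailed balance then holds by construction:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic discrete trajectory over 4 states (stand-in for a discretized
# molecular dynamics simulation).
traj = rng.integers(0, 4, size=5000)

# Lag-1 transition count matrix.
C = np.zeros((4, 4))
for a, b in zip(traj[:-1], traj[1:]):
    C[a, b] += 1

# Simple reversible estimate: symmetrize the counts, then row-normalize.
S = C + C.T
P = S / S.sum(axis=1, keepdims=True)
pi = S.sum(axis=1) / S.sum()     # stationary vector of the estimate

# Detailed balance pi_i P_ij = pi_j P_ji holds by construction.
M = pi[:, None] * P
db = np.abs(M - M.T).max()
```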

  2. Efficient inference of hidden Markov models from large observation sequences

    NASA Astrophysics Data System (ADS)

    Priest, Benjamin W.; Cybenko, George

    2016-05-01

    The hidden Markov model (HMM) is widely used to model time series data. However, the conventional Baum-Welch algorithm is known to perform poorly when applied to long observation sequences. The literature contains several alternatives that seek to improve the memory or time complexity of the algorithm. However, for an HMM with N states and an observation sequence of length T, these alternatives require at best O(N) space and O(N²T) time. Given the preponderance of applications that increasingly deal with massive amounts of data, an alternative whose time is O(T)+poly(N) is desired. Recent research presents an alternative to the Baum-Welch algorithm that relies on nonnegative matrix factorization. This document examines the space complexity of this alternative approach and proposes further optimizations using approaches adopted from the matrix sketching literature. The result is a streaming algorithm whose space complexity is constant and whose time complexity is linear in the size of the observation sequence. The paper also presents a batch algorithm that allows for even further improved space complexity at the expense of an additional pass over the observation sequence.

  3. Differential evolution Markov chain with snooker updater and fewer chains

    SciTech Connect

    Vrugt, Jasper A; Ter Braak, Cajo J F

    2008-01-01

    Differential Evolution Markov Chain (DE-MC) is an adaptive MCMC algorithm in which multiple chains are run in parallel. Standard DE-MC requires at least N=2d chains to be run in parallel, where d is the dimensionality of the posterior. This paper extends DE-MC with a snooker updater and shows by simulation and real examples that DE-MC can work for d up to 50-100 with fewer parallel chains (e.g. N=3) by exploiting information from their past, generating jumps from differences of pairs of past states. This approach extends the practical applicability of DE-MC and is shown to be about 5-26 times more efficient than the optimal normal random walk Metropolis sampler for the 97.5% point of a variable from a 25-50 dimensional Student t3 distribution. In a nonlinear mixed-effects model example the approach outperformed a block updater geared to the specific features of the model.
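
    The mechanism that allows so few chains can be sketched as follows: each chain keeps its full history, and proposals are generated from differences of pairs of past states scaled by the standard DE-MC factor 2.38/sqrt(2d). The standard-normal target and all tuning values here are illustrative, and the snooker update itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

d, N = 5, 3                     # dimension, number of parallel chains
gamma = 2.38 / np.sqrt(2 * d)   # standard DE-MC jump scale

def log_post(x):                # stand-in target: standard normal posterior
    return -0.5 * float(x @ x)

# Each chain keeps its full history so jumps can be generated from
# differences of *past* states -- the device that lets N be as small as 3.
chains = [[5.0 * rng.normal(size=d)] for _ in range(N)]

for _ in range(3000):
    for i in range(N):
        # Draw two past states from the pooled histories.
        j1, j2 = rng.integers(0, N, size=2)
        x1 = chains[j1][rng.integers(0, len(chains[j1]))]
        x2 = chains[j2][rng.integers(0, len(chains[j2]))]
        cur = chains[i][-1]
        prop = cur + gamma * (x1 - x2) + 1e-4 * rng.normal(size=d)
        if np.log(rng.random()) < log_post(prop) - log_post(cur):
            chains[i].append(prop)
        else:
            chains[i].append(cur)

samples = np.array([x for c in chains for x in c[2000:]])
```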

  4. Goal Management in Organizations: A Markov Decision Process (MDP) Approach

    DTIC Science & Technology

    2005-01-01

    ...model by introducing novel problem domain-specific heuristic evaluation functions (HEFs) to aid the search process. We employ the optimal AO* search and two...algorithm, and greedy heuristics. Novel problem domain-based heuristic evaluation functions (HEFs) are introduced and evidence of their admissibility is...

  5. Forecasting Tehran stock exchange volatility; Markov switching GARCH approach

    NASA Astrophysics Data System (ADS)

    Abounoori, Esmaiel; Elmi, Zahra (Mila); Nademi, Younes

    2016-03-01

    This paper evaluates several GARCH models regarding their ability to forecast volatility on the Tehran Stock Exchange (TSE). These include GARCH models with both Gaussian and fat-tailed residual conditional distributions, assessed on their ability to describe and forecast volatility at horizons from 1 day to 22 days. Results indicate that the AR(2)-MRSGARCH-GED model outperforms the other models at the one-day horizon. The AR(2)-MRSGARCH-GED and AR(2)-MRSGARCH-t models also outperform the others at the 5-day horizon, and three AR(2)-MRSGARCH models outperform the rest at the 10-day horizon. At the 22-day forecast horizon, results indicate no difference between MRSGARCH models and standard GARCH models. Regarding out-of-sample risk management evaluation (95% VaR), a few models provide reasonable and accurate VaR estimates at the 1-day horizon, with a coverage rate close to the nominal level. According to the risk management loss functions, there is no uniformly most accurate model.

  6. Potential of Entropic Force in Markov Systems with Nonequilibrium Steady State, Generalized Gibbs Function and Criticality

    SciTech Connect

    Thompson, Lowell; Qian, Hong

    2016-08-01

    In this paper we revisit the notion of the “minus logarithm of stationary probability” as a generalized potential in nonequilibrium systems and attempt to illustrate its central role in an axiomatic approach to stochastic nonequilibrium thermodynamics of complex systems. It is demonstrated that this quantity arises naturally both through monotonicity results for Markov processes and as the rate function when a stochastic process approaches a deterministic limit. We then undertake a more detailed mathematical analysis of the consequences of this quantity, culminating in a necessary and sufficient condition for the criticality of stochastic systems. This condition is then discussed in the context of recent results about criticality in biological systems.

  7. Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty

    PubMed Central

    Green, P. L.; Worden, K.

    2015-01-01

    In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. It is then described how, through a summary of some key algorithms, many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study, where an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories. PMID:26303916

  8. Independent component feature-based human activity recognition via Linear Discriminant Analysis and Hidden Markov Model.

    PubMed

    Uddin, Md; Lee, J J; Kim, T S

    2008-01-01

    In proactive computing, human activity recognition from image sequences is an active research area. This paper presents a novel approach to human activity recognition based on Linear Discriminant Analysis (LDA) of Independent Component (IC) features derived from shape information. With the extracted features, a Hidden Markov Model (HMM) is applied for training and recognition. The recognition performance using LDA of IC features has been compared to other approaches, including Principal Component Analysis (PCA), LDA of PC features, and ICA. The preliminary results show a much improved recognition rate with our proposed method.

  9. The Acquisition of Neg-V and V-Neg Order in Embedded Clauses in Swedish: A Microparametric Approach

    ERIC Educational Resources Information Center

    Waldmann, Christian

    2014-01-01

    This article examines the acquisition of embedded verb placement in Swedish children, focusing on Neg-V and V-Neg order. It is proposed that a principle of economy of movement creates an overuse of V-Neg order in embedded clauses and that the low frequency of the target-consistent Neg-V order in child-directed speech obstructs children from…

  10. Universal recovery map for approximate Markov chains

    PubMed Central

    Sutter, David; Fawzi, Omar; Renner, Renato

    2016-01-01

    A central question in quantum information theory is to determine how well lost information can be reconstructed. Crucially, the corresponding recovery operation should perform well without knowing the information to be reconstructed. In this work, we show that the quantum conditional mutual information measures the performance of such recovery operations. More precisely, we prove that the conditional mutual information I(A:C|B) of a tripartite quantum state ρABC can be bounded from below by its distance to the closest recovered state RB→BC(ρAB), where the C-part is reconstructed from the B-part only and the recovery map RB→BC merely depends on ρBC. One particular application of this result implies the equivalence between two different approaches to define topological order in quantum systems. PMID:27118889
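
    The classical analogue of this statement is easy to verify numerically: for a distribution p(a,b,c) that forms a Markov chain A → B → C, the conditional mutual information I(A:C|B) vanishes, which is exactly the case where the C-part can be rebuilt from the B-part alone. (The quantum result concerns density operators; this sketch only illustrates the classical limit.)

```python
import numpy as np

rng = np.random.default_rng(7)

# Random classical Markov chain A -> B -> C over binary variables.
pA = np.array([0.3, 0.7])
pBgA = rng.dirichlet(np.ones(2), size=2)        # rows: p(b|a)
pCgB = rng.dirichlet(np.ones(2), size=2)        # rows: p(c|b)
p = np.einsum('a,ab,bc->abc', pA, pBgA, pCgB)   # joint p(a,b,c)

def cmi(p):
    """Conditional mutual information I(A:C|B) in nats."""
    pab, pbc, pb = p.sum(2), p.sum(0), p.sum((0, 2))
    val = 0.0
    for a in range(2):
        for b in range(2):
            for c in range(2):
                if p[a, b, c] > 0:
                    val += p[a, b, c] * np.log(
                        p[a, b, c] * pb[b] / (pab[a, b] * pbc[b, c]))
    return val
```

    For any such chain, cmi(p) is zero up to floating-point rounding, matching the lower bound in the quantum statement being saturated by a perfect recovery.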

  11. Markov Task Network: A Framework for Service Composition under Uncertainty in Cyber-Physical Systems.

    PubMed

    Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter

    2016-09-21

    In novel collaborative systems, cooperative entities collaborate services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the need for the dynamic formation of collaborative functionality given high-level system goals has become practical. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. This models the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. In dealing with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both computational and inferential loads in task decomposition. From the results of our experiments, high-precision service composition under uncertainty can be achieved using this approach.

  12. Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions

    SciTech Connect

    Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G. E-mail: gerhard.hummer@biophys.mpg.de; Hummer, Gerhard E-mail: gerhard.hummer@biophys.mpg.de

    2014-09-21

    Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.
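
    The basic diffusion-map construction used in the first step is compact enough to sketch: a Gaussian kernel on pairwise distances, row-normalized into a Markov transition matrix on the data points, whose leading nontrivial eigenvectors supply the low-dimensional coordinates. Two synthetic point clouds stand in for MD conformations here.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy data: two noisy blobs in 3D (stand-ins for MD conformations).
X = np.vstack([rng.normal(0.0, 0.3, (50, 3)),
               rng.normal(2.0, 0.3, (50, 3))])

# Gaussian kernel on pairwise squared distances; row normalization turns
# it into a Markov transition matrix over the data points.
eps = 1.0
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2 / eps)
P = K / K.sum(axis=1, keepdims=True)

# Leading nontrivial eigenvectors give the diffusion-map coordinates.
w, v = np.linalg.eig(P)
order = np.argsort(-w.real)
psi = v[:, order].real
coord = psi[:, 1]     # first nontrivial coordinate separates the blobs
```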

  13. Markov Task Network: A Framework for Service Composition under Uncertainty in Cyber-Physical Systems

    PubMed Central

    Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter

    2016-01-01

    In novel collaborative systems, cooperative entities collaborate services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the need for the dynamic formation of collaborative functionality given high-level system goals has become practical. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. This models the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. In dealing with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both computational and inferential loads in task decomposition. From the results of our experiments, high-precision service composition under uncertainty can be achieved using this approach. PMID:27657084

  14. Simulation of world oil market shocks: a Markov analysis of OPEC and consumer behavior

    SciTech Connect

    Kosobud, R.F.; Stokes, H.H.

    1980-04-01

    A simulation model is developed which analyzes and estimates equilibrium market shares and explores their implications for agreement patterns among producers and consumers. Using a first-order Markov transition-matrix analysis of transactions in the world oil market, shocks to market shares comparable to the Iranian revolution are simulated with agreed-upon quota changes among consuming nations. A partition of the Markov matrix can be used to represent potential conflicts of interest on both sides of the market. The Markov analysis is shown to provide not only shares but also absolute price and quantity. If competition prevails in the world oil market, the oil reserves of the Organization of Petroleum Exporting Countries (OPEC) members would make up the bulk of immediately available and least-cost energy resources. The world economy in this situation would achieve the most-rational use of energy resources by using oil reserves until rising prices bring about the transition to the next major energy resource. 70 references, 8 tables.
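    The first-order Markov share analysis described above reduces, in the long run, to finding the stationary distribution of the transition matrix. A minimal sketch with an invented three-supplier matrix (illustrative numbers, not the paper's estimates):

```python
import numpy as np

# Hypothetical transition matrix: entry P[i, j] is the probability
# that a buyer of supplier i switches to supplier j next period.
P = np.array([[0.90, 0.07, 0.03],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])

# Long-run market shares are the stationary distribution pi with
# pi P = pi; power iteration from uniform shares converges to it.
pi = np.full(3, 1.0 / 3.0)
for _ in range(1000):
    pi = pi @ P
```

    A shock such as the one simulated in the paper amounts to perturbing rows of the matrix and re-solving for the new equilibrium shares.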

  15. STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning

    PubMed Central

    Kappel, David; Nessler, Bernhard; Maass, Wolfgang

    2014-01-01

    In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task. PMID:24675787

  16. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    PubMed

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, are causing about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.

  17. NonMarkov Ito processes with 1-state memory

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2010-08-01

    A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also nonMarkov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
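    For reference, the Chapman-Kolmogorov equation at issue above states only that transition densities compose over an intermediate time,

```latex
p(x_3, t_3 \mid x_1, t_1) \;=\; \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2 ,
\qquad t_1 < t_2 < t_3 ,
```

    and, as Doob and Feller observed, this composition rule does not by itself force the transition density to be independent of the history before (x_1, t_1); that stronger requirement is the Markov condition.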

  18. Asteroid mass estimation using Markov-Chain Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Siltala, Lauri; Granvik, Mikael

    2016-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.

  19. Markov chain Monte Carlo methods: an introductory example

    NASA Astrophysics Data System (ADS)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms is currently hindered by the difficulty of assessing the convergence of MCMC output, and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
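    The "few lines of software code" mentioned above look roughly like this: a generic random-walk Metropolis-Hastings sampler targeting a standard normal density (an illustrative stand-in, not the paper's metrology example):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2),
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, specified up to a normalizing constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
```

    The step size trades off acceptance rate against mixing speed; tuning it for efficiency is exactly the kind of calibration the diagnostics discussed above address.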

  20. Bayesian Hidden Markov Modeling of Array CGH Data.

    PubMed

    Guha, Subharup; Li, Yi; Neuberg, Donna

    2008-06-01

    Genomic alterations have been linked to the development and progression of cancer. The technique of comparative genomic hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about the number of copies in DNA. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in the number of copies based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Because the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme, and breast cancer are analyzed, and comparisons are made with some widely used algorithms to illustrate the reliability and success of the technique.

  1. Grasp Recognition by Fuzzy Modeling and Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Palm, Rainer; Iliev, Boyko; Kadmiry, Bourhane

    Grasp recognition is a major part of the approach for Programming-by-Demonstration (PbD) for five-fingered robotic hands. This chapter describes three different methods for grasp recognition for a human hand. A human operator wearing a data glove instructs the robot to perform different grasps. For a number of human grasps, the finger joint angle trajectories are recorded and modeled by fuzzy clustering and Takagi-Sugeno modeling. This leads to grasp models using time as input parameter and joint angles as outputs. Given a test grasp by the human operator, the robot classifies and recognizes the grasp and generates the corresponding robot grasp. Three methods for grasp recognition are compared with each other. In the first method, the test grasp is compared with model grasps using the difference between the model outputs. The second method deals with qualitative fuzzy models, which are used for recognition and classification. The third method is based on Hidden Markov Models (HMMs), which are commonly used in robot learning.

  2. A comparison of weighted ensemble and Markov state model methodologies

    NASA Astrophysics Data System (ADS)

    Feng, Haoyun; Costaouec, Ronan; Darve, Eric; Izaguirre, Jesús A.

    2015-06-01

    Computation of reaction rates and elucidation of reaction mechanisms are two of the main goals of molecular dynamics (MD) and related simulation methods. Since it is time-consuming to study reaction mechanisms over long time scales using brute force MD simulations, two ensemble methods, Markov State Models (MSMs) and Weighted Ensemble (WE), have been proposed to accelerate the procedure. Both approaches require clustering of microscopic configurations into networks of "macro-states" for different purposes. MSMs model a discretization of the original dynamics on the macro-states. The accuracy of the model relies significantly on the boundaries of the macro-states. On the other hand, WE uses macro-states to formulate a resampling procedure that kills and splits MD simulations for achieving better efficiency of sampling. Compared to MSMs, the accuracy of WE rate predictions is less sensitive to the definition of macro-states. Rigorous numerical experiments using alanine dipeptide and penta-alanine support our analyses. It is shown that MSMs introduce significant biases in the computation of reaction rates, which depend on the boundaries of macro-states, and Accelerated Weighted Ensemble (AWE), a formulation of weighted ensemble that uses the notion of colors to compute fluxes, has reliable flux estimation on varying definitions of macro-states. Our results suggest that whereas MSMs provide a good idea of the metastable sets and visualization of overall dynamics, AWE provides reliable rate estimations requiring less effort in defining macro-states in the high-dimensional conformational space.

  3. Optical character recognition of handwritten Arabic using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
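    The Viterbi step used above, recovering the most probable hidden state sequence, is generic and can be sketched as follows (a toy two-state model, not the paper's Arabic character HMM):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden state path for an observation sequence,
    given initial probs pi, transition matrix A, emission matrix B.
    Works in log space to avoid numerical underflow."""
    n_states, T = A.shape[0], len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # delta at t = 0
    back = np.zeros((T, n_states), dtype=int)  # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # prev state -> cur state
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Trace the best path back from the most probable final state.
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
states = viterbi([0, 0, 1, 1], pi, A, B)
```

    In an OCR setting such as the one above, the hidden states correspond to sub-word units and the observations to extracted character features.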

  4. Optical character recognition of handwritten Arabic using hidden Markov models

    SciTech Connect

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M

    2011-01-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.

  5. A new constant memory recursion for hidden Markov models.

    PubMed

    Bartolucci, Francesco; Pandolfi, Silvia

    2014-02-01

    We develop the recursion for hidden Markov (HM) models proposed by Bartolucci and Besag (2002), and we show how it may be used to implement an estimation algorithm for these models that requires an amount of memory not depending on the length of the observed series of data. This recursion allows us to obtain the conditional distribution of the latent state at every occasion, given the previous state and the observed data. With respect to the estimation algorithm based on the well-known Baum-Welch recursions, which requires an amount of memory that increases with the sample size, the proposed algorithm also has the advantage of not requiring dummy renormalizations to avoid numerical problems. Moreover, it directly allows us to perform global decoding of the latent sequence of states, without the need of a Viterbi method and with a consistent reduction of the memory requirement with respect to the latter. The proposed approach is compared, in terms of computing time and memory requirement, with the algorithm based on the Baum-Welch recursions and with the so-called linear memory algorithm of Churbanov and Winters-Hilt. The comparison is also based on a series of simulations involving an HM model for continuous time-series data.
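    For context, the scaled forward recursion underlying Baum-Welch, including the per-step renormalization that the proposed recursion avoids, can be sketched as follows (a toy model, not the authors' algorithm):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward recursion for an HMM: returns the log-likelihood
    of the observation sequence. The rescaling at each step is the
    'dummy renormalization' needed to avoid numerical underflow."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
ll = forward_loglik([0, 1, 0], pi, A, B)
```

    Note that this pass already runs in memory independent of sequence length; it is the backward pass of Baum-Welch that normally stores quantities for every occasion, which is what the proposed recursion dispenses with.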

  6. Continuous myoelectric control for powered prostheses using hidden Markov models.

    PubMed

    Chan, Adrian D C; Englehart, Kevin B

    2005-01-01

    This paper represents an ongoing investigation of dexterous and natural control of upper extremity prostheses using the myoelectric signal. The scheme described within uses a hidden Markov model (HMM) to process four channels of myoelectric signal, with the task of discriminating six classes of limb movement. The HMM-based approach is shown to be capable of higher classification accuracy than previous methods based upon multilayer perceptrons. The method does not require segmentation of the myoelectric signal data, allowing a continuous stream of class decisions to be delivered to a prosthetic device. Due to the fact that the classifier learns the muscle activation patterns for each desired class for each individual, a natural control actuation results. The continuous decision stream allows complex sequences of manipulation involving multiple joints to be performed without interruption. The computational complexity of the HMM in its operational mode is low, making it suitable for a real-time implementation. The low computational overhead associated with training the HMM also enables the possibility of adaptive classifier training while in use.

  7. Markov chain analysis of succession in a rocky subtidal community.

    PubMed

    Hill, M Forrest; Witman, Jon D; Caswell, Hal

    2004-08-01

    We present a Markov chain model of succession in a rocky subtidal community based on a long-term (1986-1994) study of subtidal invertebrates (14 species) at Ammen Rock Pinnacle in the Gulf of Maine. The model describes successional processes (disturbance, colonization, species persistence, and replacement), the equilibrium (stationary) community, and the rate of convergence. We described successional dynamics by species turnover rates, recurrence times, and the entropy of the transition matrix. We used perturbation analysis to quantify the response of diversity to successional rates and species removals. The equilibrium community was dominated by an encrusting sponge (Hymedesmia) and a bryozoan (Crisia eburnea). The equilibrium structure explained 98% of the variance in observed species frequencies. Dominant species have low probabilities of disturbance and high rates of colonization and persistence. On average, species turn over every 3.4 years. Recurrence times varied among species (7-268 years); rare species had the longest recurrence times. The community converged to equilibrium quickly (9.5 years), as measured by Dobrushin's coefficient of ergodicity. The largest changes in evenness would result from removal of the dominant sponge Hymedesmia. Subdominant species appear to increase evenness by slowing the dominance of Hymedesmia. Comparison of the subtidal community with intertidal and coral reef communities revealed that disturbance rates are an order of magnitude higher in coral reef than in rocky intertidal and subtidal communities. Colonization rates and turnover times, however, are lowest and longest in coral reefs, highest and shortest in intertidal communities, and intermediate in subtidal communities.
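    The equilibrium (stationary) community and Dobrushin's coefficient of ergodicity used above are generic quantities of any finite Markov chain; a small sketch with a made-up three-state transition matrix (not the Ammen Rock data):

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi with pi P = pi (rows of P sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

def dobrushin(P):
    """Dobrushin's ergodicity coefficient: half the maximum total
    variation distance between any two rows of P. Values below 1
    bound the geometric rate of convergence to equilibrium."""
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(n) for j in range(n))

# Invented 3-state transition matrix (rows: from-state).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])
pi = stationary(P)
```

    The perturbation analysis in the study asks how pi shifts as individual entries of P (disturbance, colonization, persistence rates) are varied.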

  8. Recursive recovery of Markov transition probabilities from boundary value data

    SciTech Connect

    Patch, Sarah Kathyrn

    1994-04-01

    In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.

  9. Markov source model for printed music decoding

    NASA Astrophysics Data System (ADS)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  10. Manpower planning using Markov Chain model

    NASA Astrophysics Data System (ADS)

    Saad, Syafawati Ab; Adnan, Farah Adibah; Ibrahim, Haslinda; Rahim, Rahela

    2014-07-01

    Manpower planning is a modelling approach for understanding the flow of manpower under changing policies. To this end, numerous attempts have been made by researchers to develop models that trace the movements of lecturers at various universities. Because a university employs a large number of lecturers, tracking their movements is difficult, and no quantitative method has previously been used to do so. This research aims to determine an appropriate manpower model for understanding the flow of lecturers at a university in Malaysia by determining the probability that lecturers remain in the same status rank and the mean time they do so. In addition, it estimates the number of lecturers in each status rank (lecturer, senior lecturer and associate professor). Of the several methods applied to manpower planning in previous studies, the method adopted here is the Markov Chain model. Results indicate that the manpower planning model is validated by comparison with the actual data: a smaller margin of error means the projection is closer to the actual data. These results offer the university some guidance in planning the hiring of lecturers and its future budget.
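    The probability of remaining in a rank and the mean time spent there follow directly from the transition matrix: a per-year stay probability p implies a geometric sojourn time with mean 1/(1-p). A sketch with invented promotion probabilities (not the study's estimates):

```python
import numpy as np

# Hypothetical annual transition matrix over the ranks
# (lecturer, senior lecturer, associate professor); the top
# rank is treated as absorbing in this toy example.
P = np.array([[0.85, 0.15, 0.00],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

# Mean number of years spent in each non-absorbing rank before
# promotion: a geometric sojourn with mean 1 / (1 - P[i, i]).
mean_years = [1.0 / (1.0 - P[i, i]) for i in range(2)]
```

    Iterating an initial headcount vector through P then projects the number of lecturers expected in each rank in future years, which is the kind of estimate the study reports.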

  11. Hidden Markov models in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMM's designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.

  12. Metabolic flux distribution analysis by 13C-tracer experiments using the Markov chain-Monte Carlo method.

    PubMed

    Yang, J; Wongsa, S; Kadirkamanathan, V; Billings, S A; Wright, P C

    2005-12-01

    Metabolic flux analysis using 13C-tracer experiments is an important tool in metabolic engineering since intracellular fluxes are non-measurable quantities in vivo. Current metabolic flux analysis approaches are fully based on stoichiometric constraints and carbon atom balances, where the over-determined system is iteratively solved by a parameter estimation approach. However, the unavoidable measurement noises involved in the fractional enrichment data obtained by 13C-enrichment experiment and the possible existence of unknown pathways prevent a simple parameter estimation method for intracellular flux quantification. The MCMC (Markov chain-Monte Carlo) method, which obtains intracellular flux distributions through delicately constructed Markov chains, is shown to be an effective approach for deep understanding of the intracellular metabolic network. Its application is illustrated through the simulation of an example metabolic network.

  13. Multi-stream continuous hidden Markov models with application to landmine detection

    NASA Astrophysics Data System (ADS)

    Missaoui, Oualid; Frigui, Hichem; Gader, Paul

    2013-12-01

    We propose a multi-stream continuous hidden Markov model (MSCHMM) framework that can learn from multiple modalities. We assume that the feature space is partitioned into subspaces generated by different sources of information. In order to fuse the different modalities, the proposed MSCHMM introduces stream relevance weights. First, we modify the probability density function (pdf) that characterizes the standard continuous HMM to include state- and component-dependent stream relevance weights. The resulting pdf is approximated by a linear combination of pdfs characterizing the multiple modalities. Second, we formulate the CHMM objective function to allow for the simultaneous optimization of all model parameters including the relevance weights. Third, we generalize the maximum likelihood based Baum-Welch algorithm and the minimum classification error/gradient probabilistic descent (MCE/GPD) learning algorithms to include stream relevance weights. We propose two versions of the MSCHMM. The first one introduces the relevance weights at the state level while the second one introduces the weights at the component level. We illustrate the performance of the proposed MSCHMM structures using synthetic data sets. We also apply them to the problem of landmine detection using ground penetrating radar. We show that when the multiple sources of information are equally relevant across all training data, the performance of the proposed MSCHMM is comparable to the baseline CHMM. However, when the relevance of the sources varies, the MSCHMM outperforms the baseline CHMM because it can learn the optimal relevance weights. We also show that our approach outperforms existing multi-stream HMMs because they cannot optimize all model parameters simultaneously.

  14. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Considerable research effort has been dedicated to view based 3-D object retrieval, because 3-D objects are highly discriminative and admit a multi-view representation. State-of-the-art methods depend heavily on particular camera array settings for capturing views of the 3-D object and use complex Zernike descriptors and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. An efficient and effective algorithm for 3-D object retrieval is therefore required. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without camera array restrictions. Views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained on these view clusters, and retrieval operates by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capture and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods. [Figure not available: see fulltext.]

  15. Understanding agent-based models of financial markets: A bottom-up approach based on order parameters and phase diagrams

    NASA Astrophysics Data System (ADS)

    Lye, Ribin; Tan, James Peng Lung; Cheong, Siew Ann

    2012-11-01

    We describe a bottom-up framework, based on the identification of appropriate order parameters and determination of phase diagrams, for understanding progressively refined agent-based models and simulations of financial markets. We illustrate this framework by starting with a deterministic toy model, whereby N independent traders buy and sell M stocks through an order book that acts as a clearing house. The price of a stock increases whenever it is bought and decreases whenever it is sold. Price changes are updated by the order book before the next transaction takes place. In this deterministic model, all traders base their buy decisions on a call utility function, and all their sell decisions on a put utility function. We then make the agent-based model more realistic, by either having a fraction fb of traders buy a random stock on offer, or a fraction fs of traders sell a random stock in their portfolio. Based on our simulations, we find that it is possible to identify useful order parameters from the steady-state price distributions of all three models. Using these order parameters as a guide, we find three phases in the phase diagram of the deterministic model: (i) the dead market; (ii) the boom market; and (iii) the jammed market. Comparing the phase diagrams of the stochastic models against that of the deterministic model, we realize that the primary effect of stochasticity is to eliminate the dead market phase.

  16. Quantum Mechanical Molecular Interactions for Calculating the Excitation Energy in Molecular Environments: A First-Order Interacting Space Approach

    PubMed Central

    Hasegawa, Jun-ya; Yanai, Kazuma; Ishimura, Kazuya

    2015-01-01

    Intermolecular interactions regulate the molecular properties in proteins and solutions such as solvatochromic systems. Some of the interactions have to be described at an electronic-structure level. In this study, a commutator for calculating the excitation energy is used for deriving a first-order interacting space (FOIS) to describe the environmental response to solute excitation. The FOIS wave function for a solute-in-solvent cluster is solved by second-order perturbation theory. The contributions to the excitation energy are decomposed into each interaction and for each solvent. PMID:25393373

  17. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    NASA Astrophysics Data System (ADS)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Summary: Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. Domestic rainwater harvesting is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, making it relevant to development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
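
    The selected model class, a first-order Markov occurrence process paired with a mixed-exponential amounts distribution, can be sketched as follows. All parameter values here are hypothetical for illustration, not fitted West African values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (illustration only, not fitted to observed data)
p_wet_given_dry = 0.25   # P(wet day | previous day dry)
p_wet_given_wet = 0.60   # P(wet day | previous day wet)
# Mixed exponential amounts: with prob alpha draw mean mu1, else mean mu2 (mm)
alpha, mu1, mu2 = 0.7, 4.0, 20.0

def simulate_rainfall(n_days):
    """Daily rainfall: first-order Markov occurrence + mixed-exp amounts."""
    wet = False
    amounts = np.zeros(n_days)
    for t in range(n_days):
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p_wet
        if wet:
            mu = mu1 if rng.random() < alpha else mu2
            amounts[t] = rng.exponential(mu)
    return amounts

series = simulate_rainfall(365)
```

    Conditioning the occurrence probabilities on month would give the seasonal variants the study also considers.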

  18. (Re)Acting Medicine: Applying Theatre in Order to Develop a Whole-Systems Approach to Understanding the Healing Response

    ERIC Educational Resources Information Center

    Goldingay, S.; Dieppe, P.; Mangan, M.; Marsden, D.

    2014-01-01

    This critical reflection is based on the belief that creative practitioners should be using their own well-established approaches to trouble dominant paradigms in health and care provision to both form and inform the future of healing provision and well-being creation. It describes work by a transdisciplinary team (drama and medicine) that is…

  19. A patterned and un-patterned minefield detection in cluttered environments using Markov marked point process

    NASA Astrophysics Data System (ADS)

    Trang, Anh; Agarwal, Sanjeev; Regalia, Phillip; Broach, Thomas; Smith, Thomas

    2007-04-01

    A typical minefield detection approach is based on sequential processing employing mine detection and false alarm rejection, followed by minefield detection. The current approach does not work robustly under different backgrounds and environmental conditions, because the target signature changes with time and performance degrades in the presence of a high density of false alarms. The aim of this research is to advance the state of the art in the detection of both patterned and unpatterned minefields in high-clutter environments. The proposed method seeks to combine the false alarm rejection and minefield detection modules of the current architecture into a spatial-spectral clustering and inference module using a Markov Marked Point Process (MMPP) formulation. The approach simultaneously exploits the feature characteristics of the target signature and the spatial distribution of the targets in the interrogation region. The method is based on the premise that most minefields can be characterized by some type of distinctive spatial distribution of "similar"-looking mine targets. The minefield detection problem is formulated as an MMPP in which the set of possible mine targets is divided into a possibly overlapping mixture of targets. The likelihood of the minefield depends simultaneously on the feature characteristics of the targets and their spatial distribution. A framework using belief propagation is developed to solve the minefield inference problem based on the MMPP. Preliminary investigation using simulated data shows the efficacy of the approach.

  20. Machine remaining useful life prediction: An integrated adaptive neuro-fuzzy and high-order particle filtering approach

    NASA Astrophysics Data System (ADS)

    Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.

    2012-04-01

    Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
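
    The p-step-ahead RUL estimation idea can be sketched as below. The paper trains an ANFIS as the fault-growth model; here a hypothetical parametric growth rule `f` stands in for it, and the threshold and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    """Hypothetical fault-growth model (the paper trains an ANFIS here)."""
    return x * 1.02 + 0.01

def predict_rul(particles, threshold, max_steps=500, noise=0.05):
    """Propagate particles step by step until the fault indicator crosses
    the failure threshold; return one RUL sample per particle."""
    ruls = np.full(len(particles), max_steps, dtype=float)
    x = particles.copy()
    alive = np.ones(len(x), dtype=bool)
    for step in range(1, max_steps + 1):
        x[alive] = f(x[alive]) + rng.normal(scale=noise, size=alive.sum())
        crossed = alive & (x >= threshold)
        ruls[crossed] = step
        alive &= ~crossed
        if not alive.any():
            break
    return ruls

particles = rng.normal(1.0, 0.05, size=2000)   # current-state estimate
rul_samples = predict_rul(particles, threshold=2.0)
```

    The empirical distribution of `rul_samples` then approximates the RUL pdf that the abstract refers to.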

  1. Cool walking: a new Markov chain Monte Carlo sampling method.

    PubMed

    Brown, Scott; Head-Gordon, Teresa

    2003-01-15

    Effective relaxation processes for difficult systems like proteins or spin glasses require special simulation techniques that permit barrier crossing to ensure ergodic sampling. Numerous adaptations of the venerable Metropolis Monte Carlo (MMC) algorithm have been proposed to improve its sampling efficiency, including various hybrid Monte Carlo (HMC) schemes, and methods designed specifically for overcoming quasi-ergodicity problems such as Jump Walking (J-Walking), Smart Walking (S-Walking), Smart Darting, and Parallel Tempering. We present an alternative to these approaches that we call Cool Walking, or C-Walking. In C-Walking two Markov chains are propagated in tandem, one at a high (ergodic) temperature and the other at a low temperature. Nonlocal trial moves for the low-temperature walker are generated by first sampling from the high-temperature distribution, then performing a statistical quenching process on the sampled configuration to generate a C-Walking jump move. C-Walking needs only one high-temperature walker, satisfies detailed balance, and offers the important practical advantage that the high- and low-temperature walkers can be run in tandem with minimal degradation of sampling due to the presence of correlations. To make the C-Walking approach more suitable for real problems we decrease the required number of cooling steps by attempting to jump at intermediate temperatures during cooling. We further reduce the number of cooling steps by utilizing "windows" of states when jumping, which improves acceptance ratios and lowers the average number of cooling steps. We present C-Walking results with comparisons to J-Walking, S-Walking, Smart Darting, and Parallel Tempering on a one-dimensional rugged potential energy surface in which the exact normalized probability distribution is known. C-Walking shows superior sampling as judged by two ergodic measures.
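
    A heavily simplified sketch of the tandem two-temperature idea on a 1-D rugged potential. It omits essentials of the published method, notably the corrected acceptance rule that preserves exact detailed balance and the "windows" of states; the potential and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x):
    """1-D rugged potential: a shallow well plus oscillations."""
    return 0.1 * x**2 + np.sin(3 * x)

def metropolis_step(x, T, step=0.5):
    """One local Metropolis move at temperature T."""
    prop = x + rng.normal(scale=step)
    if rng.random() < np.exp(-(U(prop) - U(x)) / T):
        return prop
    return x

T_hot, T_cold = 5.0, 0.5
x_hot, x_cold = 0.0, 0.0
samples = []
for i in range(5000):
    x_hot = metropolis_step(x_hot, T_hot)
    x_cold = metropolis_step(x_cold, T_cold)
    if i % 50 == 0:                      # occasional nonlocal jump move
        y = x_hot
        for _ in range(20):              # "quench": cool the hot configuration
            y = metropolis_step(y, T_cold)
        # plain Metropolis acceptance at the cold temperature; the published
        # method uses a corrected ratio to keep exact detailed balance
        if rng.random() < np.exp(-(U(y) - U(x_cold)) / T_cold):
            x_cold = y
    samples.append(x_cold)
```

    The jump moves let the cold walker cross barriers it would rarely cross with local moves alone.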

  2. First-order exchange coefficient coupling for simulating surface water-groundwater interactions: Parameter sensitivity and consistency with a physics-based approach

    USGS Publications Warehouse

    Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.

    2009-01-01

    Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.

  3. A general approach to first order phase transitions and the anomalous behavior of coexisting phases in the magnetic case.

    SciTech Connect

    Gama, S.; de Campos, A.; Coelho, A. A.; Alves, C. S.; Ren, Y.; Garcia, F.; Brown, D. E.; da Silva, L. M.; Magnus, A.; Carvalho, G.; Gandra, G. C.; dos Santos, A. O.; Cardoso, L. P.; von Ranke, P. J.; X-Ray Science Division; Univ. Federal de Sao Paulo; Univ. Estadual de Campinas; Univ. Estadual de Maringa; Lab. Nacional de Luz Sincrotron; Northern Univ.; Univ. do Estado do Rio de Janeiro

    2009-01-01

    First order phase transitions for materials with exotic properties are usually believed to happen at fixed values of the intensive parameters (such as pressure, temperature, etc.) characterizing their properties. It is also considered that the extensive properties of the phases (such as entropy, volume, etc.) have discontinuities at the transition point, but that for each phase the intensive parameters remain constant during the transition. These features are a hallmark for systems described by two thermodynamic degrees of freedom. In this work it is shown that first order phase transitions must be understood in the broader framework of thermodynamic systems described by three or more degrees of freedom. This means that the transitions occur along intervals of the intensive parameters, that the properties of the phases coexisting during the transition may show peculiar behaviors characteristic of each system, and that a generalized Clausius-Clapeyron equation must be obeyed. These features are confirmed for the magnetic case, and it is shown that experimental calorimetric data agree well with the magnetic Clausius-Clapeyron equation for MnAs. An estimate for the point in the temperature-field plane where the first order magnetic transition turns into a second order one (the critical parameters) is obtained for the MnAs and Gd5Ge2Si2 compounds. Anomalous behavior of the volumes of the coexisting phases during the magnetic first order transition is measured, and it is shown that the anomalies for the individual phases are hidden in the behavior of global properties such as the volume.
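
    In the magnetic case the generalized Clausius-Clapeyron relation invoked above takes the familiar form along the first-order transition line H_t(T) in the temperature-field plane (a sketch; Delta S and Delta M denote the entropy and magnetization discontinuities between the coexisting phases):

```latex
\frac{\mathrm{d}H_t}{\mathrm{d}T} \;=\; -\,\frac{\Delta S}{\Delta M},
\qquad \Delta S = S_2 - S_1, \quad \Delta M = M_2 - M_1 .
```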

  4. Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes

    SciTech Connect

    Costa, O. L. V.; Dufour, F.

    2011-06-15

    This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMP's) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε > 0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.

  5. Behavioural Effects of Tourism on Oceanic Common Dolphins, Delphinus sp., in New Zealand: The Effects of Markov Analysis Variations and Current Tour Operator Compliance with Regulations

    PubMed Central

    Meissner, Anna M.; Christiansen, Fredrik; Martinez, Emmanuelle; Pawley, Matthew D. M.; Orams, Mark B.; Stockin, Karen A.

    2015-01-01

    Common dolphins, Delphinus sp., are one of the marine mammal species tourism operations in New Zealand focus on. While effects of cetacean-watching activities have previously been examined in coastal regions in New Zealand, this study is the first to investigate effects of commercial tourism and recreational vessels on common dolphins in an open oceanic habitat. Observations from both an independent research vessel and aboard commercial tour vessels operating off the central and east coast Bay of Plenty, North Island, New Zealand were used to assess dolphin behaviour and record the level of compliance by permitted commercial tour operators and private recreational vessels with New Zealand regulations. Dolphin behaviour was assessed using two different approaches to Markov chain analysis in order to examine variation of responses of dolphins to vessels. Results showed that, regardless of the variance in Markov methods, dolphin foraging behaviour was significantly altered by boat interactions. Dolphins spent less time foraging during interactions and took significantly longer to return to foraging once disrupted by vessel presence. This research raises concerns about the potential disruption to feeding, a biologically critical behaviour. This may be particularly important in an open oceanic habitat, where prey resources are typically widely dispersed and unpredictable in abundance. Furthermore, because tourism in this region focuses on common dolphins transiting between adjacent coastal locations, the potential for cumulative effects could exacerbate the local effects demonstrated in this study. While the overall level of compliance by commercial operators was relatively high, non-compliance to the regulations was observed with time restriction, number or speed of vessels interacting with dolphins not being respected. Additionally, prohibited swimming with calves did occur. The effects shown in this study should be carefully considered within conservation management

  7. Maximally reliable Markov chains under energy constraints.

    PubMed

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
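
    The "more states, more reliability" result can be checked numerically for the optimal topology: for an irreversible linear chain with equal rates, the total traversal time is Erlang-distributed and its coefficient of variation falls as 1/sqrt(n). This is an independent sanity check, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def completion_times(n_states, rate=1.0, n_trials=20000):
    """Total time for an irreversible linear chain to step through all
    n_states states; each dwell time is exponential with the same rate."""
    return rng.exponential(1.0 / rate, size=(n_trials, n_states)).sum(axis=1)

def cv(x):
    """Coefficient of variation: a simple (inverse) reliability measure."""
    return x.std() / x.mean()

# more states -> smaller relative variability of the generated signal
cv3, cv12 = cv(completion_times(3)), cv(completion_times(12))
```

    The simulated values track the analytic 1/sqrt(n) scaling closely.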

  8. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    SciTech Connect

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-15

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining Markov jump model, weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of particle size distribution with low statistical noise over the full size range and as far as possible to reduce the number of time loopings. Here three coagulation rules are highlighted and it is found that constructing appropriate coagulation rule provides a route to attain the compromise between accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering the two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates being used for acceptance–rejection processes by single-looping over all particles, and meanwhile the mean time-step of coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is reduced greatly to be proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are parallel processed by multi-cores on a GPU that can implement the massively threaded data-parallel tasks to obtain remarkable speedup ratio (comparing with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
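
    The single-loop acceptance-rejection step built on a majorant kernel can be sketched as below; the sum kernel, the particle volumes, and the simple global bound are illustrative stand-ins for the weighted majorant kernel of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def K(v_i, v_j):
    """Coagulation kernel; a simple sum kernel stands in for the real one."""
    return v_i + v_j

v = rng.uniform(1.0, 10.0, size=1000)   # particle volumes (hypothetical)
K_maj = 2.0 * v.max()                   # majorant bound: K(vi, vj) <= 2*vmax

# acceptance-rejection: draw random pairs and accept with prob K/K_maj,
# avoiding the double loop over all particle pairs
accepted = 0
trials = 5000
for _ in range(trials):
    i, j = rng.integers(0, len(v), size=2)
    if i != j and rng.random() < K(v[i], v[j]) / K_maj:
        accepted += 1
```

    A tighter majorant raises the acceptance rate and hence the efficiency of each coagulation event.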

  10. Multi-state Markov model for disability: A case of Malaysia Social Security (SOCSO)

    NASA Astrophysics Data System (ADS)

    Samsuddin, Shamshimah; Ismail, Noriszura

    2016-06-01

    Studies of outcomes for SOCSO's contributors, such as disability, are usually restricted to a single outcome. This study instead takes a multi-state Markov model approach to estimate the yearly transition probabilities of SOCSO's contributors in Malaysia between the states work, temporary disability, permanent disability and death, stratified by age, gender, year and disability category. The model ignores duration and past disability experience; that is, it does not consider how or when someone arrived in a given state. These outcomes represent different states that depend on the health status of the workers.
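
    Estimating such a transition matrix from yearly state pairs reduces to row-normalized transition counts (the maximum-likelihood estimate for a first-order Markov chain). The state names follow the abstract; the observation pairs below are invented for illustration.

```python
import numpy as np

states = ["work", "temporary", "permanent", "death"]
idx = {s: i for i, s in enumerate(states)}

# hypothetical yearly observations: (state this year, state next year)
transitions = [("work", "work"), ("work", "temporary"), ("temporary", "work"),
               ("work", "work"), ("temporary", "permanent"),
               ("permanent", "death"), ("work", "work"),
               ("permanent", "permanent"), ("work", "death")]

counts = np.zeros((4, 4))
for a, b in transitions:
    counts[idx[a], idx[b]] += 1

# maximum-likelihood transition matrix: row-normalized counts
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts),
              where=row_sums > 0)
P[idx["death"], idx["death"]] = 1.0      # death is an absorbing state
```

    Stratifying by age, gender, year and disability category, as in the study, amounts to estimating one such matrix per stratum.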

  11. Bayesian Modeling of Time Trends in Component Reliability Data via Markov Chain Monte Carlo Simulation

    SciTech Connect

    D. L. Kelly

    2007-06-01

    Markov chain Monte Carlo (MCMC) techniques represent an extremely flexible and powerful approach to Bayesian modeling. This work illustrates the application of such techniques to time-dependent reliability of components with repair. The WinBUGS package is used to illustrate, via examples, how Bayesian techniques can be used for parametric statistical modeling of time-dependent component reliability. Additionally, the crucial, but often overlooked subject of model validation is discussed, and summary statistics for judging the model’s ability to replicate the observed data are developed, based on the posterior predictive distribution for the parameters of interest.

  12. Multivariate Markov processes for stochastic systems with delays: application to the stochastic Gompertz model with delay.

    PubMed

    Frank, T D

    2002-07-01

    Using the method of steps, we describe stochastic processes with delays in terms of Markov diffusion processes. Thus, multivariate Langevin equations and Fokker-Planck equations are derived for stochastic delay differential equations. Natural, periodic, and reflective boundary conditions are discussed. Both Ito and Stratonovich calculus are used. In particular, our Fokker-Planck approach recovers the generalized delay Fokker-Planck equation proposed by Guillouzic et al. The results obtained are applied to a model for population growth: the Gompertz model with delay and multiplicative white noise.
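
    The method of steps mentioned above can be sketched as follows: a delay equation becomes Markovian once the trajectory is segmented into delay-length intervals that are treated as coupled components (notation here is illustrative).

```latex
% Delay Langevin equation with delay \tau:
\mathrm{d}X(t) = h\bigl(X(t),\, X(t-\tau)\bigr)\,\mathrm{d}t + g\,\mathrm{d}W(t)
% Method of steps: define interval variables on s \in [0, \tau],
X_k(s) = X(s + k\tau), \qquad k = 0, 1, \dots, n,
% which satisfy the delay-free, multivariate (Markovian) system
\mathrm{d}X_k(s) = h\bigl(X_k(s),\, X_{k-1}(s)\bigr)\,\mathrm{d}s
                 + g\,\mathrm{d}W_k(s),
% so the joint density of (X_0, \dots, X_n) obeys an ordinary
% multivariate Fokker--Planck equation.
```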

  13. Low-energy theory for strained graphene: an approach up to second-order in the strain tensor

    NASA Astrophysics Data System (ADS)

    Oliva-Leyva, Maurice; Wang, Chumin

    2017-04-01

    An analytical study of low-energy electronic excited states in uniformly strained graphene is carried out up to second-order in the strain tensor. We report a new effective Dirac Hamiltonian with an anisotropic Fermi velocity tensor, which reveals the trigonal symmetry of graphene that is absent in first-order low-energy theories. In particular, we demonstrate the dependence of the Dirac-cone elliptical deformation on the stretching direction with respect to the graphene lattice orientation. We further analytically calculate the optical conductivity tensor of strained graphene and its transmittance for linearly polarized light at normal incidence. Finally, the obtained analytical expression for the Dirac point shift allows a better determination and understanding of pseudomagnetic fields induced by nonuniform strains.

  14. Low-energy theory for strained graphene: an approach up to second-order in the strain tensor.

    PubMed

    Oliva-Leyva, Maurice; Wang, Chumin

    2017-04-26

    An analytical study of low-energy electronic excited states in uniformly strained graphene is carried out up to second order in the strain tensor. We report a new effective Dirac Hamiltonian with an anisotropic Fermi velocity tensor, which reveals the trigonal symmetry of graphene that is absent in first-order low-energy theories. In particular, we demonstrate the dependence of the Dirac-cone elliptical deformation on the stretching direction with respect to the graphene lattice orientation. We further calculate analytically the optical conductivity tensor of strained graphene and its transmittance for linearly polarized light at normal incidence. Finally, the obtained analytical expression for the Dirac point shift allows a better determination and understanding of pseudomagnetic fields induced by nonuniform strains.

  15. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
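The multilevel algorithm itself involves recursive coarsening, but the baseline it accelerates, an iterative solve for the stationary vector of a Markov chain, fits in a few lines. The sketch below uses plain fixed-point (power) iteration rather than the paper's Gauss-Seidel or SOR, and the transition matrix is hypothetical:

```python
def stationary(P, tol=1e-12, max_iter=100000):
    """Fixed-point iteration for the stationary vector pi with pi P = pi."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        s = sum(new)
        new = [x / s for x in new]  # renormalize to guard against drift
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

# Hypothetical 3-state transition matrix (rows sum to 1).
P = [[0.9, 0.1, 0.0],
     [0.2, 0.7, 0.1],
     [0.0, 0.3, 0.7]]
pi = stationary(P)  # balance equations give exactly (0.6, 0.3, 0.1)
```

The convergence rate of such one-level iterations degrades as the chain's subdominant eigenvalue approaches 1, which is precisely the regime where the coarsened multilevel hierarchy pays off.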

  16. An optimization-based approach for solving a time-harmonic multiphysical wave problem with higher-order schemes

    NASA Astrophysics Data System (ADS)

    Mönkölä, Sanna

    2013-06-01

    This study considers developing numerical solution techniques for computer simulations of time-harmonic fluid-structure interaction between acoustic and elastic waves. The focus is on the efficiency of an iterative solution method based on a controllability approach and spectral elements. We concentrate on the model in which the acoustic waves in the fluid domain are modeled using the velocity potential and the elastic waves in the structure domain are modeled using displacement. Traditionally, such problems are solved with the complex-valued time-harmonic equations; instead, we focus on finding periodic solutions without solving the time-harmonic equations directly. The time-dependent equations can be simulated with respect to time until a time-harmonic solution is reached, but this approach suffers from poor convergence. To overcome this challenge, we follow the approach first suggested and developed for the acoustic wave equations by Bristeau, Glowinski, and Périaux, and accelerate the convergence rate by employing a controllability method. The problem is formulated as a least-squares optimization problem, which is solved with the conjugate gradient (CG) algorithm. Computation of the gradient of the functional is done directly for the discretized problem. A graph-based multigrid method is used for preconditioning the CG algorithm.
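The inner solver of such a controllability loop is the CG algorithm applied to the (symmetric positive definite) normal equations of the least-squares functional. As a minimal sketch of that building block only, here is plain dense CG on a small hypothetical SPD system:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradients for a symmetric positive definite A x = b."""
    n = len(b)
    x = [0.0] * n
    r = b[:]   # residual b - A x for the zero initial guess
    p = r[:]   # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:  # stop once the residual norm is below tol
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Hypothetical small SPD system.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # exact solution: (1/11, 7/11)
```

In the paper's setting the matrix-vector product is never formed explicitly; it is realized by forward and adjoint wave solves, and the multigrid preconditioner replaces the identity used implicitly above.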

  17. A multi-order probabilistic approach for Instantaneous Angular Speed tracking debriefing of the CMMNO'14 diagnosis contest

    NASA Astrophysics Data System (ADS)

    Leclère, Quentin; André, Hugo; Antoni, Jérôme

    2016-12-01

    The aim of this work is to propose a novel approach for the estimation of the Instantaneous Angular Speed (IAS) of rotating machines from vibration measurements. This work originated from a contest organised by the authors of this paper during the conference CMMNO 2014, held in Lyon in December 2014. One purpose of the contest was to extract the IAS of a wind turbine from a gearbox accelerometer signal. The analysis of the contestants' contributions led to the observation that the main source of error in this exercise was the incorrect association of the selected and tracked harmonic component with a mechanical periodic phenomenon, an association assumed as an a priori hypothesis by all the methods used by the contestants. The approach proposed in this work does not need this kind of a priori assumption. A majority of (but not necessarily all) periodic mechanical events are considered from a preliminary analysis of the kinematics of the machine (harmonics of shaft rotation speeds, meshing frequencies, etc.). The IAS is then determined from probability density functions constructed from instantaneous spectra of the signal. The efficiency and robustness of the proposed approach are illustrated in the framework of the CMMNO 2014 contest case.
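The key idea, scoring candidate speeds against several harmonic orders at once instead of tracking one harmonic, can be illustrated with a toy sketch. The signal, order set, and candidate grid below are all hypothetical, and a real implementation would work on short-time spectra rather than one global DFT:

```python
import cmath
import math

def order_likelihood(spectrum, freqs, speed_grid, orders):
    """Score each candidate shaft speed by the spectral amplitude found at
    its expected harmonic orders; combining several orders avoids betting
    on a single (possibly misidentified) harmonic."""
    df = freqs[1] - freqs[0]
    scores = []
    for s in speed_grid:
        score = 0.0
        for k in orders:
            idx = int(round(k * s / df))
            if 0 <= idx < len(spectrum):
                score += spectrum[idx]
        scores.append(score)
    return scores

# Synthetic vibration signal: a 10 Hz shaft with harmonics at orders 1-3.
n = 1024
fs = 1024.0  # fs == n, so each DFT bin spans exactly 1 Hz
sig = [sum(math.sin(2 * math.pi * k * 10.0 * t / fs) for k in (1, 2, 3))
       for t in range(n)]
# Magnitude spectrum via a direct (slow but dependency-free) DFT.
spec = [abs(sum(sig[t] * cmath.exp(-2j * math.pi * f * t / n)
               for t in range(n))) for f in range(n // 2)]
freqs = [f * fs / n for f in range(n // 2)]
grid = [5.0 + 0.5 * i for i in range(21)]  # candidate speeds, 5..15 Hz
scores = order_likelihood(spec, freqs, grid, orders=(1, 2, 3))
best = grid[scores.index(max(scores))]
```

A speed hypothesis that explains several orders simultaneously dominates any hypothesis that happens to line up with a single strong line, which is the failure mode the abstract identifies.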

  18. An in vivo and in silico approach to study cis-antisense: a short cut to higher order response

    NASA Astrophysics Data System (ADS)

    Courtney, Colleen; Varanasi, Usha; Chatterjee, Anushree

    2014-03-01

    Antisense interactions are present in all domains of life. Typically, sense-antisense RNA pairs originate from overlapping genes with convergent, face-to-face promoters and are speculated to be involved in gene regulation. Recent studies indicate a role for transcriptional interference (TI) in regulating the expression of genes in convergent orientation. Modeling antisense and TI gene-regulation mechanisms allows us to understand how organisms control gene expression. We present a modeling and experimental framework for understanding convergent transcription that combines the effects of transcriptional interference and cis-antisense regulation. Our model shows that combining transcriptional interference with antisense RNA interaction adds multiple levels of regulation, which affords a highly tunable biological output ranging from a first-order response to a complex higher-order response. To study this system, we created a library of experimental constructs with engineered TI and antisense interaction, using face-to-face inducible promoters separated by carefully tailored overlapping DNA sequences to control the expression of a set of fluorescent reporter proteins. Studying this gene-expression mechanism allows for an understanding of the higher-order behavior of gene-expression networks.

  19. Growth-induced polarity formation in solid solutions of organic molecules: Markov mean-field model and Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Wüst, Thomas; Hulliger, Jürg

    2005-02-01

    A layer-by-layer growth model is presented for the theoretical investigation of growth-induced polarity formation in solid solutions H1-XGX of polar (H) and nonpolar (G) molecules (X: molar fraction of G molecules in the solid, 0 < X < 1), using a Markov mean-field description and Monte Carlo simulations. In solid solutions, polarity results from a combined effect of orientational selectivity by H and G molecules with respect to the alignment of the dipoles of H molecules, and of the miscibility between the two components. Even though both native structures (H, G) may be centrosymmetric, polarity can arise just from the admixture of G molecules into the H crystal upon growth. An overview of possible phenomena is given by random selection of molecular interaction energies within an assumed but realistic energy range. The analytical approach sufficiently describes the basic phenomena and is in good agreement with the simulations. High probabilities for significant vectorial alignment of H molecules are found for low (X⩽0.2) and high (X⩾0.8) fractions of G molecules, respectively, as well as for ordered HG compounds (X=0.5).
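The agreement between a Markov mean-field description and Monte Carlo growth can be illustrated with a drastically simplified toy: a single-component film in which each newly attached dipole's orientation depends only on the molecule beneath it. The transition probabilities are hypothetical and carry none of the paper's interaction-energy structure:

```python
import random

def polarity_markov(p_keep_up=0.9, p_down_to_up=0.3, n_layers=100):
    """Mean-field Markov iteration for the 'up' dipole fraction per layer:
    f' = f * p_keep_up + (1 - f) * p_down_to_up."""
    f = 0.5
    for _ in range(n_layers):
        f = f * p_keep_up + (1 - f) * p_down_to_up
    return f

def polarity_mc(p_keep_up=0.9, p_down_to_up=0.3,
                n_sites=2000, n_layers=100, seed=3):
    """Monte Carlo growth: each site's new orientation depends only on the
    orientation of the molecule beneath it (a per-site Markov chain)."""
    rng = random.Random(seed)
    layer = [rng.random() < 0.5 for _ in range(n_sites)]  # True = dipole up
    for _ in range(n_layers):
        layer = [(rng.random() < p_keep_up) if up else
                 (rng.random() < p_down_to_up) for up in layer]
    return sum(layer) / n_sites

f_mf = polarity_markov()  # fixed point: 0.3 / (1 - 0.9 + 0.3) = 0.75
f_mc = polarity_mc()
```

Both routes converge on the same stationary up-fraction, the toy analogue of the abstract's observation that the analytical mean-field approach agrees well with the simulations.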

  20. Multilayer Markov Random Field models for change detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane

    2015-09-01

    In this paper, we give a comparative study of three multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called the Multicue MRF, the Conditional Mixed Markov model, and the Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of this model family and set it against various state-of-the-art approaches through a thematic analysis and quantitative tests: we discuss the advantages and drawbacks of class-comparison vs. direct approaches, the usage of training data, various targeted application fields and different ways of Ground Truth generation, while informing the reader of the roles in which multilayer MRFs can be efficiently applied. On the other hand, we emphasize the differences between the three models at various levels, considering the model structures, feature extraction, layer interpretation, change-concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions when used as pre-processing filters in multitemporal optical image analysis. In addition, together they cover a large range of applications, considering the different usage options of the three approaches.
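None of the three multilayer models is reproduced here, but the single-layer MRF machinery they all build on can be sketched: a two-label (change / no-change) field optimized with iterated conditional modes on a synthetic difference image. Class means, the Potts weight, and the test image are all hypothetical:

```python
import random

def icm_change_mask(diff, beta=1.5, thresh=0.5, sweeps=5):
    """Iterated conditional modes for a two-label (change / no-change) MRF.

    Data term: squared distance of the pixel difference from an assumed
    class mean (0 for no-change, 1 for change); smoothness term: a Potts
    penalty beta for every disagreeing 4-neighbour."""
    h, w = len(diff), len(diff[0])
    labels = [[1 if diff[i][j] > thresh else 0 for j in range(w)]
              for i in range(h)]
    means = (0.0, 1.0)
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best_lab, best_e = labels[i][j], float("inf")
                for lab in (0, 1):
                    e = (diff[i][j] - means[lab]) ** 2
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != lab:
                            e += beta
                    if e < best_e:
                        best_lab, best_e = lab, e
                labels[i][j] = best_lab
    return labels

rng = random.Random(0)
# Synthetic difference image: a changed 4x4 block inside an 8x8 frame.
diff = [[(0.9 if 2 <= i < 6 and 2 <= j < 6 else 0.1) + rng.gauss(0, 0.15)
         for j in range(8)] for i in range(8)]
mask = icm_change_mask(diff)
```

The smoothness prior suppresses isolated noisy flips that pure per-pixel thresholding would keep; the multilayer models surveyed above extend this idea with multiple interacting label and feature layers.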