Spectral Design in Markov Random Fields
NASA Astrophysics Data System (ADS)
Wang, Jiao; Thibault, Jean-Baptiste; Yu, Zhou; Sauer, Ken; Bouman, Charles
2011-03-01
Markov random fields (MRFs) have been shown to be a powerful and relatively compact stochastic model for imagery in the context of Bayesian estimation. The simplicity of their conventional embodiment implies local computation in iterative processes and relatively noncommittal statistical descriptions of image ensembles, resulting in stable estimators, particularly under models with strictly convex potential functions. This simplicity may be a liability, however, when the inherent bias of minimum mean-squared error or maximum a posteriori probability (MAP) estimators attenuates all but the lowest spatial frequencies. In this paper we explore generalization of MRFs by considering frequency-domain design of the weighting coefficients that describe the strengths of interconnections between clique members.
Unmixing hyperspectral images using Markov random fields
Eches, Olivier; Dobigeon, Nicolas; Tourneret, Jean-Yves
2011-03-14
This paper proposes a new spectral unmixing strategy based on the normal compositional model that exploits the spatial correlations between the image pixels. The pure materials (referred to as endmembers) contained in the image are assumed to be available (they can be obtained by using an appropriate endmember extraction algorithm), while the corresponding fractions (referred to as abundances) are estimated by the proposed algorithm. Due to physical constraints, the abundances have to satisfy positivity and sum-to-one constraints. The image is divided into homogeneous distinct regions having the same statistical properties for the abundance coefficients. The spatial dependencies within each class are modeled thanks to Potts-Markov random fields. Within a Bayesian framework, prior distributions for the abundances and the associated hyperparameters are introduced. A reparametrization of the abundance coefficients is proposed to handle the physical constraints (positivity and sum-to-one) inherent to hyperspectral imagery. The parameters (abundances), hyperparameters (abundance mean and variance for each class) and the classification map indicating the classes of all pixels in the image are inferred from the resulting joint posterior distribution. To overcome the complexity of the joint posterior distribution, Markov chain Monte Carlo methods are used to generate samples asymptotically distributed according to the joint posterior of interest. Simulations conducted on synthetic and real data are presented to illustrate the performance of the proposed algorithm.
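The positivity and sum-to-one constraints on the abundances described above can be handled by mapping unconstrained parameters onto the probability simplex. The sketch below uses a softmax reparametrization as an illustrative stand-in; the paper proposes its own reparametrization, which may differ in detail.

```python
import numpy as np

def abundances_from_logits(logits):
    """Map an unconstrained real vector to abundance coefficients that
    are strictly positive and sum to one (softmax reparametrization)."""
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()
```

With this mapping, an MCMC sampler can explore unconstrained space while the induced abundances always satisfy the physical constraints.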
Finite Markov Chains and Random Discrete Structures
1994-07-26
arrays with fixed margins 4. Persi Diaconis and Susan Holmes, Three Examples of Monte-Carlo Markov Chains: at the Interface between Statistical Computing...solutions for a mathematical model of thermomechanical phase transitions in shape memory materials with Landau-Ginzburg free energy 1168 Angelo Favini
Multiscale Representations of Markov Random Fields
1992-09-08
modeling a wide variety of biological, chemical, electrical, mechanical and economic phenomena, [10]. Moreover, the Markov structure makes the models...Transactions on Information Theory, 18:232-240, March 1972. [65] J. WOODS AND C. RADEWAN, "Kalman Filtering in Two Dimensions," IEEE Transactions on
Markov Random Fields, Stochastic Quantization and Image Analysis
1990-01-01
Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a-priori models for the...of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis , Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.
Measuring marine oil spill extent by Markov Random Fields
NASA Astrophysics Data System (ADS)
Moctezuma, Miguel; Parmiggiani, Flavio; Lopez Lopez, Ludwin
2014-10-01
The Deepwater Horizon oil spill in the Gulf of Mexico in the spring of 2010 was the largest accidental marine oil spill in the history of the petroleum industry. An immediate request, after the accident, was to detect the oil slick and to measure its extent: SAR images were the obvious tool to be employed for the task. This paper presents a processing scheme based on Markov Random Field (MRF) theory. MRF theory describes the global information by probability terms involving local neighborhood representations of the SAR backscatter data. The random degradation introduced by speckle noise is dealt with by a pre-processing stage which applies a nonlinear diffusion filter. Spatial context attributes are structured by the Bayes equation derived from a Maximum-A-Posteriori (MAP) estimation. The probability terms define an objective function of an MRF model whose goal is to detect contours and fine structures. The Markovian segmentation problem is solved with a numerical optimization method. The scheme was applied to an Envisat/ASAR image over the Gulf of Mexico of May 9, 2010, when the oil spill was already fully developed. The final result was obtained with 51 recursion cycles, where, at each step, the segmentation consists of a 3-class label field (open sea and two oil slick thicknesses). Both the MRF model and the parameters of the stochastic optimization procedure will be provided, together with the area measurement of the two kinds of oil slick.
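The MAP labeling step in MRF segmentation schemes like this one can be illustrated with a generic Iterated Conditional Modes (ICM) optimizer over a Potts-style smoothness prior. This is a hedged sketch of the model class only, not the paper's contour-sensitive objective or its specific optimization method.

```python
import numpy as np

def icm_segment(img, means, beta=1.0, n_iter=5):
    """MAP segmentation by ICM: each pixel greedily takes the label
    minimizing a Gaussian data term plus a Potts penalty (beta) for
    each 4-neighbor with a different label. Generic sketch."""
    labels = np.abs(img[..., None] - np.asarray(means)).argmin(-1)
    H, W = img.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                costs = []
                for k, m in enumerate(means):
                    data = (img[i, j] - m) ** 2
                    nb = 0
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            nb += 1
                    costs.append(data + beta * nb)
                labels[i, j] = int(np.argmin(costs))
    return labels
```

For the 3-class case in the paper, `means` would hold one representative backscatter level per class (open sea and two slick thicknesses).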
Learning Markov Random Walks for robust subspace clustering and estimation.
Liu, Risheng; Lin, Zhouchen; Su, Zhixun
2014-11-01
Markov Random Walks (MRW) have proven to be an effective way to understand spectral clustering and embedding. However, lacking a global structural measure, conventional MRW (e.g., the Gaussian kernel MRW) cannot be applied to handle data points drawn from a mixture of subspaces. In this paper, we introduce a regularized MRW learning model, using a low-rank penalty to constrain the global subspace structure, for subspace clustering and estimation. In our framework, both the local pairwise similarity and the global subspace structure can be learnt from the transition probabilities of MRW. We prove that under some suitable conditions, our proposed local/global criteria can exactly capture the multiple subspace structure and learn a low-dimensional embedding for the data, which gives the true segmentation of the subspaces. To improve robustness in real situations, we also propose an extension of the MRW learning model that integrates transition matrix learning and error correction in a unified framework. Experimental results on both synthetic data and real applications demonstrate that our proposed MRW learning model and its robust extension outperform the state-of-the-art subspace clustering methods.
Combinatorial Markov Random Fields and Their Applications to Information Organization
2008-02-01
data clustering —the most important application of unsupervised learning—for which we give some necessary definitions and insights. 2.1 Markov Random...algorithm starts with data instances distributed over k clusters (where k is the desired number of clusters) and reorganizes/updates the clusters...its original ICM-based version. 4.5 Related work The study of distributional clustering based on co-occurrence data using information-theoretic
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
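The MPM criterion mentioned above assigns each pixel the label with the largest posterior marginal probability. Given posterior label-field samples (e.g., from a Gibbs sampler), a minimal sketch of the estimate is:

```python
import numpy as np

def mpm_labels(samples):
    """MPM estimate: per-pixel label that occurs most often across
    posterior label-field samples (mode of the posterior marginals)."""
    samples = np.asarray(samples)              # (n_samples, H, W) integers
    n_labels = samples.max() + 1
    counts = np.stack([(samples == k).sum(0) for k in range(n_labels)])
    return counts.argmax(0)                    # (H, W) label map
```

Unlike MAP, which optimizes the joint labeling, MPM minimizes the expected number of misclassified pixels.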
NASA Astrophysics Data System (ADS)
Sivakumar, Krishnamoorthy; Goutsias, John I.
1998-09-01
We study the problem of simulating a class of Gibbs random field models, called morphologically constrained Gibbs random fields, using Markov chain Monte Carlo sampling techniques. Traditional single-site updating Markov chain Monte Carlo sampling algorithms, like the Metropolis algorithm, tend to converge extremely slowly when used to simulate these models, particularly at low temperatures and for constraints involving large geometrical shapes. Moreover, the morphologically constrained Gibbs random fields are not, in general, Markov. Hence, a Markov chain Monte Carlo sampling algorithm based on the Gibbs sampler is not possible. We propose a variant of the Metropolis algorithm that, at each iteration, allows multi-site updating and converges substantially faster than the traditional single-site updating algorithm. The set of sites that are updated at a particular iteration is specified in terms of a shape parameter and a size parameter. Computation of the acceptance probability involves a 'test ratio,' which requires computation of the ratio of the probabilities of the current and new realizations. Because of the special structure of our energy function, this computation can be done by means of a simple, local iterative procedure. Therefore, lack of Markovianity does not impose any additional computational burden for model simulation. The proposed algorithm has been used to simulate a number of image texture models, both synthetic and natural.
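The slow single-site baseline that this paper accelerates can be sketched as a standard Metropolis sampler for an Ising-type Gibbs field. The multi-site updating scheme and the morphological energy functions of the paper are not reproduced here; this shows only the baseline being compared against.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_ising(field, beta, n_sweeps):
    """Single-site Metropolis sampler for an Ising-type Gibbs field
    with periodic boundaries: propose one spin flip at a time and
    accept with the Metropolis probability min(1, exp(-dE))."""
    H, W = field.shape
    for _ in range(n_sweeps * H * W):
        i, j = rng.integers(H), rng.integers(W)
        nb = (field[(i + 1) % H, j] + field[(i - 1) % H, j]
              + field[i, (j + 1) % W] + field[i, (j - 1) % W])
        dE = 2.0 * beta * field[i, j] * nb     # energy change of flipping
        if dE <= 0 or rng.random() < np.exp(-dE):
            field[i, j] = -field[i, j]
    return field
```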
Sub-Markov Random Walk for Image Segmentation.
Dong, Xingping; Shen, Jianbing; Shao, Ling; Van Gool, Luc
2016-02-01
A novel sub-Markov random walk (subRW) algorithm with label prior is proposed for seeded image segmentation, which can be interpreted as a traditional random walker on a graph with added auxiliary nodes. Under this interpretation, we unify the proposed subRW and other popular random walk (RW) algorithms. This unifying view makes it possible to transfer intrinsic findings between different RW algorithms, and offers new ideas for designing novel RW algorithms by adding or changing auxiliary nodes. To verify the second benefit, we design a new subRW algorithm with label prior to solve the segmentation problem of objects with thin and elongated parts. The experimental results on both synthetic and natural images with twigs demonstrate that the proposed subRW method outperforms previous RW algorithms for seeded image segmentation.
A Markov random field approach for microstructure synthesis
NASA Astrophysics Data System (ADS)
Kumar, A.; Nguyen, L.; DeGraef, M.; Sundararaghavan, V.
2016-03-01
We test the notion that many microstructures have an underlying stationary probability distribution. The stationary probability distribution is ubiquitous: we know that different windows taken from a polycrystalline microstructure are generally ‘statistically similar’. To enable computation of such a probability distribution, microstructures are represented in the form of undirected probabilistic graphs called Markov Random Fields (MRFs). In the model, pixels take up integer or vector states and interact with multiple neighbors over a window. Using this lattice structure, algorithms are developed to sample the conditional probability density for the state of each pixel given the known states of its neighboring pixels. The sampling is performed using reference experimental images. 2D microstructures are artificially synthesized using the sampled probabilities. Statistical features such as grain size distribution and autocorrelation functions closely match with those of the experimental images. The mechanical properties of the synthesized microstructures were computed using the finite element method and were also found to match the experimental values.
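The conditional sampling described above can be illustrated with a toy one-neighbor version: estimate P(pixel | left neighbor) from a reference image, then sample new rows from that conditional. The paper samples over a much larger neighborhood window; this only sketches the idea that synthesis draws each pixel from conditionals learned from reference data.

```python
import numpy as np

def sample_synthesis(ref, H, W, rng):
    """Toy 1-neighbor MRF synthesis: estimate P(pixel | left neighbor)
    from a reference image of discrete states, then sample a new
    H-by-W image row-wise from the learned conditional."""
    states = np.unique(ref)
    idx = {s: k for k, s in enumerate(states)}
    counts = np.ones((len(states), len(states)))   # Laplace smoothing
    for row in ref:
        for a, b in zip(row[:-1], row[1:]):
            counts[idx[a], idx[b]] += 1
    P = counts / counts.sum(1, keepdims=True)      # row-stochastic
    out = np.empty((H, W), dtype=ref.dtype)
    for i in range(H):
        out[i, 0] = rng.choice(states)
        for j in range(1, W):
            out[i, j] = rng.choice(states, p=P[idx[out[i, j - 1]]])
    return out
```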
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
MRFalign: protein homology detection through alignment of Markov random fields.
Ma, Jianzhu; Wang, Sheng; Wang, Zhiyong; Xu, Jinbo
2014-03-01
Sequence-based protein homology detection has been extensively studied and so far the most sensitive method is based upon comparison of protein sequence profiles, which are derived from multiple sequence alignment (MSA) of sequence homologs in a protein family. A sequence profile is usually represented as a position-specific scoring matrix (PSSM) or an HMM (Hidden Markov Model) and accordingly PSSM-PSSM or HMM-HMM comparison is used for homolog detection. This paper presents a new homology detection method MRFalign, consisting of three key components: 1) a Markov Random Fields (MRF) representation of a protein family; 2) a scoring function measuring similarity of two MRFs; and 3) an efficient ADMM (Alternating Direction Method of Multipliers) algorithm aligning two MRFs. Compared to HMMs, which can only model very short-range residue correlations, MRFs can model long-range residue interaction patterns and thus encode information about the global 3D structure of a protein family. Consequently, MRF-MRF comparison for remote homology detection should be much more sensitive than HMM-HMM or PSSM-PSSM comparison. Experiments confirm that MRFalign outperforms several popular HMM or PSSM-based methods in terms of both alignment accuracy and remote homology detection and that MRFalign works particularly well for mainly beta proteins. For example, tested on the benchmark SCOP40 (8353 proteins) for homology detection, PSSM-PSSM and HMM-HMM succeed on 48% and 52% of proteins, respectively, at superfamily level, and on 15% and 27% of proteins, respectively, at fold level. In contrast, MRFalign succeeds on 57.3% and 42.5% of proteins at superfamily and fold level, respectively. This study implies that long-range residue interaction patterns are very helpful for sequence-based homology detection. The software is available for download at http://raptorx.uchicago.edu/download/. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
Glaucoma progression detection using nonlocal Markov random field prior.
Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Balasubramanian, Madhusudhanan; Weinreb, Robert N; Zangwill, Linda M
2014-10-01
Glaucoma is a neurodegenerative disease characterized by distinctive changes in the optic nerve head and visual field. Without treatment, glaucoma can lead to permanent blindness. Therefore, monitoring glaucoma progression is important to detect uncontrolled disease and the possible need for therapy advancement. In this context, three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT) has been commonly used in the diagnosis and management of glaucoma patients. We present a new framework for detection of glaucoma progression using 3-D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer thickness measurement provided by commercially available instruments, we consider the whole 3-D volume for change detection. To account for the spatial voxel dependency, we propose the use of the Markov random field (MRF) model as a prior for the change detection map. In order to improve the robustness of the proposed approach, a nonlocal strategy was adopted to define the MRF energy function. To accommodate the presence of false-positive detection, we used a fuzzy logic approach to classify a 3-D SD-OCT image into a "non-progressing" or "progressing" glaucoma class. We compared the diagnostic performance of the proposed framework to the existing methods of progression detection.
Brain tumor segmentation in 3D MRIs using an improved Markov random field model
NASA Astrophysics Data System (ADS)
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2011-10-01
Markov Random Field (MRF) models have recently been suggested for MRI brain segmentation by a large number of researchers. By exploiting Markovianity, which represents the local property, MRF models are able to solve a global optimization problem locally. But they still carry a heavy computational burden, especially when they use stochastic relaxation schemes such as Simulated Annealing (SA). In this paper, a new 3D-MRF model is put forward to raise the speed of convergence. Because the search procedure of SA is fairly localized, it is prevented from exploring a wide diversity of solutions and suffers from several limitations. In comparison, the Genetic Algorithm (GA) has a good capability for global search but is weak at hill climbing. Our proposed algorithm combines SA and an improved GA (IGA) to optimize the solution, which speeds up the computation time. Moreover, the proposed algorithm outperforms the traditional 2D-MRF in the quality of the solution.
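The SA component of the hybrid scheme above rests on the Metropolis acceptance rule at a decreasing temperature. A minimal sketch of that rule (the paper's contribution, the SA/IGA hybrid and its schedule, is not reproduced here):

```python
import math
import random

def sa_accept(dE, T, rng=random):
    """Simulated-annealing acceptance rule: always accept moves that
    decrease the MRF energy; accept an increase dE > 0 with
    probability exp(-dE / T), where T is the current temperature."""
    return dE <= 0 or rng.random() < math.exp(-dE / T)
```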
NASA Astrophysics Data System (ADS)
Senno, Gabriel; Bendersky, Ariel; Figueira, Santiago
2016-07-01
The concepts of randomness and non-locality are intimately intertwined: outcomes of randomly chosen measurements over entangled systems exhibiting non-local correlations are, if we preclude instantaneous influence between distant measurement choices and outcomes, random. In this paper, we survey some recent advances in the knowledge of the interplay between these two important notions from a quantum information science perspective.
Comparing quantum versus Markov random walk models of judgements measured by rating scales
Wang, Z.; Busemeyer, J. R.
2016-01-01
Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements from either their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum over the Markov model. PMID:26621984
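The Markov alternative in this comparison can be sketched as a birth-death random walk over the discrete rating levels: each step moves one level up or down, reflecting at the scale ends. This illustrates the model class only; the fitted models in the paper are more elaborate.

```python
import numpy as np

def rating_distribution(n_levels, p_up, t, start):
    """Distribution over rating levels after t steps of a birth-death
    Markov walk: up with prob p_up, down otherwise, reflecting at the
    ends of the scale. Illustrative sketch of the Markov model class."""
    T = np.zeros((n_levels, n_levels))
    for k in range(n_levels):
        T[k, min(k + 1, n_levels - 1)] += p_up
        T[k, max(k - 1, 0)] += 1.0 - p_up
    dist = np.zeros(n_levels)
    dist[start] = 1.0
    return dist @ np.linalg.matrix_power(T, t)
```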
Spatio-temporal contextual classification based on Markov random field model. [for thematic mapping
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, D. A.
1991-01-01
A contextual classifier based on a Markov random field model, which can utilize both spatial and temporal contexts, is investigated. Spatial and temporal neighbors are defined, and the class assignment of each pixel is assumed to be dependent only on the measurement vectors of itself and those of its spatial and temporal neighbors according to the Markov random field property. Only interpixel class dependency context is used in the classification. The joint prior probability of the classes of each pixel and its spatial and temporal neighbors is modeled by a Gibbs random field. The classification is performed in a recursive manner. Experiments with multi-temporal Thematic Mapper data show promising results.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
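The two central quantities here, the steady-state distribution of a transition matrix and the entropy of a distribution, can be sketched directly. The small two-state matrix below is a hypothetical example, not data from the paper.

```python
import numpy as np

def steady_state(T, tol=1e-12, max_iter=100000):
    """Stationary distribution pi of a row-stochastic transition
    matrix T (pi = pi @ T), found by power iteration."""
    pi = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(max_iter):
        nxt = pi @ T
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

def entropy(p):
    """Shannon entropy of a probability vector, in bits."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

T = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = steady_state(T)   # ≈ [5/6, 1/6]
```

In the paper's setting, the steady state of each cancer's site-to-site transition matrix is matched to the autopsy tumor distribution, and the entropy of that distribution indexes the cancer's complexity.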
A Markov Chain Model for evaluating the effectiveness of randomized surveillance procedures
Edmunds, T.A.
1994-01-01
A Markov Chain Model has been developed to evaluate the effectiveness of randomized surveillance procedures. The model is applicable for surveillance systems that monitor a collection of assets by randomly selecting and inspecting the assets. The model provides an estimate of the detection probability as a function of the amount of time that an adversary would require to steal or sabotage the asset. An interactive computer code has been written to perform the necessary computations.
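The kind of model this report describes can be sketched as a two-state absorbing Markov chain: each period, a fixed number of assets is inspected uniformly at random, and the targeted asset is detected when inspected. This is a minimal illustrative sketch, not the report's actual code or model.

```python
import numpy as np

def detection_probability(n_assets, n_inspect, t):
    """P(adversary detected within t periods) when n_inspect of
    n_assets are inspected uniformly at random each period.
    'Detected' is an absorbing state of a two-state Markov chain."""
    p = n_inspect / n_assets
    T = np.array([[1.0 - p, p],     # undetected -> undetected / detected
                  [0.0,     1.0]])  # detected is absorbing
    state = np.array([1.0, 0.0]) @ np.linalg.matrix_power(T, t)
    return state[1]
```

The curve of detection probability versus t answers the report's key question: how long an adversary can act before detection becomes likely.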
Wang, Hongyan; Zhou, Xiaobo
2013-04-01
By altering the electrostatic charge of histones or providing binding sites to protein recognition molecules, chromatin marks have been proposed to regulate gene expression, a property that has motivated researchers to link these marks to cis-regulatory elements. With the help of next-generation sequencing technologies, we can now correlate one specific chromatin mark with regulatory elements (e.g. enhancers or promoters) and also build tools, such as hidden Markov models, to gain insight into mark combinations. However, hidden Markov models are limited by their generative character and by the assumption that a current observation depends only on the current hidden state in the chain. Here, we employed two graphical probabilistic models, namely the linear conditional random field model and the multivariate hidden Markov model, to mark gene regions with different states based on the recurrent and spatially coherent character of these eight marks. Both models revealed chromatin states that may correspond to enhancers and promoters, transcribed regions, transcriptional elongation, and low-signal regions. We also found that the linear conditional random field model was more effective than the hidden Markov model in recognizing regulatory elements, such as promoter-, enhancer-, and transcriptional elongation-associated regions, which makes it the better choice.
Zhang, J
1996-01-01
The Gibbs-Bogoliubov-Feynman (GBF) inequality of statistical mechanics is adopted, with an information-theoretic interpretation, as a general optimization framework for deriving and examining various mean field approximations for Markov random fields (MRF's). The efficacy of this approach is demonstrated through the compound Gauss-Markov (CGM) model, comparisons between different mean field approximations, and experimental results in image restoration.
Ge, Mei; Mainprize, James G.; Mawdsley, Gordon E.; Yaffe, Martin J.
2014-01-01
Abstract. Accurate and automatic segmentation of the pectoralis muscle is essential in many breast image processing procedures, for example, in the computation of volumetric breast density from digital mammograms. Its segmentation is a difficult task due to the heterogeneity of the region, neighborhood complexities, and shape variability. The segmentation is achieved by pixel classification through a Markov random field (MRF) image model. Using the image intensity feature as observable data and local spatial information as a priori, the posterior distribution is estimated in a stochastic process. With a variable potential component in the energy function, by the maximum a posteriori (MAP) estimate of the labeling image, given the image intensity feature which is assumed to follow a Gaussian distribution, we achieved convergence properties in an appropriate sense by Metropolis sampling the posterior distribution of the selected energy function. By proposing an adjustable spatial constraint, the MRF-MAP model is able to embody the shape requirement and provide the required flexibility for the model parameter fitting process. We demonstrate that accurate and robust segmentation can be achieved for the curving-triangle-shaped pectoralis muscle in the medio-lateral-oblique (MLO) view, and the semielliptic-shaped muscle in cranio-caudal (CC) view digital mammograms. The applicable mammograms can be either “For Processing” or “For Presentation” image formats. The algorithm was developed using 56 MLO-view and 79 CC-view FFDM “For Processing” images, and quantitatively evaluated against a random selection of 122 MLO-view and 173 CC-view FFDM images of both presentation intent types. PMID:26158068
Theory of Distribution Estimation of Hyperparameters in Markov Random Field Models
NASA Astrophysics Data System (ADS)
Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato
2016-06-01
We investigated the performance of distribution estimation of hyperparameters in Markov random field models proposed by Nakanishi-Ohno et al. [J. Phys. A 47, 045001 (2014); http://doi.org/10.1088/1751-8113/47/4/045001] when used to evaluate the confidence of data. We analytically calculated the configurational average, with respect to data, of the negative logarithm of the posterior distribution, which is called the free energy based on an analogy with statistical mechanics. This configurational average of the free energy shrinks as the amount of data increases. Our results theoretically confirm the numerical results of that previous study.
Entropy and long-range memory in random symbolic additive Markov chains
NASA Astrophysics Data System (ADS)
Melnik, S. S.; Usatenko, O. V.
2016-06-01
The goal of this paper is to develop an estimate for the entropy of random symbolic sequences with elements belonging to a finite alphabet. As a plausible model, we use the high-order additive stationary ergodic Markov chain with long-range memory. Supposing that the correlations between random elements of the chain are weak, we express the conditional entropy of the sequence by means of the symbolic pair correlation function. We also examine an algorithm for estimating the conditional entropy of finite symbolic sequences. We show that the entropy contains two contributions, i.e., the correlation and the fluctuation. The obtained analytical results are used for numerical evaluation of the entropy of written English texts and DNA nucleotide sequences. The developed theory opens the way for constructing a more consistent and sophisticated approach to describe the systems with strong short-range and weak long-range memory.
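The simplest member of the estimator family discussed above is the one-step conditional entropy, computable from bigram frequencies. The paper develops higher-order, correlation-based estimates; this sketch shows only the first-order case.

```python
import math
from collections import Counter

def conditional_entropy(seq):
    """One-step conditional entropy H(a_{i+1} | a_i) of a symbolic
    sequence over a finite alphabet, estimated from bigram
    frequencies. Returned in bits per symbol."""
    pairs = Counter(zip(seq[:-1], seq[1:]))
    firsts = Counter(seq[:-1])
    n = len(seq) - 1
    h = 0.0
    for (a, b), c in pairs.items():
        h -= (c / n) * math.log2(c / firsts[a])   # -p(a,b) log p(b|a)
    return h
```

A perfectly periodic sequence has zero conditional entropy, while an i.i.d. sequence approaches the entropy of its single-symbol distribution.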
Multilayer Markov Random Field models for change detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane
2015-09-01
In this paper, we give a comparative study on three Multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On one hand, we highlight the significance of the focused model family and we set them against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class comparison vs. direct approaches, usage of training data, various targeted application fields and different ways of Ground Truth generation, meanwhile informing the reader about the roles in which the Multilayer MRFs can be efficiently applied. On the other hand, we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions, if one uses them as pre-processing filters in multitemporal optical image analysis. In addition, they cover together a large range of applications, considering the different usage options of the three approaches.
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems and to perform posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of conditions. In practice, however, this convergence is often disturbingly slow, frequently because of an inappropriate choice of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive differential evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis (DREAM), runs multiple chains simultaneously for global exploration and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multimodal search problems.
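The differential-evolution proposal at the heart of this family of samplers can be sketched with the simpler DE-MC scheme that DREAM builds on; the randomized-subspace updates and outlier handling of DREAM itself are omitted, and all names here are illustrative.

```python
import numpy as np

def de_mc(logpost, n_chains=8, dim=2, n_iter=3000, seed=0):
    """Simplified differential-evolution MCMC (DE-MC): each chain proposes a
    jump along the difference of two randomly chosen other chains, so the
    proposal scale and orientation adapt to the shape of the posterior."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_chains, dim))             # current chain states
    logp = np.array([logpost(x) for x in X])
    gamma = 2.38 / np.sqrt(2 * dim)                  # recommended jump scale
    kept = []
    for it in range(n_iter):
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            r1, r2 = rng.choice(others, size=2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + 1e-6 * rng.normal(size=dim)
            lp = logpost(prop)
            if np.log(rng.random()) < lp - logp[i]:  # Metropolis accept/reject
                X[i], logp[i] = prop, lp
        if it >= n_iter // 2:                        # keep post-burn-in states
            kept.append(X.copy())
    return np.concatenate(kept)
```

On a standard 2-D Gaussian target the pooled post-burn-in samples should recover zero mean and unit variance, which is an easy way to smoke-test the sampler.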
Monaco, James P; Madabhushi, Anant
2012-12-01
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM cost function.
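The effect of multiplicative class weights on a MAP decision can be illustrated in the degenerate case with no MRF pairwise term, where MAP reduces to an independent per-site argmax; this sketch and its names are ours, not the paper's implementation.

```python
import numpy as np

def weighted_map_labels(log_lik, log_prior, w):
    """Per-site weighted MAP decision: argmax_c [log w_c + log P(x|c) + log P(c)].
    Raising w_c biases decisions toward class c, which is the spirit of MWMAP,
    shown here without the MRF neighbourhood interactions."""
    scores = log_lik + log_prior + np.log(w)   # broadcasts weights over sites
    return np.argmax(scores, axis=-1)
```

Sweeping the weight of the target class traces out an operating curve: ambiguous sites flip toward the upweighted class first, while confidently classified sites keep their labels.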
Multi-fidelity modelling via recursive co-kriging and Gaussian-Markov random fields.
Perdikaris, P; Venturi, D; Royset, J O; Karniadakis, G E
2015-07-08
We propose a new framework for design under uncertainty based on stochastic computer simulations and multi-level recursive co-kriging. The proposed methodology simultaneously takes into account multi-fidelity in models, such as direct numerical simulations versus empirical formulae, as well as multi-fidelity in the probability space (e.g. sparse grids versus tensor product multi-element probabilistic collocation). We are able to construct response surfaces of complex dynamical systems by blending multiple information sources via auto-regressive stochastic modelling. A computationally efficient machine learning framework is developed based on multi-level recursive co-kriging with sparse precision matrices of Gaussian-Markov random fields. The effectiveness of the new algorithms is demonstrated in numerical examples involving a prototype problem in risk-averse design, regression of random functions, as well as uncertainty quantification in fluid mechanics involving the evolution of a Burgers equation from a random initial state, and random laminar wakes behind circular cylinders.
Scene estimation from speckled synthetic aperture radar imagery: Markov-random-field approach.
Lankoande, Ousseini; Hayat, Majeed M; Santhanam, Balu
2006-06-01
A novel Markov-random-field model for speckled synthetic aperture radar (SAR) imagery is derived from the physical, spatial statistical properties of speckle noise in coherent imaging. A convex Gibbs energy function for speckled images is derived and utilized to perform speckle-compensating image estimation. The image estimate is obtained by computing the conditional expectation of the noisy image at each pixel given its neighbors, which is in turn expressed in terms of the derived Gibbs energy function. The efficacy of the proposed technique, in terms of reducing speckle noise while preserving spatial resolution, is studied using both real and simulated SAR imagery. On a number of commonly used metrics, the performance of the proposed technique is shown to surpass that of existing speckle-noise-filtering methods such as the Gamma MAP, the modified Lee, and the enhanced Frost filters.
Markov random field model for segmenting large populations of lipid vesicles from micrographs.
Zupanc, Jernej; Drobne, Damjana; Ster, Branko
2011-12-01
Giant unilamellar lipid vesicles, artificial replacements for cell membranes, are a promising tool for in vitro assessment of interactions between products of nanotechnologies and biological membranes. However, the effect of nanoparticles cannot be derived from observations of a single specimen; vesicle populations should be observed instead. We propose an adaptation of the Markov random field image segmentation model that allows detection and segmentation of numerous vesicles in micrographs. The reliability of this model under different lighting, blur, and noise characteristics of the micrographs is examined and discussed. Moreover, the automatic segmentation is tested on micrographs with thousands of vesicles, and the result is compared to that of manual segmentation. The segmentation step presented here is part of a methodology we are developing for bio-nano interaction assessment studies on lipid vesicles.
A Markov random field approach for modeling spatio-temporal evolution of microstructures
NASA Astrophysics Data System (ADS)
Acar, Pinar; Sundararaghavan, Veera
2016-10-01
The following problem is addressed: ‘Can one synthesize microstructure evolution over a large area given experimental movies measured over smaller regions?’ Our input is a movie of microstructure evolution over a small sample window. A Markov random field (MRF) algorithm is developed that uses this data to estimate the evolution of microstructure over a larger region. Unlike the standard microstructure reconstruction problem based on stationary images, the present algorithm is also able to reconstruct time-evolving phenomena such as grain growth. Such an algorithm would decrease the cost of full-scale microstructure measurements by coupling mathematical estimation with targeted small-scale spatiotemporal measurements. The grain size, shape and orientation distribution statistics of synthesized polycrystalline microstructures at different times are compared with the original movie to verify the method.
Mixture model and Markov random field-based remote sensing image unsupervised clustering method
NASA Astrophysics Data System (ADS)
Hou, Y.; Yang, Y.; Rao, N.; Lun, X.; Lan, J.
2011-03-01
In this paper, a novel method for remote sensing image clustering based on a mixture model and a Markov random field (MRF) is proposed. A remote sensing image can be modeled as a Gaussian mixture, and the clustering result, which corresponds to the image label field, is an MRF. The clustering procedure is therefore transformed into a maximum a posteriori (MAP) problem via Bayes' theorem. The intensity difference and the spatial distance between the two pixels of each clique are introduced into the traditional MRF potential function. The iterated conditional modes (ICM) algorithm is employed to find the MAP solution, and the maximum entropy criterion is used to choose the optimal number of clusters. In the experiments, the method is compared with traditional MRF clustering using ICM and simulated annealing (SA). The results show that the proposed method outperforms the traditional MRF model in both noise filtering and misclassification ratio.
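The ICM step for MAP estimation under a Gaussian likelihood and a smoothness prior can be sketched as follows; this uses a plain Potts potential rather than the paper's modified potential with intensity differences and spatial distances, and the names and parameters are illustrative.

```python
import numpy as np

def icm_potts(img, means, sigma=1.0, beta=1.0, n_iter=5):
    """MAP clustering by iterated conditional modes (ICM): each pixel takes the
    label minimizing a Gaussian data cost plus a Potts penalty counting
    disagreeing 4-neighbours."""
    labels = np.abs(img[..., None] - means).argmin(-1)   # init: nearest mean
    K = len(means)
    for _ in range(n_iter):
        for i, j in np.ndindex(img.shape):
            costs = np.empty(K)
            for k in range(K):
                data = (img[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                smooth = 0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]:
                        smooth += labels[ni, nj] != k    # Potts disagreement
                costs[k] = data + beta * smooth
            labels[i, j] = costs.argmin()
    return labels
```

Increasing beta trades data fidelity for spatial smoothness; with beta set to zero the routine degenerates to independent nearest-mean classification.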
Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET
NASA Astrophysics Data System (ADS)
Bousse, Alexandre; Pedemonte, Stefano; Thomas, Benjamin A.; Erlandsson, Kjell; Ourselin, Sébastien; Arridge, Simon; Hutton, Brian F.
2012-10-01
In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean-field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm, and the region-based voxel-wise (RBV) technique. We demonstrate that our algorithm outperforms the VC algorithm, and that it outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g. mis-segmentation, lesions).
3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts
NASA Astrophysics Data System (ADS)
Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu
3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented, yet only a few algorithms based on Markov random fields (MRFs) have been proposed for 3D object segmentation. In this letter, we formulate mesh segmentation as a labeling problem. Inspired by the capability of MRFs to combine the geometric and topological information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRFs and graph cuts. Experimental results show that our MRF-based scheme achieves effective segmentation.
Segmentation of angiodysplasia lesions in WCE images using a MAP approach with Markov Random Fields.
Vieira, Pedro M; Goncalves, Bruno; Goncalves, Carla R; Lima, Carlos S
2016-08-01
This paper deals with the segmentation of angiodysplasias in wireless capsule endoscopy images. These lesions are the cause of almost 10% of all gastrointestinal bleeding episodes, and their detection with the available software shows low sensitivity. This work proposes automatic selection of a ROI using an image segmentation module based on the MAP approach, where an accelerated version of the EM algorithm is used to iteratively estimate the model parameters. Spatial context is modeled in the prior probability density function using Markov random fields. The color space used was CIELab, in particular the a component, which highlights these lesions most clearly. The proposed method is the first to address this specific type of lesion and, compared to other state-of-the-art segmentation methods, almost doubles their performance.
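The EM iteration underlying this kind of mixture-based segmentation can be sketched in one dimension; this is plain EM for a two-component Gaussian mixture, without the paper's acceleration or the MRF spatial prior, and the names are our own.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Plain EM for a two-component 1-D Gaussian mixture: alternate posterior
    responsibilities (E-step) with weighted parameter updates (M-step)."""
    mu = np.array([x.min(), x.max()], float)       # spread-out initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r /= r.sum(1, keepdims=True)
        # M-step: update mixing weights, means, and variances
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(0) / n
    return pi, mu, var
```

A MAP segmentation would then assign each sample to the component with the largest responsibility, with the MRF prior reweighting those responsibilities by the labels of neighboring pixels.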
NASA Astrophysics Data System (ADS)
Emoto, K.; Sato, H.; Nishimura, T.
2010-12-01
Short-period seismograms provide rich information about small-scale heterogeneities in the earth. However, such seismograms are too complex, owing to random velocity inhomogeneities, for deterministic waveform synthesis; stochastic methods can be used to synthesize wave envelopes instead. The Markov approximation, a stochastic extension of the phase screen method, is a powerful tool for the synthesis of wave envelopes in random media when the wavelength is shorter than the correlation distance of the inhomogeneity. Recently, Saito et al. (2008) synthesized the envelopes in layered random media with a constant background velocity, and Emoto et al. (2010) calculated envelopes on the free surface of random media. Considering more realistic cases, we synthesize vector-wave envelopes on the free surface of 2-D layered random media with background velocity discontinuities for the vertical incidence of a plane wavelet. In the Markov approximation, we define the two-frequency mutual coherence function (TFMCF) of the potential on the transverse plane perpendicular to the global propagation direction. The TFMCF satisfies a parabolic-type wave equation when backscattering is negligible. We use the angular spectrum, i.e. the TFMCF in the wavenumber domain, which represents the ray-angle distribution of the scattered waves' power. First, we solve the parabolic wave equation in the bottom layer and calculate the angular spectrum at the layer boundary. We multiply the angular spectrum by the transmission or conversion coefficient at the velocity discontinuity, where scattered waves are treated as a superposition of plane waves just beneath the boundary; we note that PS conversion occurs at the velocity boundary. Then, taking the inverse Fourier transform to the space domain (the modified TFMCF), we solve the parabolic wave equation in the upper layer, using the modified TFMCF as the initial condition. We repeat this procedure layer by layer up to the free surface.
Statistical bubble localization with random interactions
NASA Astrophysics Data System (ADS)
Li, Xiaopeng; Deng, Dong-Ling; Wu, Yang-Le; Das Sarma, S.
2017-01-01
We study one-dimensional spinless fermions with random interactions, but without any on-site disorder. We find that random interactions generically stabilize a many-body localized phase, in spite of the completely extended single-particle degrees of freedom. In the large randomness limit, we construct "bubble-neck" eigenstates having a universal area-law entanglement entropy on average, with the number of volume-law states being exponentially suppressed. We argue that this statistical localization is beyond the phenomenological local-integrals-of-motion description of many-body localization. With exact diagonalization, we confirm the robustness of the many-body localized phase at finite randomness by investigating eigenstate properties such as level statistics, entanglement/participation entropies, and nonergodic quantum dynamics. At weak random interactions, the system develops a thermalization transition when the single-particle hopping becomes dominant.
Human fixation detection model in video compressed domain based on Markov random field
NASA Astrophysics Data System (ADS)
Li, Yongjun; Li, Yunsong; Liu, Weijia; Hu, Jing; Ge, Chiru
2017-01-01
Recently, research on and applications of human fixation detection in the video compressed domain have gained increasing attention. However, prediction accuracy and computational complexity remain a challenge. This paper addresses compressed-domain fixation detection in videos based on the residual discrete cosine transform coefficient norm (RDCN) and a Markov random field (MRF). The RDCN feature is extracted directly from the compressed video with partial decoding and is normalized. After spatial-temporal filtering, the normalized map [smoothed RDCN (SRDCN) map] is passed to the MRF model, and the optimal binary label map is obtained. Based on the label map and the center saliency map, saliency enhancement and nonsaliency inhibition are applied to the SRDCN map, and the final SRDCN-MRF saliency map is obtained. Compared with similar models, we enhance the available energy functions and introduce an energy function that encodes the positional information of the saliency, which improves prediction accuracy and reduces computational complexity. Validation and comparison are carried out with several accuracy metrics on two ground-truth datasets. Experimental results show that the proposed saliency detection model outperforms several state-of-the-art compressed-domain and pixel-domain algorithms on the evaluation metrics. Computationally, our algorithm reduces complexity by 26% compared with similar algorithms.
Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.
Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping
2014-01-01
The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved through a proper joint posterior probability distribution of the observed features in the fuzzy MRF model, which performs better than the commonly used joint Gaussian distribution. In this study, PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and with manual delineation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, with a Dice similarity coefficient (DSC) of 0.85 ± 0.013. Effective and automatic segmentation can thus be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, e.g. when the tumor extends into the chest wall or mediastinum.
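The Dice similarity coefficient (DSC) used to compare the automatic and manual segmentations is straightforward to compute; a minimal sketch with our own function name:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|).
    Returns 1.0 for identical masks and 0.0 for disjoint ones."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```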
A new method for direction finding based on Markov random field model
NASA Astrophysics Data System (ADS)
Ota, Mamoru; Kasahara, Yoshiya; Goto, Yoshitaka
2015-07-01
Investigating the characteristics of plasma waves observed by scientific satellites in the Earth's plasmasphere/magnetosphere is effective for understanding the mechanisms that generate waves and the plasma environment that influences wave generation and propagation. In particular, finding the propagation directions of waves is important for understanding the mechanisms of VLF/ELF waves. To find these directions, the wave distribution function (WDF) method has been proposed, based on the idea that observed signals consist of a number of elementary plane waves that define a wave energy density distribution. However, the resulting equations constitute an ill-posed problem in which the solution is not determined uniquely; hence, an adequate model must be assumed for the solution. Although many models have been proposed, one has to select the optimal model for the given situation because each model has its own advantages and disadvantages. In the present study, we propose a new method for direction finding of plasma waves measured by plasma wave receivers. Our method assumes that the WDF can be represented by a Markov random field model, with inference of the model parameters performed by a variational Bayesian learning algorithm. Using computer-generated spectral matrices, we evaluated the performance of the model and compared the results with those obtained from two conventional methods.
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Haynor, David R.; Thompson, Carol L.; Lein, Ed; Hawrylycz, Michael
2009-02-01
Understanding the geography of genetic expression in the mouse brain has opened previously unexplored avenues in neuroinformatics. The Allen Brain Atlas (www.brain-map.org) (ABA) provides genome-wide colorimetric in situ hybridization (ISH) gene expression images at high spatial resolution, all mapped to a common three-dimensional spatial framework of 200 μm³ voxels defined by the Allen Reference Atlas (ARA), and is a unique data set for studying expression-based structural and functional organization of the brain. The goal of this study was to facilitate an unbiased, data-driven structural partitioning of the major structures in the mouse brain. We have developed an algorithm that uses nonnegative matrix factorization (NMF) to perform parts-based analysis of ISH gene expression images. The standard NMF approach and its variants are limited in their ability to flexibly integrate prior knowledge in the context of spatial data. In this paper, we introduce spatial connectivity as an additional regularization in the NMF decomposition via the use of Markov random fields (mNMF). The mNMF algorithm alternates neighborhood updates with iterations of the standard NMF algorithm to exploit spatial correlations in the data. We present the algorithm and show the subdivisions of the hippocampus and somatosensory cortex obtained with this approach. The results are compared with established neuroanatomic knowledge. We also highlight novel gene-expression-based subdivisions of the hippocampus identified by the mNMF algorithm.
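The standard NMF iteration that mNMF interleaves with neighborhood updates can be sketched with the classic Lee–Seung multiplicative updates for the Frobenius objective; the MRF regularization itself is not shown, and the names are illustrative.

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Standard NMF by Lee–Seung multiplicative updates: factor a nonnegative
    matrix V ≈ W @ H while keeping both factors elementwise nonnegative.
    (mNMF would alternate MRF neighbourhood updates with these iterations.)"""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        # multiplicative updates preserve nonnegativity automatically
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The small constant in the denominators guards against division by zero; the updates monotonically decrease the Frobenius reconstruction error.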
Enhancing gene regulatory network inference through data integration with markov random fields.
Banf, Michael; Rhee, Seung Y
2017-02-01
A gene regulatory network links transcription factors to their target genes and represents a map of transcriptional regulation. Much progress has been made in deciphering gene regulatory networks computationally. However, gene regulatory network inference remains challenging for most eukaryotic organisms. To improve the accuracy of gene regulatory network inference and facilitate candidate selection for experimentation, we developed an algorithm called GRACE (Gene Regulatory network inference ACcuracy Enhancement). GRACE exploits a priori biological knowledge and heterogeneous data integration to generate high-confidence network predictions for eukaryotic organisms using Markov random fields in a semi-supervised fashion. GRACE uses a novel optimization scheme to integrate regulatory evidence and biological relevance, and it is particularly suited for model learning with sparse regulatory gold-standard data. We show GRACE's potential to produce high-confidence regulatory networks compared to state-of-the-art approaches using Drosophila melanogaster and Arabidopsis thaliana data. In an A. thaliana developmental gene regulatory network, GRACE recovers cell-cycle-related regulatory mechanisms and further hypothesizes several novel regulatory links, including a putative control mechanism of vascular structure formation due to modifications in cell proliferation.
Surface roughness extraction based on Markov random field model in wavelet feature domain
NASA Astrophysics Data System (ADS)
Yang, Lei; Lei, Li-qiao
2014-12-01
Based on computer texture analysis, a new noncontact surface roughness measurement technique is proposed. The method is inspired by the nonredundant directional selectivity and highly discriminative nature of the wavelet representation and by the capability of the Markov random field (MRF) model to capture statistical regularities. Surface roughness information contained in the texture features can be extracted from an MRF stochastic model of textures in the wavelet feature domain; the model captures significant intrascale and interscale statistical dependencies between wavelet coefficients. To investigate the relationship between the texture features and the surface roughness Ra, a simple research setup, consisting of a charge-coupled device (CCD) camera without a lens and a diode laser, was established, and laser speckle texture patterns were acquired from standard ground surfaces. The results show that the surface roughness Ra has a good monotonic relationship with the texture features of the laser speckle pattern. If the measuring system is calibrated beforehand with standard roughness samples, the actual surface roughness Ra can be deduced for surfaces of the same material ground under the same manufacturing conditions.
Geiger, D.; Girosi, F.
1989-05-01
In recent years many researchers have investigated the use of Markov random fields (MRFs) for computer vision. They can be applied, for example, at the output of the visual processes to reconstruct surfaces from sparse and noisy depth data, or to integrate early vision processes to label physical discontinuities. Drawbacks of MRF models have been the computational complexity of the implementation and the difficulty of estimating the parameters of the model. This paper derives deterministic approximations to MRF models. One of the considered models is shown to yield, in a natural way, the graduated non-convexity (GNC) algorithm; this model can be applied to smooth a field while preserving its discontinuities. A new model is then proposed that allows the gradient of the field to be enhanced at the discontinuities and smoothed elsewhere. All the theoretical results are obtained in the framework of mean-field theory, a well-known statistical mechanics technique. A fast, parallel, iterative algorithm that solves the deterministic equations of the two models is presented, together with experiments on synthetic and real images. The algorithm is applied to the problem of surface reconstruction in the case of sparse data. A fast algorithm is also described that aligns the discontinuities of different visual models with intensity edges via integration.
Zeng, Jia; Liu, Zhi-Qiang
2008-05-01
This paper proposes a statistical-structural character modeling method based on Markov random fields (MRFs) for handwritten Chinese character recognition (HCCR). The stroke relationships of a Chinese character reflect its structure, which can be statistically represented by the neighborhood system and clique potentials within the MRF framework. Based on prior knowledge of character structures, we design a neighborhood system that accounts for the most important stroke relationships. We penalize structurally mismatched stroke relationships with MRFs using the prior clique potentials, and derive the likelihood clique potentials from Gaussian mixture models, which encode the large variations of stroke relationships statistically. In the proposed HCCR system, we use the single-site likelihood clique potentials to extract candidate strokes from character images, and use the pair-site clique potentials to determine the best structural match between the input candidate strokes and the MRF-based character models by relaxation labeling. Experiments on the KAIST character database demonstrate that MRFs can statistically model character structures and work well in the HCCR system.
Medina, Rubén; Garreau, Mireille; Toro, Javier; Le Breton, Hervé; Coatrieux, Jean-Louis; Jugo, Diego
2006-01-01
This paper reports on a method for left ventricle three-dimensional reconstruction from two orthogonal ventriculograms. The proposed algorithm is voxel-based and takes into account the conical projection geometry associated with the biplane image acquisition equipment. The reconstruction process starts with an initial ellipsoidal approximation derived from the input ventriculograms. This model is subsequently deformed so as to match the input projections. To this end, the object is modeled as a three-dimensional Markov-Gibbs random field, and an energy function is defined that includes one term modeling the compatibility of the projections and another that includes space–time regularity constraints. The performance of this reconstruction method is evaluated on the reconstruction of mathematically synthesized phantoms and of two 3-D binary databases from two orthogonal synthesized projections. The method is also tested on real biplane ventriculograms; in this case, performance is expressed in terms of the projection error, which attains values between 9.50% and 11.78% for two biplane sequences comprising a total of 55 images. PMID:16895001
Enhancing gene regulatory network inference through data integration with markov random fields
Banf, Michael; Rhee, Seung Y.
2017-02-01
A gene regulatory network links transcription factors to their target genes and represents a map of transcriptional regulation. Much progress has been made in deciphering gene regulatory networks computationally. However, gene regulatory network inference for most eukaryotic organisms remains challenging. To improve the accuracy of gene regulatory network inference and facilitate candidate selection for experimentation, we developed an algorithm called GRACE (Gene Regulatory network inference ACcuracy Enhancement). GRACE exploits biological a priori knowledge and heterogeneous data integration to generate high-confidence network predictions for eukaryotic organisms using Markov Random Fields in a semi-supervised fashion. GRACE uses a novel optimization scheme to integrate regulatory evidence and biological relevance. It is particularly suited for model learning with sparse regulatory gold standard data. We show GRACE's potential to produce high-confidence regulatory networks compared to state-of-the-art approaches using Drosophila melanogaster and Arabidopsis thaliana data. In an A. thaliana developmental gene regulatory network, GRACE recovers cell cycle related regulatory mechanisms and further hypothesizes several novel regulatory links, including a putative control mechanism of vascular structure formation due to modifications in cell proliferation. PMID:28145456
Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten
2008-06-01
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
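As a rough illustration of the simplest benchmarked method, ICM greedily re-labels each pixel to minimize its local energy. The following is a minimal binary-label sketch in Python, assuming a unit data cost and a hypothetical smoothness weight `beta`; it is not any of the benchmark's actual energy functions.

```python
def icm_denoise(obs, beta=1.0, iters=5):
    """Iterated conditional modes (ICM) for a tiny binary MRF.

    Energy per pixel: a unit data cost for disagreeing with the observed
    label, plus `beta` for every 4-neighbor with a different label.
    Each sweep greedily assigns the locally optimal label.
    """
    h, w = len(obs), len(obs[0])
    labels = [row[:] for row in obs]          # start from the observation
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y][x], float("inf")
                for cand in (0, 1):
                    e = 0.0 if cand == obs[y][x] else 1.0       # data term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != cand:
                            e += beta                           # smoothness term
                    if e < best_e:
                        best, best_e = cand, e
                labels[y][x] = best
    return labels
```

ICM only reaches a local minimum of the energy, which is one reason the benchmark compares it against graph cuts and message-passing methods.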
Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model
Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong
2014-01-01
Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We first introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Second, we formulate the service-oriented node scheduling issue as three different problems: the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, which selects a number of active nodes and determines the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Third, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005
Impact of Markov Random Field Optimizer on MRI-based Tissue Segmentation in the Aging Brain
Schwarz, Christopher G.; Tsui, Alex; Fletcher, Evan; Singh, Baljeet; DeCarli, Charles; Carmichael, Owen
2013-01-01
Automatically segmenting brain magnetic resonance images into grey matter, white matter, and cerebrospinal fluid compartments is a fundamentally important neuroimaging problem whose difficulty is heightened in the presence of aging and neurodegenerative disease. Current methods overlap greatly in terms of identifiable algorithmic components, and the impact of specific components on performance is generally unclear in important real-world scenarios involving serial scanning, multiple scanners, and neurodegenerative disease. Therefore, we evaluated the impact that one such component, the Markov Random Field (MRF) optimizer that encourages spatially-smooth tissue labelings, has on brain tissue segmentation performance. Two challenging elderly brain data sets were used to test segmentation consistency across scanners and the biological plausibility of tissue change estimates, and a simulated young brain data set was used to test accuracy against ground truth. Comparisons among Graph Cuts (GC), Belief Propagation (BP), and Iterated Conditional Modes (ICM) suggested that in the elderly brain, BP and GC provide the highest segmentation performance, with a slight advantage to BP, and that performance is often superior to that provided by the popular methods SPM and FAST. Conversely, SPM and FAST excelled in the young brain, thus emphasizing the unique challenges involved in imaging the aging brain. PMID:22256150
Video object tracking in the compressed domain using spatio-temporal Markov random fields.
Khatoonabadi, Sayed Hossein; Bajić, Ivan V
2013-01-01
Despite the recent progress in both pixel-domain and compressed-domain video object tracking, the need for a tracking framework with both reasonable accuracy and reasonable complexity still exists. This paper presents a method for tracking moving objects in H.264/AVC-compressed video sequences using a spatio-temporal Markov random field (ST-MRF) model. An ST-MRF model naturally integrates the spatial and temporal aspects of the object's motion. Built upon such a model, the proposed method works in the compressed domain and uses only the motion vectors (MVs) and block coding modes from the compressed bitstream to perform tracking. First, the MVs are preprocessed through intracoded block motion approximation and global motion compensation. At each frame, the decision of whether a particular block belongs to the object being tracked is made with the help of the ST-MRF model, which is updated from frame to frame in order to follow the changes in the object's motion. The proposed method is tested on a number of standard sequences, and the results demonstrate its advantages over some of the recent state-of-the-art methods.
Khan, Mohammad Ibrahim; Kamal, Md Sarwar
2015-03-01
Markov chains are very effective for prediction, particularly over long data sets. In DNA sequencing it is always very important to find the existence of certain nucleotides based on the previous history of the data set. We imposed the Chapman-Kolmogorov equation to accomplish the task of the Markov chain. The Chapman-Kolmogorov equation is the key to addressing the proper places in the DNA chain, and it is a very powerful tool in mathematics as well as in any other prediction-based research. It incorporates the scores of DNA sequences calculated by various techniques. Our research utilizes the fundamentals of the Warshall Algorithm (WA) and Dynamic Programming (DP) to measure the scores of DNA segments. The outcome of the experiments is that the Warshall Algorithm is good for small DNA sequences, whereas Dynamic Programming is good for long DNA sequences. On top of the above findings, it is very important to measure the risk factors of local sequencing during the matching of local sequence alignments, whatever the length.
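The Chapman-Kolmogorov equation states that n-step transition probabilities factor through intermediate steps: P^(m+n) = P^(m) P^(n). A minimal sketch in Python; the two-state transition matrix used in the usage note below is illustrative, not taken from the paper.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """n-step transition matrix P^(n), built by repeated multiplication."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]  # identity
    for _ in range(n):
        R = mat_mul(R, P)
    return R
```

With a hypothetical matrix such as P = [[0.9, 0.1], [0.2, 0.8]], one can check the Chapman-Kolmogorov identity numerically: n_step(P, 5) agrees element-wise with mat_mul(n_step(P, 2), n_step(P, 3)).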
Bayesian inference of local trees along chromosomes by the sequential Markov coalescent.
Zheng, Chaozhi; Kuhner, Mary K; Thompson, Elizabeth A
2014-05-01
We propose a genealogy-sampling algorithm, Sequential Markov Ancestral Recombination Tree (SMARTree), that provides an approach to estimation from SNP haplotype data of the patterns of coancestry across a genome segment among a set of homologous chromosomes. To enable analysis across longer segments of genome, the sequence of coalescent trees is modeled via the modified sequential Markov coalescent (Marjoram and Wall, Genetics 7:16, 2006). To assess performance in estimating these local trees, our SMARTree implementation is tested on simulated data. Our base data set is of the SNPs in 10 DNA sequences over 50 kb. We examine the effects of longer sequences and of more sequences, and of a recombination and/or mutational hotspot. The model underlying SMARTree is an approximation to the full recombinant-coalescent distribution. However, in a small trial on simulated data, recovery of local trees was similar to that of LAMARC (Kuhner et al. Genetics 156:1393-1401, 2000a), a sampler which uses the full model.
Analysis and Validation of Grid DEM Generation Based on Gaussian Markov Random Field
NASA Astrophysics Data System (ADS)
Aguilar, F. J.; Aguilar, M. A.; Blanco, J. L.; Nemmaoui, A.; García Lorca, A. M.
2016-06-01
Digital Elevation Models (DEMs) are considered one of the most relevant types of geospatial data for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m²) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data were decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining points (scattered observed points) were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the LiDAR-derived DSM gaps after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Regarding accuracy, both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher quality DTMs.
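A rough sketch of the idea behind GMRF grid interpolation: under a first-order intrinsic GMRF prior, the posterior mean at an unobserved cell is the average of its neighbors, which Gauss-Seidel iteration recovers by clamping the observed cells. This is an illustrative simplification, not the authors' exact model for the PNOA data.

```python
def gmrf_fill(grid, iters=500):
    """Fill None cells with the neighbor-average fixed point (Gauss-Seidel).

    Observed cells stay clamped; each unknown cell converges to the
    conditional mean of a first-order Gaussian Markov random field prior.
    """
    h, w = len(grid), len(grid[0])
    known = [[v is not None for v in row] for row in grid]
    z = [[v if v is not None else 0.0 for v in row] for row in grid]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                if known[y][x]:
                    continue
                nbrs = [z[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                z[y][x] = sum(nbrs) / len(nbrs)   # local conditional mean
    return z
```

For example, a gap between elevations 0.0 and 2.0 in a one-row grid converges to 1.0, the harmonic (neighbor-average) fill value.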
Markov stochasticity coordinates
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-01-01
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
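For context, a classical scalar gauge of the randomness of Markov dynamics is the entropy rate, computed from the stationary distribution; this is only an illustrative one-dimensional measure, not the paper's two-dimensional stochasticity coordinates.

```python
from math import log2

def stationary(P, iters=200):
    """Stationary distribution of a Markov chain by power iteration."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

def entropy_rate(P):
    """Shannon entropy rate (bits per step) of a stationary Markov chain."""
    pi = stationary(P)
    return -sum(pi[i] * P[i][j] * log2(P[i][j])
                for i in range(len(P)) for j in range(len(P)) if P[i][j] > 0)
```

A fair-coin chain attains the maximal rate of 1 bit per step, while a deterministic chain has rate 0, bracketing the randomness of any two-state dynamics.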
Markov random field-based clustering applied to the segmentation of masses in digital mammograms.
Suliga, M; Deklerck, R; Nyssen, E
2008-09-01
In this paper we propose a new pixel clustering model applied to the analysis of digital mammograms. The clustering represents here the first step in a more general method and aims at the creation of a concise data set (clusters) for automatic detection and classification of masses, which are typically among the first symptoms analysed in early diagnosis of breast cancer. For the purpose of this work, a set of mammographic images has been employed, which are 12-bit gray-level digital scans and, as such, are inherently inhomogeneous and affected by the noise resulting from the film scanning. The image pixels are described only by their intensity (gray level); therefore, the available information is limited to one dimension. We propose a Markov random field (MRF)-based technique that is suitable for performing clustering in an environment which is described by poor or limited data. The proposed method is a statistical classification model that labels the image pixels based on the description of their statistical and contextual information. Apart from evaluating the pixel statistics, which originate from the definition of the K-means clustering scheme, the model expands the analysis by the description of the spatial dependence between pixels and their labels (context), hence leading to the reduction of the inhomogeneity of the output. Moreover, we define a probabilistic description of the model that is characterised by a remarkable simplicity, such that it can be easily and efficiently implemented in any high- or low-level programming language, thus allowing it to be run on virtually any kind of platform. Finally, we evaluate the algorithm against the classical K-means clustering routine. We point out similarities between the two methods and, moreover, show the advantages and superiority of the MRF scheme.
SAR-based change detection using hypothesis testing and Markov random field modelling
NASA Astrophysics Data System (ADS)
Cao, W.; Martinis, S.
2015-04-01
The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: first, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Second, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is related to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF into an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed on two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
Spatial analysis in a Markov random field framework: The case of burning oil wells in Kuwait
NASA Astrophysics Data System (ADS)
Dezzani, Raymond J.; Al-Dousari, Ahmad
This paper discusses a modeling approach for spatial-temporal prediction of environmental phenomena using classified satellite images. This research was prompted by the analysis of change and landscape redistribution of petroleum residues formed from the residue of the burning oil wells in Kuwait (1991). These surface residues have been termed "tarcrete" (El-Baz et al. 1994). The tarcrete forms a thick layer over sand and desert pavement covering a significant portion of south-central Kuwait. The purpose of this study is to develop a method that utilizes satellite images from different time steps to examine the rate-of-change of the oil residue deposits and determine where redistribution is likely to occur. This problem exhibits general characteristics of environmental diffusion and dispersion phenomena, so a theoretical framework for a general solution is sought. The use of a lagged-clique Markov random field framework and entropy measures is deduced to be an effective solution to satisfy the criteria of determining the time-rate-of-change of the surface deposits and forecasting likely locations of redistribution of dispersed, aggraded residues. The method minimally requires image classification, the determination of time stationarity of classes, and the measurement of the level of organization of the state-space information derived from the images. Analysis occurs at the levels of both the individual pixels and the system, to determine specific states and suites of states in space and time. Convergence of the observed landscape disorder with respect to an analytical maximum provides information on the total dispersion of the residual system.
Jin, Ick Hoon; Yuan, Ying; Bandyopadhyay, Dipankar
2016-01-01
Research in dental caries generates data with two levels of hierarchy: that of a tooth overall and that of the different surfaces of the tooth. The outcomes often exhibit spatial referencing among neighboring teeth and surfaces, i.e., the disease status of a tooth or surface might be influenced by the status of a set of proximal teeth/surfaces. Assessments of dental caries (tooth decay) at the tooth level yield binary outcomes indicating the presence/absence of teeth, and trinary outcomes at the surface level indicating healthy, decayed, or filled surfaces. The presence of these mixed discrete responses complicates the data analysis under a unified framework. To mitigate complications, we develop a Bayesian two-level hierarchical model under suitable (spatial) Markov random field assumptions that accommodates the natural hierarchy within the mixed responses. At the first level, we utilize an autologistic model to accommodate the spatial dependence for the tooth-level binary outcomes. For the second level and conditioned on a tooth being non-missing, we utilize a Potts model to accommodate the spatial referencing for the surface-level trinary outcomes. The regression models at both levels were controlled for plausible covariates (risk factors) of caries, and remain connected through shared parameters. To tackle the computational challenges in our Bayesian estimation scheme caused by the doubly-intractable normalizing constant, we employ a double Metropolis-Hastings sampler. We compare and contrast our model performances to the standard non-spatial (naive) model using a small simulation study, and illustrate via an application to a clinical dataset on dental caries. PMID:27807470
A Context-Recognition-Aided PDR Localization Method Based on the Hidden Markov Model
Lu, Yi; Wei, Dongyan; Lai, Qifeng; Li, Wen; Yuan, Hong
2016-01-01
Indoor positioning has recently become an important field of interest because global navigation satellite systems (GNSS) are usually unavailable in indoor environments. Pedestrian dead reckoning (PDR) is a promising localization technique for indoor environments since it can be implemented on widely used smartphones equipped with low cost inertial sensors. However, PDR localization severely suffers from the accumulation of positioning errors, and other external calibration sources should be used. In this paper, a context-recognition-aided PDR localization model is proposed to calibrate PDR. The context is detected by employing particular human actions or characteristic objects and is matched to the context pre-stored offline in the database to get the pedestrian's location. The Hidden Markov Model (HMM) and Recursive Viterbi Algorithm are used to do the matching, which reduces the time complexity and saves storage. In addition, the authors design the turn detection algorithm and take the context of a corner as an example to illustrate and verify the proposed model. The experimental results show that the proposed localization method can fix the pedestrian's starting point quickly and improves the positioning accuracy of PDR by 40.56% at most with perfect stability and robustness at the same time. PMID:27916922
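The Viterbi matching step is the standard dynamic program over hidden states. A minimal Python version follows; the motion-context states, observations, and probabilities suggested in the note below are illustrative, not the paper's database.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state path for an observation sequence."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]
```

For instance, with two hypothetical contexts such as "straight" and "turn" and coarse sensor readings as observations, the recursion recovers the context sequence that best explains the readings.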
Zhu, Yanjie; Tan, Yongqing; Hua, Yanqing; Zhang, Guozhen; Zhang, Jianguo
2012-06-01
Chest radiologists rely on the segmentation and quantificational analysis of ground-glass opacities (GGO) to perform imaging diagnoses that evaluate the disease severity or recovery stages of diffuse parenchymal lung diseases. However, it is computationally difficult to segment and analyze patterns of GGO compared with other lung diseases, since GGO usually do not have clear boundaries. In this paper, we present a new approach which automatically segments GGO in lung computed tomography (CT) images using algorithms derived from Markov random field theory. Further, we systematically evaluate the performance of the algorithms in segmenting GGO in lung CT images under different situations. CT image studies from 41 patients with diffuse lung diseases were enrolled in this research. The local distributions were modeled with both simple and adaptive (AMAP) models of maximum a posteriori (MAP). For best segmentation, we used the simulated annealing algorithm with a Gibbs sampler to solve the combinatorial optimization problem of MAP estimators, and we applied a knowledge-guided strategy to reduce false positive regions. We achieved AMAP-based GGO segmentation results of 86.94%, 94.33%, and 94.06% in average sensitivity, specificity, and accuracy, respectively, and we evaluated the performance using radiologists' subjective evaluation and quantificational analysis and diagnosis. We also compared the results of AMAP-based GGO segmentation with those of support vector machine-based methods, and we discuss the reliability and other issues of AMAP-based GGO segmentation. Our research results demonstrate the acceptability and usefulness of AMAP-based GGO segmentation for assisting radiologists in detecting GGO in high-resolution CT diagnostic procedures.
Segmentation of lung lesions on CT scans using watershed, active contours, and Markov random field
Tan, Yongqiang; Schwartz, Lawrence H.; Zhao, Binsheng
2013-01-01
Purpose: Lung lesions vary considerably in size, density, and shape, and can attach to surrounding anatomic structures such as chest wall or mediastinum. Automatic segmentation of the lesions poses a challenge. This work communicates a new three-dimensional algorithm for the segmentation of a wide variety of lesions, ranging from tumors found in patients with advanced lung cancer to small nodules detected in lung cancer screening programs. Methods: The authors' algorithm uniquely combines the image processing techniques of marker-controlled watershed, geometric active contours, and Markov random fields (MRF). The user of the algorithm manually selects a region of interest encompassing the lesion on a single slice, and then the watershed method generates an initial surface of the lesion in three dimensions, which is refined by the geometric active contours. MRF improves the segmentation of ground glass opacity portions of part-solid lesions. The algorithm was tested on an anthropomorphic thorax phantom dataset and two publicly accessible clinical lung datasets. These clinical studies included a same-day repeat CT (prewalk and postwalk scans were performed within 15 min) dataset containing 32 lung lesions with one radiologist's delineated contours, and the first release of the Lung Image Database Consortium (LIDC) dataset containing 23 lung nodules with 6 radiologists' delineated contours. The phantom dataset contained 22 phantom nodules of known volumes that were inserted in a phantom thorax. Results: For the prewalk scans of the same-day repeat CT dataset and the LIDC dataset, the mean overlap ratios of lesion volumes generated by the computer algorithm and the radiologist(s) were 69% and 65%, respectively. For the two repeat CT scans, the intra-class correlation coefficient (ICC) was 0.998, indicating high reliability of the algorithm. The mean relative difference was −3% for the phantom dataset. Conclusions: The performance of this new segmentation
Karchin, Rachel; Cline, Melissa; Mandel-Gutfreund, Yael; Karplus, Kevin
2003-06-01
An important problem in computational biology is predicting the structure of the large number of putative proteins discovered by genome sequencing projects. Fold-recognition methods attempt to solve the problem by relating the target proteins to known structures, searching for template proteins homologous to the target. Remote homologs that may have significant structural similarity are often not detectable by sequence similarities alone. To address this, we incorporated predicted local structure, a generalization of secondary structure, into two-track profile hidden Markov models (HMMs). We did not rely on a simple helix-strand-coil definition of secondary structure, but experimented with a variety of local structure descriptions, following a principled protocol to establish which descriptions are most useful for improving fold recognition and alignment quality. On a test set of 1298 nonhomologous proteins, HMMs incorporating a 3-letter STRIDE alphabet improved fold recognition accuracy by 15% over amino-acid-only HMMs and 23% over PSI-BLAST, measured by ROC-65 numbers. We compared two-track HMMs to amino-acid-only HMMs on a difficult alignment test set of 200 protein pairs (structurally similar with 3-24% sequence identity). HMMs with a 6-letter STRIDE secondary track improved alignment quality by 62%, relative to DALI structural alignments, while HMMs with an STR track (an expanded DSSP alphabet that subdivides strands into six states) improved by 40% relative to CE.
NASA Astrophysics Data System (ADS)
Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun
2013-12-01
Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, some smoothing constraints, e.g., Tikhonov regularization, are usually applied. The Tikhonov method can maintain a globally smooth solution, but it blurs structure edges. In this paper we use a Huber-Markov random-field edge-protection method in the procedure of inverting three parameters: P-velocity, S-velocity, and density. The method avoids blurring structure edges and resists noise. For each parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within a layer to suppress noise and a linear one on the edges to avoid a fuzzy result. The effectiveness of our method is demonstrated by inverting synthetic data both without and with noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
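The Huber penalty behind such edge-protection schemes is quadratic near zero and linear in the tails. One common parameterization is sketched below; conventions for the scaling of the two branches vary across papers, so this is illustrative rather than the authors' exact function.

```python
def huber(x, delta):
    """Huber penalty: quadratic for |x| <= delta, linear beyond.

    The quadratic branch suppresses small (noise-like) differences within
    a layer; the linear branch penalizes large jumps only mildly, so sharp
    edges are not over-smoothed. The two branches join continuously and
    with matching slope at |x| = delta.
    """
    ax = abs(x)
    return x * x if ax <= delta else 2 * delta * ax - delta * delta
```

Compared with a pure quadratic (Tikhonov-style) penalty, the linear tail grows much more slowly for large differences, which is exactly what preserves structure edges.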
Nosedal-Sanchez, Alvaro; Jackson, Charles S.; Huerta, Gabriel
2016-07-20
A new test statistic for climate model evaluation has been developed that potentially mitigates some of the limitations that exist for observing and representing field and space dependencies of climate phenomena. Traditionally such dependencies have been ignored when climate models have been evaluated against observational data, which makes it difficult to assess whether any given model is simulating observed climate for the right reasons. The new statistic uses Gaussian Markov random fields for estimating field and space dependencies within a first-order grid point neighborhood structure. We illustrate the ability of Gaussian Markov random fields to represent empirical estimates of field and space covariances using "witch hat" graphs. We further use the new statistic to evaluate the tropical response of a climate model (CAM3.1) to changes in two parameters important to its representation of cloud and precipitation physics. Overall, the inclusion of dependency information did not alter significantly the recognition of those regions of parameter space that best approximated observations. However, there were some qualitative differences in the shape of the response surface that suggest how such a measure could affect estimates of model uncertainty.
NASA Astrophysics Data System (ADS)
Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca
2013-04-01
Area burned in tropical savannas of Brazil was mapped using MODIS-AQUA daily 250 m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm², and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm addresses each pixel as a time series and detects changes in the statistical properties of near-infrared (NIR) reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. NIR spectral reflectance changes between time segments, and post-change NIR reflectance values, are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside of a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned-unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1 km active fires and the 500 m burned area products, taking into account the differences in spatial resolution between the products.
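The change-point step can be illustrated with the exact dynamic program that PELT accelerates by pruning: segment a series by minimizing within-segment squared error plus a per-change penalty. The simplified O(n²) sketch below conveys the objective only, not the fire_CCI implementation.

```python
def change_points(x, penalty):
    """Optimal segmentation by dynamic programming (PELT adds pruning).

    Segment cost is the within-segment sum of squared deviations from the
    segment mean; each additional change point costs `penalty`.
    """
    n = len(x)

    def cost(i, j):                       # cost of segment x[i:j]
        seg = x[i:j]
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    F = [0.0] + [float("inf")] * n        # F[j] = best total cost of x[:j]
    last = [0] * (n + 1)                  # last[j] = start of final segment
    for j in range(1, n + 1):
        for i in range(j):
            c = F[i] + cost(i, j) + (penalty if i > 0 else 0.0)
            if c < F[j]:
                F[j], last[j] = c, i
    cps, j = [], n                        # recover change points by backtracking
    while last[j] > 0:
        j = last[j]
        cps.append(j)
    return sorted(cps)
```

A step change in a reflectance-like series is recovered as a single change point, while a flat series yields none, mirroring how burn dates are proposed only where the NIR statistics actually shift.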
Markov Chains for Random Urinalysis III: Daily Model and Drug Kinetics
1994-01-01
Approved for public release; distribution is unlimited. Navy Personnel Research and Development Center, San Diego, CA 92152-7250. Report NPRDC-TN-94-12, January 1994.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we examine the validity of OCT for identifying skin cancer through texture analysis based on Haralick texture features, fractal dimension, a Markov random field method, and complex directional features computed from different tissues. These features capture specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). We used an interval type-II fuzzy anisotropic diffusion algorithm for speckle-noise reduction in the OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was applied to evaluate the fractal dimension of skin probes. Markov random fields were used to improve classification quality. Additionally, a complex directional field computed by a local gradient method was used to further increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features can provide helpful information for discriminating tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and of tumors including Basal Cell Carcinoma (BCC), Malignant Melanoma (MM), and nevus. All images were acquired with our laboratory SD-OCT setup, based on a broadband light source delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and nevus.
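As a rough illustration of the Haralick step, the four features named above can be computed from a gray-level co-occurrence matrix (GLCM). This is a generic sketch on an invented quantized test patch, not the authors' pipeline; the offset and number of gray levels are arbitrary choices.

```python
import numpy as np

def glcm(q, dx=1, dy=0, levels=4):
    """Normalized symmetric co-occurrence matrix of a quantized image q."""
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    P = P + P.T                                  # make symmetric
    return P / P.sum()

def haralick(P):
    """Contrast, correlation, energy and homogeneity from a GLCM."""
    levels = P.shape[0]
    i, j = np.indices((levels, levels))
    mu = (i * P).sum()
    var = ((i - mu) ** 2 * P).sum()
    return dict(
        contrast=((i - j) ** 2 * P).sum(),
        correlation=((i - mu) * (j - mu) * P).sum() / (var + 1e-12),
        energy=(P ** 2).sum(),
        homogeneity=(P / (1.0 + (i - j) ** 2)).sum(),
    )

# a perfectly uniform patch has zero contrast and maximal homogeneity
flat = np.zeros((8, 8), dtype=int)
print(haralick(glcm(flat)))
```

In practice such features would be computed per direction and per OCT patch, then fed to a classifier.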
NASA Astrophysics Data System (ADS)
Bratsolis, E.; Sigelle, M.; Charou, E.
2016-10-01
Building detection has been a prominent topic in image classification. Most research effort is adapted to the specific application requirements and available datasets. Our dataset includes aerial orthophotos (with spatial resolution 20 cm), a DSM generated from LiDAR (with spatial resolution 1 m and elevation resolution 20 cm), and a DTM (spatial resolution 2 m) from an area of Athens, Greece. Our aim is to classify these data by means of Markov Random Fields (MRFs) in a Bayesian framework for building block extraction and to perform a comparative analysis with other supervised classification techniques, namely the Feed-Forward Neural Network (FFNN), Cascade-Correlation Neural Network (CCNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). We evaluated the performance of each method using a subset of the test area. We present the classified images and statistical measures (confusion matrix, kappa coefficient and overall accuracy). Our results demonstrate that the MRFs and FFNN perform better than the other methods.
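MAP labeling under an MRF prior is commonly approximated with iterated conditional modes (ICM). The sketch below is a generic Potts-prior example with invented unary costs and a toy two-class image, not the authors' exact Bayesian model.

```python
import numpy as np

def icm(unary, beta=1.0, iters=5):
    """Iterated conditional modes: greedily minimize per-pixel data cost
    plus a Potts penalty of beta for each disagreeing 4-neighbour."""
    labels = unary.argmin(axis=-1)            # start from per-pixel best class
    h, w, k = unary.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                cost = unary[y, x].copy()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cost += beta * (np.arange(k) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels

# toy example: class-1 square on a class-0 background; the data term
# weakly favours the wrong class at one interior pixel
truth = np.zeros((7, 7), int); truth[2:5, 2:5] = 1
unary = np.stack([(truth == 1).astype(float), (truth == 0).astype(float)], -1)
unary[3, 3] = [0.4, 0.6]                       # wrong class slightly favoured
print((unary.argmin(-1) == truth).mean())      # one pixel mislabeled
print((icm(unary, beta=1.0) == truth).mean())  # smoothing corrects it
```

The same scheme extends to more classes and to unary costs derived from orthophoto, DSM and DTM evidence.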
Monaco, James Peter; Madabhushi, Anant
2011-07-01
The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these--proposed by Marroquin--utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals we present an equally simple, but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
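At the decision stage, WMPM amounts to weighting the posterior marginals by per-class costs before taking the per-site argmax. The sketch below uses invented toy marginals and weights; it illustrates only the decision rule, not the authors' E-MCMC marginal estimator.

```python
import numpy as np

def wmpm_labels(marginals, weights):
    """Weighted MPM decision rule: argmax of class-weighted posterior
    marginals. Raising a class weight biases decisions toward that class,
    trading specificity for sensitivity on it."""
    return (np.asarray(marginals) * np.asarray(weights)).argmax(axis=-1)

# posterior marginals for four sites over (benign, cancer)
p = np.array([[0.9, 0.1], [0.6, 0.4], [0.55, 0.45], [0.2, 0.8]])
print(wmpm_labels(p, [1.0, 1.0]).tolist())  # plain MPM: [0, 0, 0, 1]
print(wmpm_labels(p, [1.0, 2.0]).tolist())  # favour cancer: [0, 1, 1, 1]
```

Sweeping the weight ratio traces out the operating points (paired sensitivity/specificity) discussed in the abstract.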
NASA Astrophysics Data System (ADS)
Rocha, G.; Pagano, L.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.
2010-04-01
We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, called Markov chain beam randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic “nuisance” parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.
NASA Astrophysics Data System (ADS)
Kolesnik, Alexander D.
2017-01-01
We consider the Markov random flight X(t), t > 0, in the three-dimensional Euclidean space R³ with constant finite speed c > 0 and the uniform choice of the initial and each new direction at random time instants that form a homogeneous Poisson flow of rate λ > 0. Series representations for the conditional characteristic functions of X(t), corresponding to two and three changes of direction, are obtained. Based on these results, an asymptotic formula, as t → 0, for the unconditional characteristic function of X(t) is derived. By inverting it, we obtain an asymptotic relation for the transition density of the process. We show that the error in this formula has the order o(t³) and, therefore, it gives a good approximation on small time intervals whose lengths depend on λ. An asymptotic formula, as t → 0, for the probability of being in a three-dimensional ball of radius r
NASA Astrophysics Data System (ADS)
Hu, B.; Li, P.
2013-07-01
Markov random field (MRF) models are an effective means of describing local spatial-temporal dependence in images and have been widely used in land cover classification and change detection. However, existing studies only use the pair-point clique (PPC) to describe spatial dependence between neighbouring pixels, which may not fully quantify complex spatial relations, particularly in high spatial resolution images. In this study, the multi-point clique (MPC) is adopted in the MRF model to quantitatively express spatial dependence among pixels. A modified least squares fit (LSF) method based on robust estimation is proposed to calculate potential parameters for MRF models of different types. The proposed MPC-MRF method is evaluated and quantitatively compared with the traditional PPC-MRF in urban land cover classification using high resolution hyperspectral HYDICE data of Washington DC. The experimental results revealed that the proposed MPC-MRF method outperformed the traditional PPC-MRF method in terms of classification details. The MPC-MRF provides a sophisticated way of describing complex spatial dependence for relevant applications.
Abdulbaqi, Hayder Saad; Jafri, Mohd Zubir Mat; Omar, Ahmad Fairuz; Mustafa, Iskandar Shahrim Bin; Abood, Loay Kadom
2015-04-24
Brain tumors are abnormal growths of tissue in the brain. They may arise in people of any age, and must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes for both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task, and accurate detection of tumor size and location plays a vital role in successful diagnosis and treatment. Brain tumor detection is considered a challenging task in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques: Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The proposed method developed in this research constructs a hybrid of HMRF and thresholding. These methods have been applied to four different patient data sets. Comparison among these methods shows that the proposed method gives good results for brain-tissue detection, and is more robust and effective than the FCM technique.
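For context, the FCM baseline can be sketched in a few lines. This is the standard fuzzy C-means update with an invented 1-D toy intensity set, not the authors' hybrid HMRF/threshold method.

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: alternate centroid and membership updates.
    m > 1 controls how fuzzy the cluster memberships are."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # random initial memberships
    for _ in range(iters):
        C = (U ** m).T @ X / (U ** m).sum(axis=0)[:, None]      # centroids
        d = np.linalg.norm(X[:, None, :] - C[None], axis=-1) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d ** -p / (d ** -p).sum(axis=1, keepdims=True)      # memberships
    return U, C

# two well-separated 1-D intensity clusters
X = np.array([[0.1], [0.15], [0.2], [0.8], [0.85], [0.9]])
U, C = fcm(X, k=2)
print(U.argmax(axis=1))
```

On image data, X would hold per-voxel intensities and the hard labels would come from the argmax of the membership matrix, as in the last line.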
Lu, Yisu; Jiang, Jun; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation. However, it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, with great potential for practical real-time clinical use. PMID:25254064
Blanchet, Juliette; Vignes, Matthieu
2009-03-01
The different measurement techniques that interrogate biological systems provide means for monitoring the behavior of virtually all cell components at different scales and from complementary angles. However, data generated in these experiments are difficult to interpret. A first difficulty arises from the high dimensionality and inherent noise of such data. Organizing them into meaningful groups is then highly desirable to improve our knowledge of biological mechanisms. A more accurate picture can be obtained when accounting for dependencies between components (e.g., genes) under study. A second difficulty arises from the fact that biological experiments often produce missing values. When not simply ignored, the latter issue is typically handled by imputing the expression matrix prior to applying traditional analysis methods. Although helpful, this practice can lead to unsound results. We propose in this paper a statistical methodology that integrates individual dependencies in a missing data framework. More explicitly, we present a clustering algorithm dealing with incomplete data in a Hidden Markov Random Field context. This tackles the missing value issue in a probabilistic framework and still allows us to reconstruct missing observations a posteriori without imposing any pre-processing of the data. Experiments on synthetic data validate the gain in using our method, and analysis of real biological data shows its potential to extract biological knowledge.
Robinson, Sean; Guyon, Laurent; Nevalainen, Jaakko; Toriseva, Mervi
2015-01-01
Organotypic, three-dimensional (3D) cell culture models of epithelial tumour types such as prostate cancer recapitulate key aspects of the architecture and histology of solid cancers. Morphometric analysis of multicellular 3D organoids is particularly important when additional components such as the extracellular matrix and tumour microenvironment are included in the model. The complexity of such models has so far limited their successful implementation. There is a great need for automatic, accurate and robust image segmentation tools to facilitate the analysis of such biologically relevant 3D cell culture models. We present a segmentation method based on Markov random fields (MRFs) and illustrate our method using 3D stack image data from an organotypic 3D model of prostate cancer cells co-cultured with cancer-associated fibroblasts (CAFs). The 3D segmentation output suggests that these cell types are in physical contact with each other within the model, which has important implications for tumour biology. Segmentation performance is quantified using ground truth labels and we show how each step of our method increases segmentation accuracy. We provide the ground truth labels along with the image data and code. Using independent image data we show that our segmentation method is also more generally applicable to other types of cellular microscopy and not only limited to fluorescence microscopy. PMID:26630674
NASA Astrophysics Data System (ADS)
Abdulbaqi, Hayder Saad; Jafri, Mohd Zubir Mat; Omar, Ahmad Fairuz; Mustafa, Iskandar Shahrim Bin; Abood, Loay Kadom
2015-04-01
Brain tumors, are an abnormal growth of tissues in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time consuming task. Size and location accurate detection of brain tumor plays a vital role in the successful diagnosis and treatment of tumors. Brain tumor detection is considered a challenging mission in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The proposed method has been developed in this research in order to construct hybrid method between (HMRF) and threshold. These methods have been applied on 4 different patient data sets. The result of comparison among these methods shows that the proposed method gives good results for brain tissue detection, and is more robust and effective compared with (FCM) techniques.
A Fast Variational Approach for Learning Markov Random Field Language Models
2015-01-01
…to either resort to local optimization methods, such as those used in neural language models, or work with heavily constrained distributions. …embeddings learned through neural language models. Central to the language modelling problem is the challenge… Proceedings of the 32nd International… of parameters. More recently neural language models (NLMs) have gained popularity (Bengio et al., 2006; Mnih & Hinton, 2007). These models estimate…
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
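A minimal discrete-time version of such a two-state neuron can be simulated directly. The transition probabilities below are invented for illustration; the stationary active fraction follows p_rec / (p_spike + p_rec) for constant rates.

```python
import numpy as np

def simulate(p_spike, p_recover, T=100_000, seed=0):
    """Two-state Markov neuron: a spike is the active -> refractory
    transition; recovery is the refractory -> active transition."""
    rng = np.random.default_rng(seed)
    state, spikes, active_steps = 1, 0, 0     # 1 = active, 0 = refractory
    for _ in range(T):
        active_steps += state
        if state == 1 and rng.random() < p_spike:
            state, spikes = 0, spikes + 1
        elif state == 0 and rng.random() < p_recover:
            state = 1
    return spikes / T, active_steps / T

rate, frac_active = simulate(0.02, 0.1)
print(frac_active)   # close to 0.1 / 0.12, the stationary active fraction
```

In the model above, the constant probabilities would be replaced by state-dependent rates driven by the excitatory and inhibitory population inputs.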
NASA Astrophysics Data System (ADS)
Foulkes, Stephen B.; Booth, David M.
1997-07-01
Object segmentation is the process by which a mask is generated which identifies the area of an image occupied by an object. Many object recognition techniques depend on the quality of such masks for shape and underlying brightness information; however, segmentation remains notoriously unreliable. This paper considers how the image restoration technique of Geman and Geman can be applied to the improvement of object segmentations generated by a locally adaptive background subtraction technique. Also presented is how an artificial neural network hybrid, consisting of a single-layer Kohonen network with each of its nodes connected to a different multi-layer perceptron, can be used to approximate the image restoration process. It is shown that the restoration techniques are very well suited for parallel processing and in particular the artificial neural network hybrid has the potential for near real-time image processing. Results are presented for the detection of ships in SPOT panchromatic imagery and the detection of vehicles in infrared linescan images, these being a fair representation of the wider class of problem.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
NASA Astrophysics Data System (ADS)
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-08-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
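The sampling scheme can be sketched as a mask generator: pick random center pixels, then sample nearby pixels with a probability that decays with distance. The decay law (p_near / d), the radius and all parameters below are invented stand-ins for the paper's tuned choices.

```python
import numpy as np

def localized_mask(shape, n_centers, radius=3, p_near=0.5, seed=0):
    """Localized random sampling: choose random centers, then sample each
    pixel within `radius` of a center with distance-decaying probability."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, bool)
    H, W = shape
    for _ in range(n_centers):
        cy, cx = int(rng.integers(H)), int(rng.integers(W))
        mask[cy, cx] = True                       # always keep the center
        for y in range(max(0, cy - radius), min(H, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(W, cx + radius + 1)):
                d = np.hypot(y - cy, x - cx)
                if 0 < d <= radius and rng.random() < p_near / d:
                    mask[y, x] = True
    return mask

mask = localized_mask((32, 32), n_centers=20)
print(mask.sum(), "of", mask.size, "pixels sampled")
```

Each True entry would correspond to one linear measurement in the CS reconstruction, with the clustered geometry mimicking a receptive field.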
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-09-16
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-01-01
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot’s pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area. PMID:26389914
Lin, Yen-Jen; Chen, Yu-Tin; Hsu, Shu-Ni; Peng, Chien-Hua; Tang, Chuan-Yi; Yen, Tzu-Chen; Hsieh, Wen-Ping
2014-01-01
Copy number variation (CNV) has been reported to be associated with disease and various cancers. Hence, identifying the accurate position and the type of CNV is currently a critical issue. There are many tools for detecting CNV regions, constructing haplotype phases on CNV regions, or estimating numerical copy numbers. However, none of them can perform all three tasks at the same time. This paper presents a method based on a Hidden Markov Model to detect parent-specific copy number change on both chromosomes with signals from SNP arrays. A haplotype tree is constructed with dynamic branch merging to model the transition of the copy number status of the two alleles assessed at each SNP locus. The emission models are constructed for the genotypes formed with the two haplotypes. The proposed method can provide the segmentation points of the CNV regions as well as the haplotype phasing for the allelic status on each chromosome. The estimated copy numbers are provided as fractional numbers, which can accommodate the somatic mutation in cancer specimens that usually consist of heterogeneous cell populations. The algorithm is evaluated on simulated data and the previously published regions of CNV of the 270 HapMap individuals. The results were compared with five popular methods: PennCNV, genoCN, COKGEN, QuantiSNP and cnvHap. The application on oral cancer samples demonstrates how the proposed method can facilitate clinical association studies. The proposed algorithm exhibits sensitivity for CNV regions comparable to the best algorithm in our genome-wide study and demonstrates the highest detection rate in SNP-dense regions. In addition, we provide better haplotype phasing accuracy than similar approaches. The clinical association carried out with our fractional estimate of copy numbers in the cancer samples provides better detection power than that with integer copy number states.
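The hidden-state decoding at the heart of such HMM methods can be illustrated with a standard Viterbi sketch over toy copy-number states. The sticky transition matrix and the emission probabilities below are invented; the paper's haplotype-tree model is far richer than this two-state example.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely hidden-state path (e.g. copy-number states along loci)."""
    n, k = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((n, k), int)
    for t in range(1, n):
        cand = score[:, None] + log_trans          # (from_state, to_state)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]                   # backtrack the best path
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy 2-state model: state 1 = copy-number gain, sticky transitions
log_trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
log_init = np.log(np.array([0.5, 0.5]))
# signal favours state 0, then state 1 for four loci, then state 0 again
e = np.log(np.array([[0.9, 0.1]] * 4 + [[0.1, 0.9]] * 4 + [[0.9, 0.1]] * 4))
print(viterbi(e, log_trans, log_init))
```

The decoded block of state 1 marks the putative CNV segment, with its endpoints serving as segmentation points.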
Mina, Marco; Guzzi, Pietro Hiram
2014-01-01
The analysis of protein behavior at the network level has been applied to elucidate the mechanisms of protein interaction that are similar in different species. Published network alignment algorithms have proved able to recapitulate known conserved modules and protein complexes, and to infer new conserved interactions confirmed by wet lab experiments. In the meantime, however, a plethora of continuously evolving protein-protein interaction (PPI) data sets have been developed, each featuring different levels of completeness and reliability. For instance, algorithm performance may vary significantly when changing the data set used in the assessment. Moreover, existing papers have not deeply investigated the robustness of alignment algorithms. In this work, we design an extensive assessment of current algorithms, discussing the robustness of the results on the basis of the input networks. We also present AlignMCL, a local network alignment algorithm based on an improved model of the alignment graph and Markov Clustering. AlignMCL performs better than other state-of-the-art local alignment algorithms over different updated data sets. In addition, AlignMCL features high levels of robustness, producing similar results regardless of the selected data set.
Localized motion in random matrix decomposition of complex financial systems
NASA Astrophysics Data System (ADS)
Jiang, Xiong-Fei; Zheng, Bo; Ren, Fei; Qiu, Tian
2017-04-01
Using random matrix theory, we decompose the multi-dimensional time series of complex financial systems into a set of orthogonal eigenmode functions, which are classified into the market mode, sector mode, and random mode. In particular, the localized motion generated by the business sectors plays an important role in financial systems. Both the business sectors and their impact on the stock market are identified from the localized motion. We clarify that the localized motion induces different characteristics in the time correlations of the stock-market index and individual stocks. With a variation of a two-factor model, we reproduce the return-volatility correlations of the eigenmodes.
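The market-mode extraction can be illustrated on synthetic returns: diagonalize the correlation matrix and read off the largest-eigenvalue mode, which loads nearly uniformly on all stocks. The one-factor toy data below is invented; it stands in for real returns and is not the authors' two-factor model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 20
market = rng.standard_normal(T)                    # common market factor
returns = 0.8 * market[:, None] + rng.standard_normal((T, N))

C = np.corrcoef(returns, rowvar=False)             # N x N correlation matrix
evals, evecs = np.linalg.eigh(C)                   # eigenvalues ascending
market_mode = evecs[:, -1]                         # largest-eigenvalue mode
print(evals[-1])                                   # well above the RMT bulk
print(market_mode.round(2))                        # near-uniform loadings
```

Sector modes would appear as further eigenvalues above the random-matrix bulk, with eigenvectors localized on groups of correlated stocks.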
Menke, Matt; Berger, Bonnie; Cowen, Lenore
2010-01-01
The recent explosion in newly sequenced bacterial genomes is outpacing the capacity of researchers to try to assign functional annotation to all the new proteins. Hence, computational methods that can help predict structural motifs provide increasingly important clues in helping to determine how these proteins might function. We introduce a Markov Random Field approach tailored for recognizing proteins that fold into mainly β-structural motifs, and apply it to build recognizers for the β-propeller shapes. As an application, we identify a potential class of hybrid two-component sensor proteins, that we predict contain a double-propeller domain. PMID:20147619
Protein localization prediction using random walks on graphs
2013-01-01
Background: Understanding the localization of proteins in cells is vital to characterizing their functions and possible interactions. As a result, identifying the (sub)cellular compartment within which a protein is located becomes an important problem in protein classification. This classification issue thus involves predicting labels in a dataset with a limited number of labeled data points available. By utilizing a graph representation of protein data, random walk techniques have performed well in sequence classification and functional prediction; however, this method has not yet been applied to protein localization. Accordingly, we propose a novel classifier in the site prediction of proteins based on random walks on a graph. Results: We propose a graph theory model for predicting protein localization using data generated in yeast and gram-negative (Gneg) bacteria. We tested the performance of our classifier on the two datasets, optimizing the model training parameters by varying the laziness values and the number of steps taken during the random walk. Using 10-fold cross-validation, we achieved an accuracy of above 61% for yeast data and about 93% for gram-negative bacteria. Conclusions: This study presents a new classifier derived from the random walk technique and applies this classifier to investigate the cellular localization of proteins. The prediction accuracy and additional validation demonstrate an improvement over previous methods, such as support vector machine (SVM)-based classifiers. PMID:23815126
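The lazy-random-walk idea can be sketched on a toy graph: label mass diffuses from seed nodes through a lazy transition matrix, and unlabeled nodes are scored by the mass they accumulate. The graph, laziness value and step count below are invented, matching only the two tuning parameters named in the abstract.

```python
import numpy as np

def random_walk_scores(A, seeds, laziness=0.5, steps=5):
    """Lazy random walk on a graph: with probability `laziness` stay put,
    otherwise move to a random neighbour; return accumulated label mass."""
    P = A / A.sum(axis=1, keepdims=True)            # row-stochastic transitions
    W = laziness * np.eye(len(A)) + (1 - laziness) * P
    mass = seeds.astype(float)
    for _ in range(steps):
        mass = mass @ W
    return mass

# two triangles (nodes 0-2 and 3-5) joined by the edge 2-3;
# seed one node of the left triangle
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], float)
scores = random_walk_scores(A, seeds=np.array([1, 0, 0, 0, 0, 0]))
print(scores.round(3))   # left-triangle nodes outscore right-triangle nodes
```

For localization prediction, one walk per compartment label would be run from its labeled proteins, and each unlabeled protein assigned the label whose walk delivers the most mass.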
Many-body localization due to random interactions
NASA Astrophysics Data System (ADS)
Sierant, Piotr; Delande, Dominique; Zakrzewski, Jakub
2017-02-01
The possibility of observing many-body localization of ultracold atoms in a one-dimensional optical lattice is discussed for random interactions. In the noninteracting limit, such a system reduces to single-particle physics in the absence of disorder, i.e., to extended states. In effect, the observed localization is inherently due to interactions and is thus a genuine many-body effect. In the system studied, many-body localization manifests itself in a lack of thermalization visible in temporal propagation of a specially prepared initial state, in transport properties, in the logarithmic growth of entanglement entropy, and in statistical properties of energy levels.
Random matrix analysis of localization properties of gene coexpression network.
Jalan, Sarika; Solymosi, Norbert; Vattay, Gábor; Li, Baowen
2010-04-01
We analyze the gene coexpression network under the random matrix theory (RMT) framework. The nearest-neighbor spacing distribution of the adjacency matrix of this network follows Gaussian orthogonal ensemble statistics of random matrix theory. The spectral rigidity test follows the random matrix prediction for a certain range and deviates afterwards. Eigenvector analysis of the network using the inverse participation ratio (IPR) suggests that the statistics of the bulk of the eigenvalues of the network are consistent with those of a real symmetric random matrix, whereas a few eigenvalues are localized. Based on these IPR calculations, we can divide the eigenvalues into three sets: (a) the nondegenerate part that follows RMT; (b) the nondegenerate part, at both ends and at intermediate eigenvalues, which deviates from RMT and is expected to contain information about important nodes in the network; (c) the degenerate part with zero eigenvalue, which fluctuates around the RMT-predicted value. We identify the nodes corresponding to the dominant modes of these eigenvectors and analyze their structural properties.
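The two diagnostics used here, nearest-neighbor spacings and the inverse participation ratio, are straightforward to compute for any symmetric matrix. A minimal sketch on a GOE-like sample (illustrative only, not the paper's pipeline; the crude global unfolding below is an assumption):

```python
import numpy as np

def ipr(eigvecs):
    """Inverse participation ratio of each (column) eigenvector.

    IPR ~ 1/N for extended (ergodic) states, O(1) for localized ones.
    """
    p = eigvecs ** 2                  # probability weight per component
    return (p ** 2).sum(axis=0)

def spacing_ratios(eigvals):
    """Nearest-neighbour spacings, crudely unfolded by the global mean spacing."""
    s = np.diff(np.sort(eigvals))
    return s / s.mean()

rng = np.random.default_rng(0)
n = 400
a = rng.normal(size=(n, n))
goe = (a + a.T) / 2                   # symmetric random matrix (GOE-like)
vals, vecs = np.linalg.eigh(goe)
mean_ipr = ipr(vecs).mean()           # for GOE-like eigenvectors, <IPR> = 3/(n+2)
```

For a real network one would substitute its adjacency matrix for `goe`; localized modes then stand out as eigenvectors whose IPR is far above the 3/(n+2) baseline.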
Localization of disordered bosons and magnets in random fields
Yu, Xiaoquan; Müller, Markus
2013-10-15
We study localization properties of disordered bosons and spins in random fields at zero temperature. We focus on two representatives of different symmetry classes, hard-core bosons (XY magnets) and Ising magnets in random transverse fields, and contrast their physical properties. We describe localization properties using a locator expansion on general lattices. For 1d Ising chains, we find non-analytic behavior of the localization length as a function of energy at ω=0, ξ⁻¹(ω) = ξ⁻¹(0) + A|ω|^α, with α vanishing at criticality. This contrasts with the much smoother behavior predicted for XY magnets. We use these results to approach the ordering transition on Bethe lattices of large connectivity K, which mimic the limit of high dimensionality. In both models, in the paramagnetic phase with uniform disorder, the localization length is found to have a local maximum at ω=0. For the Ising model, we find activated scaling at the phase transition, in agreement with infinite randomness studies. In the Ising model long range order is found to arise due to a delocalization and condensation initiated at ω=0, without a closing mobility gap. We find that Ising systems establish order on much sparser (fractal) subgraphs than XY models. Possible implications of these results for finite-dimensional systems are discussed. -- Highlights: •Study of localization properties of disordered bosons and spins in random fields. •Comparison between XY magnets (hard-core bosons) and Ising magnets. •Analysis of the nature of the magnetic transition in strong quenched disorder. •Ising magnets: activated scaling, no closing mobility gap at the transition. •Ising order emerges on sparser (fractal) support than XY order.
Randomized study of phentolamine mesylate for reversal of local anesthesia.
Laviola, M; McGavin, S K; Freer, G A; Plancich, G; Woodbury, S C; Marinkovich, S; Morrison, R; Reader, A; Rutherford, R B; Yagiela, J A
2008-07-01
Local anesthetic solutions frequently contain vasoconstrictors to increase the depth and/or duration of anesthesia. Generally, the duration of soft-tissue anesthesia exceeds that of pulpal anesthesia. Negative consequences of soft-tissue anesthesia include accidental lip and tongue biting as well as difficulty in eating, drinking, speaking, and smiling. A double-blind, randomized, multicenter, Phase 2 study tested the hypothesis that local injection of the vasodilator phentolamine mesylate would shorten the duration of soft-tissue anesthesia following routine dental procedures. Participants (122) received one or two cartridges of local anesthetic/vasoconstrictor prior to dental treatment. Immediately after treatment, 1.8 mL of study drug (containing 0.4 mg phentolamine mesylate or placebo) was injected per cartridge of local anesthetic used. The phentolamine was well-tolerated and reduced the median duration of soft-tissue anesthesia in the lip from 155 to 70 min (p < 0.0001).
NASA Astrophysics Data System (ADS)
Wang, Kang-Ning; Sun, Zan-Dong; Dong, Ning
2015-12-01
Economic shale gas production requires hydraulic fracture stimulation to increase the formation permeability. Hydraulic fracturing strongly depends on geomechanical parameters such as Young's modulus and Poisson's ratio. Fracture-prone sweet spots can be predicted by prestack inversion, which is an ill-posed problem; thus, regularization is needed to obtain unique and stable solutions. To characterize gas-bearing shale sedimentary bodies, elastic parameter variations are regarded as an anisotropic Markov random field. Bayesian statistics are adopted to recast the prestack inversion as a maximum a posteriori probability problem. Two energy functions for the lateral and vertical directions are used to describe the distribution, and the expectation-maximization algorithm is used to estimate the hyperparameters of the prior probability of elastic parameters. Finally, the inversion yields clear geological boundaries, high vertical resolution, and reasonable lateral continuity using the conjugate gradient method to minimize the objective function. The noise robustness and imaging ability of the method were tested using synthetic and real data.
Local Spin Relaxation within the Random Heisenberg Chain
NASA Astrophysics Data System (ADS)
Herbrych, J.; Kokalj, J.; Prelovšek, P.
2013-10-01
Finite-temperature local dynamical spin correlations Snn(ω) are studied numerically within the random spin-1/2 antiferromagnetic Heisenberg chain. The aim is to explain measured NMR spin-lattice relaxation times in BaCu2(Si0.5Ge0.5)2O7, which is a realization of a random spin chain. In agreement with experiments, we find that the distribution of relaxation times within the model shows a very large span, similar to the stretched-exponential form. The distribution is strongly reduced with increasing T, but stays finite also in the high-T limit. Anomalous dynamical correlations can be associated with the random-singlet concept but not directly with static quantities. Our results also reveal the crucial role of the spin anisotropy (interaction), since the behavior contrasts with that of the XX model, where we do not find any significant T dependence of the distribution.
Non-local MRI denoising using random sampling.
Hu, Jinrong; Zhou, Jiliu; Wu, Xi
2016-09-01
In this paper, we propose a random sampling non-local mean (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while producing competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method strikes a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's.
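To make the sampling idea concrete, here is a toy 2D version (the paper works on 3D MRI volumes and adds structure-tensor-guided sampling; the function names and defaults below are assumptions) that compares each reference patch against only a random subset of the search window:

```python
import numpy as np

def snlm_denoise(img, patch=3, search=7, ratio=0.05, h=0.1, seed=0):
    """Non-local means with a random subset of the search window.

    Instead of rastering over the whole (search x search) window,
    only ceil(ratio * window_size) candidate pixels are compared.
    """
    rng = np.random.default_rng(seed)
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode='reflect')
    out = np.zeros_like(img)
    n_cand = max(1, int(np.ceil(ratio * search * search)))

    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + r + s, j + r + s              # centre in padded image
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            # randomly chosen candidate offsets inside the search window
            di = rng.integers(-s, s + 1, n_cand)
            dj = rng.integers(-s, s + 1, n_cand)
            num = den = 0.0
            for a, b in zip(di, dj):
                cand = pad[ci + a - r:ci + a + r + 1, cj + b - r:cj + b + r + 1]
                w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                num += w * pad[ci + a, cj + b]
                den += w
            out[i, j] = num / den
    return out
```

The inner loop cost drops from `search**2` to `n_cand` patch comparisons per pixel, which is where the reported 1/20 running time comes from at ξ = 0.05.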
Localization of random acoustic sources in an inhomogeneous medium
NASA Astrophysics Data System (ADS)
Khazaie, Shahram; Wang, Xun; Sagaut, Pierre
2016-12-01
In this paper, the localization of a random sound source via different source localization methods is considered, the emphasis being put on the robustness and the accuracy of classical methods in the presence of uncertainties. The sound source position is described by a random variable and the sound propagation medium is assumed to have spatially varying parameters with known values. Two approaches are used for the source identification: time reversal and beamforming. The probability density functions of the random source position are estimated using both methods. The focal spot resolutions of the time reversal estimates are also evaluated. In the numerical simulations, two media with different correlation lengths are investigated to account for two different scattering regimes: one has a correlation length relatively larger than the wavelength and the other has a correlation length comparable to the wavelength. The results show that the required sound propagation time and source estimation robustness highly depend on the ratio between the correlation length and the wavelength. It is observed that source identification methods have different robustness in the presence of uncertainties. Advantages and weaknesses of each method are discussed.
Adaptive Local Information Transfer in Random Boolean Networks.
Haruna, Taichi
2017-01-01
Living systems such as gene regulatory networks and neuronal networks have been supposed to work close to dynamical criticality, where their information-processing ability is optimal at the whole-system level. We investigate how this global information-processing optimality is related to the local information transfer at each individual-unit level. In particular, we introduce an internal adjustment process of the local information transfer and examine whether the former can emerge from the latter. We propose an adaptive random Boolean network model in which each unit rewires its incoming arcs from other units to balance stability of its information processing based on the measurement of the local information transfer pattern. First, we show numerically that random Boolean networks can self-organize toward near dynamical criticality in our model. Second, the proposed model is analyzed by a mean-field theory. We recognize that the rewiring rule has a bootstrapping feature. The stationary indegree distribution is calculated semi-analytically and is shown to be close to dynamical criticality in a broad range of model parameter values.
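For readers unfamiliar with the underlying model, a classical random Boolean network with K inputs per node (without this paper's adaptive rewiring rule) can be simulated in a few lines; the names and sizes below are illustrative:

```python
import numpy as np

def rbn_step(state, inputs, tables):
    """One synchronous update of a random Boolean network.

    state  : length-n 0/1 array
    inputs : (n, k) indices of each node's k regulators
    tables : (n, 2**k) random Boolean functions, one row per node
    """
    n, k = inputs.shape
    # encode each node's regulator states as an integer in [0, 2**k)
    idx = (state[inputs] * (2 ** np.arange(k))).sum(axis=1)
    return tables[np.arange(n), idx]

rng = np.random.default_rng(0)
n, k = 32, 2                          # K = 2 is the classical critical connectivity
inputs = rng.integers(0, n, size=(n, k))
tables = rng.integers(0, 2, size=(n, 2 ** k))
state = rng.integers(0, 2, size=n)
for _ in range(10):
    state = rbn_step(state, inputs, tables)
```

The adaptive model described above would, in addition, measure information transfer along each arc and rewire `inputs` between updates; that measurement step is omitted here.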
Localization in Interacting Fermionic Chains with Quasi-Random Disorder
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri
2017-04-01
We consider a system of fermions with a quasi-random almost-Mathieu disorder interacting through a many-body short range potential. We establish exponential decay of the zero temperature correlations, indicating localization of the interacting ground state, for weak hopping and interaction and almost everywhere in the frequency and phase; this extends the analysis in Mastropietro (Commun Math Phys 342(1):217-250, 2016) to chemical potentials outside spectral gaps. The proof is based on Renormalization Group and it is inspired by techniques developed to deal with KAM Lindstedt series.
Li, Hong-Dong; Xu, Qing-Song; Liang, Yi-Zeng
2012-08-31
The identification of disease-relevant genes represents a challenge in microarray-based disease diagnosis where the sample size is often limited. Among established methods, reversible jump Markov Chain Monte Carlo (RJMCMC) methods have proven to be quite promising for variable selection. However, the design and application of an RJMCMC algorithm requires, for example, special criteria for prior distributions. Also, the simulation from joint posterior distributions of models is computationally extensive, and may even be mathematically intractable. These disadvantages may limit the applications of RJMCMC algorithms. Therefore, the development of algorithms that possess the advantages of RJMCMC methods and are also efficient and easy to follow for selecting disease-associated genes is required. Here we report an RJMCMC-like method, called random frog, that possesses the advantages of RJMCMC methods and is much easier to implement. Using the colon and the estrogen gene expression datasets, we show that random frog is effective in identifying discriminating genes. The top two ranked genes are Z50753 and U00968 for the colon dataset, and Y10871_at and Z22536_at for the estrogen dataset. (The source codes with GNU General Public License Version 2.0 are freely available to non-commercial users at: http://code.google.com/p/randomfrog/.)
Local random potentials of high differentiability to model the Landscape
Battefeld, T.; Modi, C.
2015-03-09
We generate random functions locally via a novel generalization of Dyson Brownian motion, such that the functions are in a desired differentiability class C^k, while ensuring that the Hessian is a member of the Gaussian orthogonal ensemble (other ensembles might be chosen if desired). Potentials in such higher differentiability classes (k ≥ 2) are required/desirable to model string theoretical landscapes, for instance to compute cosmological perturbations (e.g., k = 2 for the power-spectrum) or to search for minima (e.g., suitable de Sitter vacua for our universe). Since potentials are created locally, numerical studies become feasible even if the dimension of field space is large (D ∼ 100). In addition to the theoretical prescription, we provide some numerical examples to highlight properties of such potentials; concrete cosmological applications will be discussed in companion publications.
Localization transition of stiff directed lines in random media.
Boltz, Horst-Holger; Kierfeld, Jan
2012-12-01
We investigate the localization of stiff directed lines with bending energy by a short-range random potential. Using perturbative arguments, Flory arguments, and a replica calculation, we show that a stiff directed line in 1+d dimensions undergoes a localization transition with increasing disorder for d>2/3. We demonstrate that this transition is accessible by numerical transfer matrix calculations in 1+1 dimensions and analyze the properties of the disorder-dominated phase. On the basis of the two-replica problem, we propose a relation between the localization of stiff directed lines in 1+d dimensions and of directed lines under tension in 1+3d dimensions, which is strongly supported by identical free energy distributions. This shows that pair interactions in the replicated Hamiltonian determine the nature of directed line localization transitions with consequences for the critical behavior of the Kardar-Parisi-Zhang (KPZ) equation. Furthermore, we quantify how the persistence length of the stiff directed line is reduced by disorder.
Raberto, Marco; Rapallo, Fabio; Scalas, Enrico
2011-01-01
In this paper, we outline a model of graph (or network) dynamics based on two ingredients. The first ingredient is a Markov chain on the space of possible graphs. The second ingredient is a semi-Markov counting process of renewal type. The model consists in subordinating the Markov chain to the semi-Markov counting process. In simple words, this means that the chain transitions occur at random time instants called epochs. The model is quite rich and its possible connections with algebraic geometry are briefly discussed. Moreover, for the sake of simplicity, we focus on the space of undirected graphs with a fixed number of nodes. However, in an example, we present an interbank market model where it is meaningful to use directed graphs or even weighted graphs. PMID:21887245
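The subordination construction, chain transitions fired at the epochs of a renewal process, can be sketched as follows (a two-state embedded chain with Pareto-type waiting times stands in for the graph-valued chain; all names are illustrative):

```python
import numpy as np

def subordinated_chain(P, t_max, wait_sampler, state0=0, seed=0):
    """Simulate a Markov chain whose transitions occur at random epochs.

    P            : row-stochastic transition matrix of the embedded chain
    wait_sampler : callable rng -> positive inter-epoch waiting time
    Returns the list of (epoch_time, state) pairs up to t_max.
    """
    rng = np.random.default_rng(seed)
    t, state = 0.0, state0
    path = [(t, state)]
    while True:
        t += wait_sampler(rng)                    # semi-Markov counting process
        if t > t_max:
            break
        state = rng.choice(len(P), p=P[state])    # embedded Markov transition
        path.append((t, state))
    return path

# embedded chain on two states; heavy-tailed (Pareto-type) renewal waits
P = np.array([[0.2, 0.8],
              [0.5, 0.5]])
pareto_wait = lambda rng: rng.pareto(1.5) + 1.0
path = subordinated_chain(P, t_max=100.0, wait_sampler=pareto_wait)
```

In the paper the state space is the set of undirected graphs on a fixed node set; here a two-state chain keeps the subordination mechanism visible without the graph bookkeeping.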
No-signaling, perfect bipartite dichotomic correlations and local randomness
Seevinck, M. P.
2011-03-28
The no-signaling constraint on bi-partite correlations is reviewed. It is shown that in order to obtain non-trivial Bell-type inequalities that discern no-signaling correlations from more general ones, one must go beyond considering expectation values of products of observables only. A new set of nontrivial no-signaling inequalities is derived which have a remarkably close resemblance to the CHSH inequality, yet are fundamentally different. A set of inequalities by Roy and Singh and Avis et al., which is claimed to be useful for discerning no-signaling correlations, is shown to be trivially satisfied by any correlation whatsoever. Finally, using the set of newly derived no-signaling inequalities a result with potential cryptographic consequences is proven: if different parties use identical devices, then, once they have perfect correlations at spacelike separation between dichotomic observables, they know that because of no-signaling the local marginals cannot but be completely random.
Kulkarni, Ramaprasad; Tuller, Markus; Fink, Wolfgang; Wildenschild, Dorthe
2012-07-27
Advancements in noninvasive imaging methods such as X-ray computed tomography (CT) have led to a recent surge of applications in porous media research with objectives ranging from theoretical aspects of pore-scale fluid and interfacial dynamics to practical applications such as enhanced oil recovery and advanced contaminant remediation. While substantial efforts and resources have been devoted to advance CT technology, microscale analysis, and fluid dynamics simulations, the development of efficient and stable three-dimensional multiphase image segmentation methods applicable to large data sets is lacking. To eliminate the need for wet-dry or dual-energy scans, image alignment, and subtraction analysis, commonly applied in X-ray micro-CT, a segmentation method based on a Bayesian Markov random field (MRF) framework amenable to true three-dimensional multiphase processing was developed and evaluated. Furthermore, several heuristic and deterministic combinatorial optimization schemes required to solve the labeling problem of the MRF image model were implemented and tested for computational efficiency and their impact on segmentation results. Test results for three grayscale data sets consisting of dry glass beads, partially saturated glass beads, and partially saturated crushed tuff obtained with synchrotron X-ray micro-CT demonstrate great potential of the MRF image model for three-dimensional multiphase segmentation. While our results are promising and the developed algorithm is stable and computationally more efficient than other commonly applied porous media segmentation models, further potential improvements exist for fully automated operation.
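As one concrete example of the combinatorial optimization schemes mentioned above, iterated conditional modes (ICM) greedily minimizes a Potts-MRF labeling energy. The sketch below is illustrative, not the authors' code: a Gaussian data term around per-phase grayscale means plus a 4-neighbour smoothness penalty, on a 2D slice rather than a full 3D volume:

```python
import numpy as np

def icm_segment(img, means, sigma=0.1, beta=1.0, n_iter=5):
    """Iterated conditional modes for Potts-MRF image segmentation.

    Each pixel label minimizes a data term (Gaussian likelihood around a
    class mean) plus beta times the number of disagreeing 4-neighbours.
    """
    # initialize each pixel with the nearest class mean
    labels = np.abs(img[..., None] - np.asarray(means)).argmin(-1)
    h, w = img.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for c, m in enumerate(means):
                    e = (img[i, j] - m) ** 2 / (2 * sigma ** 2)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (labels[ni, nj] != c)  # Potts prior
                    if e < best_e:
                        best, best_e = c, e
                labels[i, j] = best
    return labels
```

ICM converges to a local minimum only; the heuristic and deterministic schemes compared in the paper trade this speed against better minima of the same MRF energy.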
Shi, Xu; Barnes, Robert O.; Chen, Li; Shajahan-Haq, Ayesha N.; Hilakivi-Clarke, Leena; Clarke, Robert; Wang, Yue; Xuan, Jianhua
2015-01-01
Summary: Identification of protein interaction subnetworks is an important step to help us understand complex molecular mechanisms in cancer. In this paper, we develop a BMRF-Net package, implemented in Java and C++, to identify protein interaction subnetworks based on a bagging Markov random field (BMRF) framework. By integrating gene expression data and protein–protein interaction data, this software tool can be used to identify biologically meaningful subnetworks. A user friendly graphic user interface is developed as a Cytoscape plugin for the BMRF-Net software to deal with the input/output interface. The detailed structure of the identified networks can be visualized in Cytoscape conveniently. The BMRF-Net package has been applied to breast cancer data to identify significant subnetworks related to breast cancer recurrence. Availability and implementation: The BMRF-Net package is available at http://sourceforge.net/projects/bmrfcjava/. The package is tested under Ubuntu 12.04 (64-bit), Java 7, glibc 2.15 and Cytoscape 3.1.0. Contact: xuan@vt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25755273
Ashraf, Ahmed B; Gavenonis, Sara; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina
2011-01-01
We present a multichannel extension of Markov random fields (MRFs) for incorporating multiple feature streams in the MRF model. We prove that for making inference queries, any multichannel MRF can be reduced to a single-channel MRF provided the features in different channels are conditionally independent given the hidden variable. Using this result, we incorporate kinetic feature maps derived from breast DCE MRI into the observation model of the MRF for tumor segmentation. Our algorithm achieves an ROC AUC of 0.97 for tumor segmentation. We present a comparison against the commonly used approach of fuzzy C-means (FCM) and the more recent method of running FCM on enhancement variance features (FCM-VES). These previous methods give lower AUCs of 0.86 and 0.60, respectively, indicating the superiority of our algorithm. Finally, we investigate the effect of superior segmentation on predicting breast cancer recurrence using kinetic DCE MRI features from the segmented tumor regions. A linear prediction model shows significant prediction improvement when segmenting the tumor using the proposed method, yielding a correlation coefficient of r = 0.78 (p < 0.05) with validated cancer recurrence probabilities, compared to 0.63 and 0.45 when using FCM and FCM-VES, respectively.
Nielsen, Rasmus
2017-01-01
Admixture, the mixing of genomes from divergent populations, is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which are not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy, i.e., 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than has previously been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry
Randomized discrepancy bounded local search for transmission expansion planning
Bent, Russell W; Daniel, William B
2010-11-23
In recent years the transmission network expansion planning problem (TNEP) has become increasingly complex. As the TNEP is a non-linear and non-convex optimization problem, researchers have traditionally focused on approximate models of power flows to solve the TNEP. Existing approaches are often tightly coupled to the approximation choice. Until recently these approximations have produced results that are straightforward to adapt to the more complex (real) problem. However, the power grid is evolving towards a state where the adaptations are no longer easy (e.g., large amounts of limited-control, renewable generation), which necessitates new approaches. Recent work on deterministic Discrepancy Bounded Local Search (DBLS) has shown it to be quite effective in addressing this question. DBLS encapsulates the complexity of power flow modeling in a black box that may be queried for information about the quality of proposed expansions. In this paper, we propose a randomization strategy that builds on DBLS and dramatically increases the computational efficiency of the algorithm.
Anderson localization and ergodicity on random regular graphs
NASA Astrophysics Data System (ADS)
Tikhonov, K. S.; Mirlin, A. D.; Skvortsov, M. A.
2016-12-01
A numerical study of the Anderson transition on random regular graphs (RRGs) with diagonal disorder is performed. The problem can be described as a tight-binding model on a lattice with N sites that is locally a tree with constant connectivity. In a certain sense, the RRG ensemble can be seen as an infinite-dimensional (d → ∞) cousin of the Anderson model in d dimensions. We focus on the delocalized side of the transition and stress the importance of finite-size effects. We show that the data can be interpreted in terms of the finite-size crossover from a small (N ≪ Nc) to a large (N ≫ Nc) system, where Nc is the correlation volume diverging exponentially at the transition. A distinct feature of this crossover is a nonmonotonicity of the spectral and wave-function statistics, which is related to properties of the critical phase in the studied model and renders the finite-size analysis highly nontrivial. Our results support an analytical prediction that states in the delocalized phase (and at N ≫ Nc) are ergodic in the sense that their inverse participation ratio scales as 1/N.
NASA Astrophysics Data System (ADS)
Welikanna, D. R.; Tamura, M.; Susaki, J.
2014-09-01
A Markov Random Field (MRF) model accounting for the classification uncertainty using multisource satellite images and an adaptive fuzzy class mean vector is proposed in this study. The work also highlights the initialization of the class values for an MRF-based classification of synthetic aperture radar (SAR) images using optical data. The model uses the contextual information from the optical image pixels and the SAR pixel intensity, with corresponding fuzzy grades of membership, in the classification mechanism. Subpixel class fractions estimated using Singular Value Decomposition (SVD) from the optical image initialize the class arrangement for the MRF process. Pair-site interactions of the pixels are used to model the prior energy from the initial class arrangement. The fuzzy class mean vector from the SAR intensity pixels is calculated using Fuzzy C-means (FCM) partitioning. The conditional probability for each class was determined by a Gamma distribution for the SAR image. Simulated annealing (SA) to minimize the global energy was executed using a combined logarithmic and power-law annealing schedule. The proposed technique was tested using an Advanced Land Observation Satellite (ALOS) phased array type L-band SAR (PALSAR) and Advanced Visible and Near-Infrared Radiometer-2 (AVNIR-2) data set over a disaster-affected urban region in Japan. Results of the proposed method and the conventional MRF were evaluated against neural network (NN) and support vector machine (SVM) based classifications. The results suggest that integrating an adaptive fuzzy class mean vector with multisource data is promising for imprecise class discrimination using an MRF-based classification.
Chen, J.; Hoversten, G.M.
2011-09-15
Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann's equations and Archie's law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is captured by lithotypes modeled as Markov random fields. We apply the developed model to a synthetic case, which simulates a CO₂ monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- to S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.
NASA Astrophysics Data System (ADS)
Volchenkov, Dima; Dawin, Jean René
A system for using dice to compose music randomly is known as the musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. In contrast to human languages, entropy dominates over redundancy in the musical dice games based on compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes, for the musical dice games generated over Bach's compositions). First-passage times to notes can be used to resolve tonality and characterize a composer.
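Encoding a piece into a transition matrix and measuring its entropy rate can be sketched as follows (illustrative only; the study used MIDI data, whereas the toy "composition" below is a repeating four-note scale):

```python
import numpy as np

def transition_matrix(notes, n_states):
    """Empirical first-order Markov transition matrix of a note sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(notes, notes[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                      # leave never-visited rows at zero
    return counts / rows

def entropy_rate(P, pi):
    """Entropy rate (bits per note) of a Markov chain with stationary pi."""
    with np.errstate(divide='ignore', invalid='ignore'):
        logs = np.where(P > 0, np.log2(P), 0.0)
    return -np.sum(pi[:, None] * P * logs)

scale = [0, 1, 2, 3] * 50                      # fully predictable "composition"
P = transition_matrix(scale, 4)
```

A deterministic scale has entropy rate zero; a sequence of uniformly random notes would approach log2 of the alphabet size, and the classical corpora described above sit between these extremes, closer to the random end.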
Abstraction Augmented Markov Models.
Caragea, Cornelia; Silvescu, Adrian; Caragea, Doina; Honavar, Vasant
2010-12-13
High-accuracy sequence classification often requires the use of higher-order Markov models (MMs). However, the number of MM parameters increases exponentially with the range of direct dependencies between sequence elements, thereby increasing the risk of overfitting when the data set is limited in size. We present abstraction augmented Markov models (AAMMs) that effectively reduce the number of numeric parameters of kth-order MMs by successively grouping strings of length k (i.e., k-grams) into abstraction hierarchies. We evaluate AAMMs on three protein subcellular localization prediction tasks. The results of our experiments show that abstraction makes it possible to construct predictive models that use a significantly smaller number of features (by one to three orders of magnitude) as compared to MMs. AAMMs are competitive with and, in some cases, significantly outperform MMs. Moreover, the results show that AAMMs often perform significantly better than variable-order Markov models, such as decomposed context tree weighting, prediction by partial match, and probabilistic suffix trees.
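The core idea, collapsing the |Σ|^k possible k-gram contexts into a smaller set of abstraction classes before estimating transition probabilities, can be sketched as below (illustrative names; the paper learns the abstraction hierarchy from data, whereas here it is supplied as a given mapping):

```python
from collections import defaultdict
from itertools import product

def kgram_contexts(seq, k):
    """Yield (k-gram context, next symbol) pairs from a sequence."""
    for i in range(len(seq) - k):
        yield tuple(seq[i:i + k]), seq[i + k]

def abstracted_mm(seq, k, abstraction):
    """kth-order Markov model with contexts grouped by an abstraction map.

    abstraction: dict mapping each k-gram to an abstract class; grouping
    k-grams shrinks the parameter table from |alphabet|**k rows down to
    (number of classes) rows.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for ctx, nxt in kgram_contexts(seq, k):
        counts[abstraction[ctx]][nxt] += 1
    model = {}
    for cls, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        model[cls] = {sym: c / total for sym, c in nxt_counts.items()}
    return model

# a 2nd-order model over {a, b} normally needs |alphabet|**k = 4 context rows
all_contexts = list(product("ab", repeat=2))
```

Mapping all four contexts to one class gives a single shared row of next-symbol probabilities, the extreme case of the parameter reduction the abstract describes.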
Local dependence in random graph models: characterization, properties and statistical inference.
Schweinberger, Michael; Handcock, Mark S
2015-06-01
Dependent phenomena, such as relational, spatial and temporal phenomena, tend to be characterized by local dependence in the sense that units which are close in a well-defined sense are dependent. In contrast with spatial and temporal phenomena, though, relational phenomena tend to lack a natural neighbourhood structure in the sense that it is unknown which units are close and thus dependent. Owing to the challenge of characterizing local dependence and constructing random graph models with local dependence, many conventional exponential family random graph models induce strong dependence and are not amenable to statistical inference. We take first steps to characterize local dependence in random graph models, inspired by the notion of finite neighbourhoods in spatial statistics and M-dependence in time series, and we show that local dependence endows random graph models with desirable properties which make them amenable to statistical inference. We show that random graph models with local dependence satisfy a natural domain consistency condition which every model should satisfy, but conventional exponential family random graph models do not satisfy. In addition, we establish a central limit theorem for random graph models with local dependence, which suggests that random graph models with local dependence are amenable to statistical inference. We discuss how random graph models with local dependence can be constructed by exploiting either observed or unobserved neighbourhood structure. In the absence of observed neighbourhood structure, we take a Bayesian view and express the uncertainty about the neighbourhood structure by specifying a prior on a set of suitable neighbourhood structures. We present simulation results and applications to two real world networks with 'ground truth'.
NASA Astrophysics Data System (ADS)
Takahashi, Tsutomu; Sato, Haruo; Nishimura, Takeshi
2008-05-01
Direct waves of microearthquakes in the high-frequency range (>1 Hz) strongly reflect the random inhomogeneities near their ray paths. This study conducts numerical simulations of envelope broadening of an impulsively radiated wavelet assuming a spatially non-uniform distribution of random inhomogeneities. We assume multiple von Kármán-type power spectral density functions (PSDFs) for the random inhomogeneity to clarify how the non-uniformly distributed random media affect the frequency dependence of envelope broadening. We employ the stochastic ray path method based on the Markov approximation for the mutual coherence function. This method is appropriate for simulating multiple forward scattering during wave propagation. We mainly examine the travel distance and frequency dependence of the peak delay time in relation to the parameters characterizing the PSDFs. The peak delay time, which is defined as the time lag from the direct-wave onset to the maximum amplitude arrival of its envelope, is the best parameter reflecting the accumulated scattering effect in random media and is quite insensitive to the intrinsic attenuation. According to the numerical simulations in various non-uniform random media, we find some remarkable features in travel distance and frequency dependence, which cannot be found in uniform random media. For example, the frequency dependence in uniform random media is uniquely determined by the spectral gradient of the PSDF for arbitrary travel distance; however, that in non-uniform media gradually changes as travel distance increases if the waves have experienced a change of spectral gradient in the PSDF. Considering the results of our simulation, we propose a simple recursive formula to calculate the peak delay time in non-uniform random media. This recursive formula can predict the simulation results appropriately and relate the peak delay times to two parameters quantifying the von Kármán-type PSDF at short wavelengths. It will become a mathematical base for
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; Lin, W. P.; Jing, Y. P.
2014-10-10
Simulating the evolution of the local universe is important for studying galaxies and the intergalactic medium in a way free of cosmic variance. Here we present a method to reconstruct the initial linear density field from an input nonlinear density field, employing the Hamiltonian Markov Chain Monte Carlo (HMC) algorithm combined with particle-mesh (PM) dynamics. The HMC+PM method is applied to cosmological simulations, and the reconstructed linear density fields are then evolved to the present day with N-body simulations. These constrained simulations accurately reproduce both the amplitudes and phases of the input simulations at various z. Using a PM model with a grid cell size of 0.75 h⁻¹ Mpc and 40 time steps in the HMC can recover more than half of the phase information down to a scale k ∼ 0.85 h Mpc⁻¹ at high z and to k ∼ 3.4 h Mpc⁻¹ at z = 0, which represents a significant improvement over similar reconstruction models in the literature, and indicates that our model can reconstruct the formation histories of cosmic structures over a large dynamical range. Adopting PM models with higher spatial and temporal resolutions yields even better reconstructions, suggesting that our method is limited more by the availability of computer resources than by principle. Dynamic models of structure evolution adopted in many earlier investigations can induce non-Gaussianity in the reconstructed linear density field, which in turn can cause large systematic deviations in the predicted halo mass function. Such deviations are greatly reduced or absent in our reconstruction.
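The engine of the reconstruction is Hamiltonian Monte Carlo, which proposes distant moves by leapfrog-integrating Hamiltonian dynamics in an augmented (position, momentum) space and then accepting or rejecting on the energy error. A minimal sketch on a toy 2-D Gaussian target (the PM forward model and cosmological density field are far beyond a snippet; step size and trajectory length are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def U(x, cov_inv):                        # potential = negative log density
    return 0.5 * x @ cov_inv @ x

def grad_U(x, cov_inv):
    return cov_inv @ x

def hmc_step(x, cov_inv, step=0.15, n_leap=20):
    """One HMC update: draw momentum, leapfrog-integrate, accept/reject."""
    p = rng.standard_normal(x.size)
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_U(x_new, cov_inv)     # half momentum kick
    for i in range(n_leap):
        x_new += step * p_new                        # full position drift
        if i < n_leap - 1:
            p_new -= step * grad_U(x_new, cov_inv)   # full momentum kick
    p_new -= 0.5 * step * grad_U(x_new, cov_inv)     # final half kick
    h_old = U(x, cov_inv) + 0.5 * p @ p
    h_new = U(x_new, cov_inv) + 0.5 * p_new @ p_new
    return x_new if rng.random() < np.exp(min(0.0, h_old - h_new)) else x

cov = np.array([[1.0, 0.6], [0.6, 1.0]])
cov_inv = np.linalg.inv(cov)
x = np.zeros(2)
chain = []
for _ in range(3000):
    x = hmc_step(x, cov_inv)
    chain.append(x)
chain = np.array(chain[500:])             # drop burn-in
```

In the paper's setting the gradient of the log posterior is supplied by the PM dynamics rather than a closed form, but the update structure is the same.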
Bloomquist, Erica V; Ajkay, Nicolas; Patil, Sujata; Collett, Abigail E; Frazier, Thomas G; Barrio, Andrea V
2016-01-01
Radioactive seed localization (RSL) has emerged as an alternative to wire localization (WL) in patients with nonpalpable breast cancer. Few studies have prospectively evaluated patient satisfaction and outcomes with RSL. We report the results of a randomized trial comparing RSL to WL in our community hospital. We prospectively enrolled 135 patients with nonpalpable breast cancer between 2011 and 2014. Patients were randomized to RSL or WL. Patients rated the pain and the convenience of the localization on a 5-point Likert scale. Characteristics and outcomes were compared between groups. Of 135 patients enrolled, 10 were excluded (benign pathology, palpable cancer, mastectomy, and previous ipsilateral cancer) resulting in 125 patients. Seventy patients (56%) were randomized to RSL and 55 (44%) to WL. Fewer patients in the RSL group reported moderate to severe pain during the localization procedure compared to the WL group (12% versus 26%, respectively, p = 0.058). The overall convenience of the procedure was rated as very good to excellent in 85% of RSL patients compared to 44% of WL patients (p < 0.0001). There was no difference between the volume of the main specimen (p = 0.67), volume of the first surgery (p = 0.67), or rate of positive margins (p = 0.53) between groups. RSL resulted in less severe pain and higher convenience compared to WL, with comparable excision volume and positive margin rates. High patient satisfaction with RSL provides another incentive for surgeons to strongly consider RSL as an alternative to WL.
Localization in band random matrix models with and without increasing diagonal elements.
Wang, Wen-ge
2002-06-01
It is shown that localization of eigenfunctions in the Wigner band random matrix model with increasing diagonal elements can be related to localization in a band random matrix model with random diagonal elements. The relation is obtained by making use of a result of a generalization of Brillouin-Wigner perturbation theory, which shows that reduced Hamiltonian matrices with relatively small dimensions can be introduced for nonperturbative parts of eigenfunctions, and by employing intermediate basis states, which can improve the method of the reduced Hamiltonian matrix. The latter model deviates from the standard band random matrix model mainly in two aspects: (i) the root mean square of diagonal elements is larger than that of off-diagonal elements within the band, and (ii) statistical distributions of the matrix elements are close to the Lévy distribution in their central parts, except in the high top regions.
Toppin, Patrick J; Reid, Marvin; Plummer, Joseph M; Roberts, Patrick O; Harding-Goldson, Hyacinth; McFarlane, Michael E
2017-01-01
Background Conscious sedation is regularly used in ambulatory surgery to improve patient outcomes, in particular patient satisfaction. Reports suggest that the addition of conscious sedation to local anesthesia for inguinal hernioplasty is safe and effective in improving patient satisfaction. No previous randomized controlled trial has assessed the benefit of conscious sedation in this regard. Objective To determine whether the addition of conscious sedation to local anesthesia improves patient satisfaction with inguinal hernioplasty. Methods This trial is designed as a single-center, randomized, placebo-controlled, blinded trial of 148 patients. Adult patients diagnosed with a reducible, unilateral inguinal hernia eligible for hernioplasty using local anesthesia will be recruited. The intervention will be the use of intravenous midazolam for conscious sedation. Normal saline will be used as placebo in the control group. The primary outcome will be patient satisfaction, measured using the validated Iowa Satisfaction with Anesthesia Scale. Secondary outcomes will include intra- and postoperative pain, operative time, volumes of sedative agent and local anesthetic used, time to discharge, early and late complications, and postoperative functional status. Results To date, 171 patients have been recruited. Surgery has been performed on 149 patients, meeting the sample size requirements. Follow-up assessments are still ongoing. Trial completion is expected in August 2017. Conclusions This randomized controlled trial is the first to assess the effectiveness of conscious sedation in improving patient satisfaction with inguinal hernioplasty using local anesthesia. If the results demonstrate improved patient satisfaction with conscious sedation, this would support routine incorporation of conscious sedation in local inguinal hernioplasty and potentially influence national and international hernia surgery guidelines. Trial registration Clinicaltrials.gov NCT02444260; https
Markov Tracking for Agent Coordination
NASA Technical Reports Server (NTRS)
Washington, Richard; Lau, Sonie (Technical Monitor)
1998-01-01
Partially observable Markov decision processes (POMDPs) are an attractive representation of agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that restricts the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this amenable to coordination with complex agents.
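The key structural point, that coordination does not influence the tracked agent and so actions can be chosen locally from the current belief, can be sketched as a plain belief-state filter followed by a myopic expected-reward choice. The 3-state agent, sensor model, and reward matrix below are invented for illustration:

```python
import numpy as np

# Toy tracked agent: 3 hidden states, observed through a noisy sensor.
T = np.array([[0.8, 0.2, 0.0],    # T[s, s'] = P(next state s' | state s)
              [0.0, 0.7, 0.3],
              [0.1, 0.0, 0.9]])
O = np.array([[0.9, 0.05, 0.05],  # O[s, o] = P(observation o | state s)
              [0.1, 0.8,  0.1 ],
              [0.0, 0.1,  0.9 ]])
R = np.eye(3)                     # R[a, s]: reward for action a in agent state s

def belief_update(b, obs):
    """Predict with the agent's own transition model, then condition on obs."""
    b_pred = b @ T
    b_post = b_pred * O[:, obs]
    return b_post / b_post.sum()

def coordinating_action(b):
    """Since our action never alters the agent's dynamics, maximizing the
    immediate expected reward is a purely local computation on the belief."""
    return int(np.argmax(R @ b))

b = np.full(3, 1 / 3)
actions = []
for o in [0, 0, 1, 2, 2]:         # an example observation sequence
    b = belief_update(b, o)
    actions.append(coordinating_action(b))
```

The "windowing" restriction from the abstract would correspond to zeroing belief mass outside a fixed-size window of states before the argmax.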
Chen, Yi; Jakeman, John; Gittelson, Claude; Xiu, Dongbin
2015-01-08
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
Many-body localization in Ising models with random long-range interactions
NASA Astrophysics Data System (ADS)
Li, Haoyuan; Wang, Jia; Liu, Xia-Ji; Hu, Hui
2016-12-01
We theoretically investigate the many-body localization phase transition in a one-dimensional Ising spin chain with random long-range spin-spin interactions, V_ij ∝ |i−j|^(−α), where the exponent of the interaction range α can be tuned from zero to infinitely large. By using exact diagonalization, we calculate the half-chain entanglement entropy and the energy spectral statistics and use them to characterize the phase transition towards the many-body localization phase at infinite temperature and at sufficiently large disorder strength. We perform finite-size scaling to extract the critical disorder strength and the critical exponent of the divergent localization length. With increasing α, the critical exponent experiences a sharp increase at about α_c ≃ 1.2 and then gradually decreases to a value found earlier in a disordered short-ranged interacting spin chain. For α < α_c, we find that the system is mostly localized and the increase in the disorder strength may drive a transition between two many-body localized phases. In contrast, for α > α_c, the transition is from a thermalized phase to the many-body localization phase. Our predictions could be experimentally tested with an ion-trap quantum emulator with programmable random long-range interactions, or with randomly distributed Rydberg atoms or polar molecules in lattices.
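At small system sizes the two diagnostics named in the abstract, the half-chain entanglement entropy and the spectral gap-ratio statistic, are straightforward to compute by exact diagonalization. The sketch below uses an assumed concrete model (random long-range σᶻσᶻ couplings plus a transverse field and random fields, added to make the problem nontrivial); the paper's exact Hamiltonian and parameter values may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                             # spins; Hilbert-space dimension 2**N = 256
alpha, W, Gamma = 1.5, 5.0, 1.0   # range exponent, disorder, transverse field

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def site_op(op, i):
    """Embed a single-site operator at site i via Kronecker products."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2))
    return out

H = np.zeros((2**N, 2**N))
for i in range(N):
    H += W * rng.uniform(-1, 1) * site_op(sz, i)       # random fields
    H += Gamma * site_op(sx, i)                        # transverse field
    for j in range(i + 1, N):
        Jij = rng.uniform(-1, 1) / (j - i)**alpha      # random long-range V_ij
        H += Jij * site_op(sz, i) @ site_op(sz, j)

E, V = np.linalg.eigh(H)

# Half-chain entanglement entropy of a mid-spectrum eigenstate.
psi = V[:, 2**(N - 1)].reshape(2**(N // 2), 2**(N // 2))
p = np.linalg.svd(psi, compute_uv=False)**2
p = p[p > 1e-12]
S_half = float(-(p * np.log(p)).sum())

# Mean gap ratio <r>: ~0.39 for Poisson (localized), ~0.53 for GOE (thermal).
g = np.diff(E)
mask = np.maximum(g[:-1], g[1:]) > 1e-12
r_mean = float((np.minimum(g[:-1], g[1:])[mask]
                / np.maximum(g[:-1], g[1:])[mask]).mean())
```

The paper's finite-size scaling amounts to repeating this over disorder realizations and sizes and collapsing S_half and ⟨r⟩ across W.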
Handling target obscuration through Markov chain observations
NASA Astrophysics Data System (ADS)
Kouritzin, Michael A.; Wu, Biao
2008-04-01
Target Obscuration, including foliage or building obscuration of ground targets and landscape or horizon obscuration of airborne targets, plagues many real world filtering problems. In particular, ground moving target identification Doppler radar, mounted on a surveillance aircraft or unattended airborne vehicle, is used to detect motion consistent with targets of interest. However, these targets try to obscure themselves (at least partially) by, for example, traveling along the edge of a forest or around buildings. This has the effect of creating random blockages in the Doppler radar image that move dynamically and somewhat randomly through this image. Herein, we address tracking problems with target obscuration by building memory into the observations, eschewing the usual corrupted, distorted partial measurement assumptions of filtering in favor of dynamic Markov chain assumptions. In particular, we assume the observations are a Markov chain whose transition probabilities depend upon the signal. The state of the observation Markov chain attempts to depict the current obscuration and the Markov chain dynamics are used to handle the evolution of the partially obscured radar image. Modifications of the classical filtering equations that allow observation memory (in the form of a Markov chain) are given. We use particle filters to estimate the position of the moving targets. Moreover, positive proof-of-concept simulations are included.
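The modification described, observations forming a Markov chain whose transition probabilities depend on the signal, slots directly into a particle filter: particle weights come from the observation-chain transition probability rather than a memoryless likelihood. A toy 1-D sketch with an invented "forest" obscuration region (all names and numbers are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
GRID = 20
FOREST = set(range(8, 14))        # region where the target can hide

def obs_trans(x):
    """Transition matrix of the observation chain (0=visible, 1=obscured);
    obscuration is stickier when the target is inside the forest."""
    if x in FOREST:
        return np.array([[0.4, 0.6], [0.1, 0.9]])
    return np.array([[0.9, 0.1], [0.7, 0.3]])

def simulate(steps=60):
    x, o, xs, os_ = 0, 0, [], []
    for _ in range(steps):
        x = min(GRID - 1, max(0, x + rng.choice([-1, 0, 1])))
        o = int(rng.choice(2, p=obs_trans(x)[o]))
        xs.append(x); os_.append(o)
    return xs, os_

def particle_filter(obs, n_part=500):
    parts = rng.integers(0, GRID, n_part)
    o_prev, estimates = 0, []
    for o in obs:
        parts = np.clip(parts + rng.choice([-1, 0, 1], n_part), 0, GRID - 1)
        # Weight by the signal-dependent observation-chain transition:
        # this is where observation memory enters, replacing p(o | x).
        w = np.array([obs_trans(x)[o_prev, o] for x in parts])
        w /= w.sum()
        parts = parts[rng.choice(n_part, n_part, p=w)]   # resample
        estimates.append(float(parts.mean()))
        o_prev = o
    return estimates

true_x, obs = simulate()
est = particle_filter(obs)
```

Note the weight depends on the previous observation as well as the current one; a memoryless filter would use only `p(o | x)`.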
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
Eigenvalue Outliers of Non-Hermitian Random Matrices with a Local Tree Structure
NASA Astrophysics Data System (ADS)
Neri, Izaak; Metz, Fernando Lucas
2016-11-01
Spectra of sparse non-Hermitian random matrices determine the dynamics of complex processes on graphs. Eigenvalue outliers in the spectrum are of particular interest, since they determine the stationary state and the stability of dynamical processes. We present a general and exact theory for the eigenvalue outliers of random matrices with a local tree structure. For adjacency and Laplacian matrices of oriented random graphs, we derive analytical expressions for the eigenvalue outliers, the first moments of the distribution of eigenvector elements associated with an outlier, the support of the spectral density, and the spectral gap. We show that these spectral observables obey universal expressions, which hold for a broad class of oriented random matrices.
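Numerically, the separation between the outlier and the bulk is easy to see for a directed Erdős–Rényi adjacency matrix (a simple stand-in for the locally tree-like oriented graphs in the paper): the bulk fills a disk of radius ≈ √c in the complex plane while a single real (Perron) eigenvalue sits near the mean degree c. Size and degree below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, c = 1000, 4.0                       # nodes, mean out-degree

# Adjacency matrix of a sparse directed random graph (locally tree-like).
A = (rng.random((N, N)) < c / N).astype(float)
np.fill_diagonal(A, 0.0)

ev = np.linalg.eigvals(A)
k = int(np.argmax(ev.real))
outlier = ev[k]                        # Perron eigenvalue: real, near c
bulk = np.delete(ev, k)
bulk_radius = float(np.abs(bulk).max())   # bulk edge near sqrt(c)
```

The spectral gap `outlier.real - bulk_radius` is what controls relaxation of dynamics on the graph, as the abstract notes.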
Hidden Markov model using Dirichlet process for de-identification.
Chen, Tao; Cullen, Richard M; Godwin, Marshall
2015-12-01
For the 2014 i2b2/UTHealth de-identification challenge, we introduced a new non-parametric Bayesian hidden Markov model using a Dirichlet process (HMM-DP). The model intends to reduce task-specific feature engineering and to generalize well to new data. In the challenge we developed a variational method to learn the model and an efficient approximation algorithm for prediction. To accommodate out-of-vocabulary words, we designed a number of feature functions to model such words. The results show the model is capable of understanding local context cues to make correct predictions without manual feature engineering and performs as accurately as state-of-the-art conditional random field models in a number of categories. To incorporate long-range and cross-document context cues, we developed a skip-chain conditional random field model to align the results produced by HMM-DP, which further improved the performance.
Surface-plasmon mode on a random rough metal surface: Enhanced backscattering and localization
NASA Astrophysics Data System (ADS)
Ogura, H.; Wang, Z. L.
1996-04-01
The scattering of light by a silver film with a random rough surface and the excitation of surface-plasmon modes at the metal surface are studied by means of the stochastic functional approach, assuming that the random surface is a homogeneous Gaussian random field. The stochastic wave fields are represented in terms of the Wiener-Hermite orthogonal functionals, and the approximate solutions are obtained for the Wiener kernels. For the attenuated total reflection configuration considered in the paper, the angular distributions of incoherent scattering into both crystal and air are numerically calculated by using first- and second-order Wiener kernels for various combinations of the parameters. In the angular distributions of incoherent scattering into crystal, strong peaks can be observed corresponding to the excitation of forward- and backward-traveling plasmon modes, which are mainly described by the first-order Wiener kernel, and an enhanced scattering peak appears in the backward direction. In the angular distributions of incoherent scattering into air, an enhanced scattering peak also appears in a certain direction, related to the incident angle on the crystal side. The random wave fields at the resonant scattering on the surface of a random rough grating are also numerically calculated from the higher Wiener kernels with an iterative procedure. Localized modes can be clearly observed in the spatial distribution of the random wave fields. The enhanced scattering comes from the second-order Wiener kernel that describes the "double-scattering" processes of the "dressed" plasmon modes, and is due to the interference of the two double-scattering processes in the reciprocal directions, where the strongly excited plasmon modes take part in the intermediate scattering processes, while the wave localization is a result of "multiple" scattering of strongly excited dressed plasmon waves traveling in the "random media" created by the surface roughness.
Tarasov, Yu. V.; Shostenko, L. D.
2015-05-15
A unified theory for the conductance of an infinitely long multimode quantum wire whose finite segment has randomly rough lateral boundaries is developed. It enables one to rigorously take account of all feasible mechanisms of wave scattering, both related to boundary roughness and to contacts between the wire rough section and the perfect leads within the same technical frameworks. The rough part of the conducting wire is shown to act as a mode-specific randomly modulated effective potential barrier whose height is governed essentially by the asperity slope. The mean height of the barrier, which is proportional to the average slope squared, specifies the number of conducting channels. Under relatively small asperity amplitude this number can take on arbitrary small, up to zero, values if the asperities are sufficiently sharp. The consecutive channel cut-off that arises when the asperity sharpness increases can be regarded as a kind of localization, which is not related to the disorder per se but rather is of entropic or (equivalently) geometric origin. The fluctuating part of the effective barrier results in two fundamentally different types of guided wave scattering, viz., inter- and intramode scattering. The intermode scattering is shown to be for the most part very strong except in the cases of (a) extremely smooth asperities, (b) excessively small length of the corrugated segment, and (c) the asperities sharp enough for only one conducting channel to remain in the wire. Under strong intermode scattering, a new set of conducting channels develops in the corrugated waveguide, which have the form of asymptotically decoupled extended modes subject to individual solely intramode random potentials. In view of this fact, two transport regimes only are realizable in randomly corrugated multimode waveguides, specifically, the ballistic and the localized regime, the latter characteristic of one-dimensional random systems. Two kinds of localization are thus shown to
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2016-09-01
For random Lévy matrices of size N × N, where matrix elements are drawn with some heavy-tailed distribution P(H_ij) ∝ N^{-1} |H_ij|^{-1-μ} with 0 < μ < 2 (infinite variance), there exists an extensive number of finite eigenvalues E = O(1), while the maximal eigenvalue grows as E_max ∼ N^{1/μ}. Here we study the localization properties of the corresponding eigenvectors via some strong disorder perturbative expansion that remains consistent within the localized phase and that yields their inverse participation ratios (IPR) Y_q as a function of the continuous parameter q > 0. In the region 0 < μ < 1, we find that all eigenvectors are localized but display some multifractality: the IPR are finite above some threshold q > q_c but diverge in the region 0 < q < q_c near the origin. In the region 1 < μ < 2, only the sub-extensive fraction N^{3/(2+μ)} of the biggest eigenvalues corresponding to the region |E| ≥ N^{(μ-1)/(μ(2+μ))} remains localized, while the extensive number of other states of smaller energy are delocalized. For the extensive number of finite eigenvalues E = O(1), the localization/delocalization transition thus takes place at the critical value μ_c = 1 corresponding to Cauchy matrices: the IPR Y_q of the corresponding critical eigenstates follow the strong-multifractality spectrum characterized by the generalized fractal dimensions D_crit(q) = (1 − 2q)/(1 − q) for 0 ≤ q ≤ 1/2, which has been found previously in various other localization problems in spaces of effective infinite dimensionality.
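The IPR Y_q = Σ_i |ψ_i|^(2q) can be checked directly on a sampled Lévy matrix: heavy-tailed entries are drawn by inverse-CDF (Pareto) sampling with random signs, and the eigenvector at the spectral edge is strongly localized (IPR of order one rather than of order 1/N). Matrix size, scaling convention, and μ below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, mu = 400, 0.8                 # size and tail index, 0 < mu < 2

def heavy_tailed(shape):
    """Symmetric samples with tail ~ |x|^(-1-mu), via inverse-CDF (Pareto)."""
    signs = np.sign(rng.random(shape) - 0.5)
    return signs * rng.random(shape) ** (-1.0 / mu)

X = heavy_tailed((N, N))
H = np.triu(X, 1)
H = H + H.T + np.diag(heavy_tailed(N))
H *= N ** (-1.0 / mu)            # scale so the bulk of the spectrum is O(1)

E, V = np.linalg.eigh(H)

def ipr(v, q=2.0):
    """Inverse participation ratio Y_q = sum_i |psi_i|^(2q)."""
    return float(np.sum(np.abs(v) ** (2 * q)))

edge_ipr = ipr(V[:, np.argmax(np.abs(E))])   # extreme eigenvalue: localized
bulk_ipr = ipr(V[:, np.argmin(np.abs(E))])   # state near E = 0; for mu < 1
                                             # the abstract predicts this is
                                             # localized as well
```

For a normalized vector, Y_2 is bounded between 1/N (fully delocalized) and 1 (one site), which is the scale on which localization is read off.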
Local search methods based on variable focusing for random K -satisfiability
NASA Astrophysics Data System (ADS)
Lemoy, Rémi; Alava, Mikko; Aurell, Erik
2015-01-01
We introduce variable focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly or by introducing a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by picking preferably variables in several UNSAT clauses. Consequences for algorithmic design are discussed.
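The algorithmic change relative to clause-focused search is small: instead of picking a random unsatisfied clause and then a variable inside it, pick directly among the variables occurring in unsatisfied clauses (counting occurrences with multiplicity reproduces the bias toward variables in several UNSAT clauses). A pure random-walk sketch, without the Metropolis acceptance rule of V-FMS; instance sizes are illustrative:

```python
import random

random.seed(5)

def random_3sat(n_vars, n_clauses):
    """Clauses of 3 distinct variables, each negated with probability 1/2.
    Literal v means x_v is true; -v means x_v is false (v in 1..n_vars)."""
    return [tuple(v if random.random() < 0.5 else -v
                  for v in random.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def unsat_clauses(clauses, assign):
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def variable_focused_search(clauses, n_vars, max_flips=20000):
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for flips in range(max_flips):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign, flips
        # All variable occurrences in UNSAT clauses, with multiplicity:
        # variables appearing in several unsatisfied clauses are favoured.
        candidates = [abs(lit) for c in unsat for lit in c]
        assign[random.choice(candidates)] ^= True
    return None, max_flips

n = 30
clauses = random_3sat(n, int(2.5 * n))   # ratio 2.5, below the SAT threshold ~4.27
assign, flips = variable_focused_search(clauses, n)
```

Sampling `candidates` uniformly *without* multiplicity would give the unbiased variant the abstract also considers.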
Absence of localization in a model with correlation measure as a random lattice
NASA Astrophysics Data System (ADS)
Kroon, Lars; Riklund, Rolf
2004-03-01
A coherent picture of localization in one-dimensional aperiodically ordered systems is still missing. We show the presence of purely singular continuous spectrum for a discrete system whose modulation sequence has a correlation measure which is absolutely continuous, such as for a random sequence. The system showing these properties is modeled by the Rudin-Shapiro sequence, whose correlation measure even has a uniform density. The absence of localization is also supported by a numerical investigation of the dynamics of electronic wave packets showing weakly anomalous diffusion and an extremely slow algebraic decay of the temporal autocorrelation function.
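The Rudin-Shapiro sequence is easy to generate (a_n = (−1)^(number of "11" factors in the binary expansion of n)), and the property this abstract relies on, two-point correlations vanishing as for a random sequence, can be checked on a finite sample:

```python
def rudin_shapiro(n_terms):
    """a_n = (-1)**(number of '11' blocks in the binary expansion of n)."""
    seq = []
    for n in range(n_terms):
        b = bin(n)[2:]
        pairs = sum(b[i] == b[i + 1] == '1' for i in range(len(b) - 1))
        seq.append(1 if pairs % 2 == 0 else -1)
    return seq

def autocorrelation(seq, k):
    m = len(seq) - k
    return sum(seq[i] * seq[i + k] for i in range(m)) / m

a = rudin_shapiro(4096)
corrs = [abs(autocorrelation(a, k)) for k in range(1, 20)]  # all near zero
```

In a tight-binding model, `a` would supply the on-site modulation; the point of the abstract is that despite this noise-like correlation measure, the spectrum remains purely singular continuous rather than localized.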
Wavelet-based SAR images despeckling using joint hidden Markov model
NASA Astrophysics Data System (ADS)
Li, Qiaoliang; Wang, Guoyou; Liu, Jianguo; Chen, Shaobo
2007-11-01
In the past few years, wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the deficiency for taking account of intrascale correlations that exist among neighboring wavelet coefficients. In this paper, we propose to develop a joint hidden Markov model by fusing the wavelet Bayesian denoising technique with an image regularization procedure based on HMT and Markov random field (MRF). The Expectation Maximization algorithm is used to estimate hyperparameters and specify the mixture model. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. It is shown that the joint method outperforms the Lee filter and standard HMT techniques in terms of the integrative measure of the equivalent number of looks (ENL) and Pratt's figure of merit (FOM), especially when dealing with speckle noise in large variance.
NASA Astrophysics Data System (ADS)
Nobi, Ashadun; Maeng, Seong Eun; Ha, Gyeong Gyun; Lee, Jae Woo
2013-02-01
We analyzed cross-correlations between price fluctuations of global financial indices (20 daily stock indices over the world) and local indices (daily indices of 200 companies in the Korean stock market) by using random matrix theory (RMT). We compared eigenvalues and components of the largest and the second largest eigenvectors of the cross-correlation matrix before, during, and after the global financial crisis in 2008. We find that the majority of its eigenvalues fall within the RMT bounds [λ−, λ+], where λ− and λ+ are the lower and the upper bounds of the eigenvalues of random correlation matrices. The components of the eigenvectors for the largest positive eigenvalues indicate the identical financial market mode dominating the global and local indices. On the other hand, the components of the eigenvector corresponding to the second largest eigenvalue are positive and negative values alternatively. The components before the crisis change sign during the crisis, and those during the crisis change sign after the crisis. The largest inverse participation ratio (IPR) corresponding to the smallest eigenvector is higher after the crisis than during any other periods in the global and local indices. During the global financial crisis, the correlations among the global indices and among the local stock indices are perturbed significantly. However, the correlations between indices quickly recover the trends before the crisis.
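The RMT comparison in the abstract reduces to the Marchenko-Pastur bounds λ± = (1 ± √(N/T))² for the eigenvalues of a pure-noise correlation matrix; a common "market mode" pushes one eigenvalue far outside the bulk. Dimensions and the strength of the injected mode below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 50, 1000                          # assets, observations per asset

Q = T / N
lam_minus = (1 - np.sqrt(1 / Q)) ** 2    # RMT lower bound
lam_plus = (1 + np.sqrt(1 / Q)) ** 2     # RMT upper bound

# Pure noise: essentially all eigenvalues fall inside [lam_minus, lam_plus].
returns = rng.standard_normal((T, N))
eig_noise = np.linalg.eigvalsh(np.corrcoef(returns, rowvar=False))
inside = float(np.mean((eig_noise > lam_minus - 0.05)
                       & (eig_noise < lam_plus + 0.05)))

# Inject a common market mode: the largest eigenvalue escapes the RMT bulk,
# mimicking the "financial market mode" the abstract describes.
market = rng.standard_normal(T)
eig_market = np.linalg.eigvalsh(
    np.corrcoef(returns + 0.5 * market[:, None], rowvar=False))
```

Eigenvalues outside [λ−, λ+] are the ones carrying genuine cross-correlation structure rather than noise.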
NASA Astrophysics Data System (ADS)
Hatano, Naomichi; Feinberg, Joshua
2016-12-01
We study Chebyshev-polynomial expansion of the inverse localization length of Hermitian and non-Hermitian random chains as a function of energy. For Hermitian models, the expansion produces this energy-dependent function numerically in one run of the algorithm. This is in strong contrast to the standard transfer-matrix method, which produces the inverse localization length for a fixed energy in each run. For non-Hermitian models, as in the transfer-matrix method, our algorithm computes the inverse localization length for a fixed (complex) energy. We also derive a Chebyshev-polynomial expansion formula for the density of states of non-Hermitian models. As explained in detail, our algorithm for non-Hermitian models may be the only available efficient algorithm for finding the density of states of models with interactions.
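The Chebyshev machinery the authors build on is the kernel polynomial method: rescale H into [−1, 1], accumulate moments μ_m = Tr T_m(H̃)/N with stochastic trace vectors, damp with the Jackson kernel, and resum. A sketch for the density of states of a Hermitian 1-D Anderson chain (a stand-in model; the paper's inverse-localization-length expansion and non-Hermitian extension are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
N, W, M = 512, 2.0, 128          # sites, disorder strength, number of moments

# 1-D Anderson chain: hopping 1, random on-site energies in [-W/2, W/2].
H = np.zeros((N, N))
H[np.arange(N - 1), np.arange(1, N)] = 1.0
H[np.arange(1, N), np.arange(N - 1)] = 1.0
H[np.arange(N), np.arange(N)] = rng.uniform(-W / 2, W / 2, N)

Ht = H / (2.0 + W / 2 + 0.1)     # rescale the spectrum into (-1, 1)

# Stochastic Chebyshev moments mu_m = Tr T_m(Ht) / N via random-phase vectors.
mu = np.zeros(M)
n_vec = 20
for _ in range(n_vec):
    v = rng.choice([-1.0, 1.0], N)
    t_prev, t_cur = v, Ht @ v
    mu[0] += v @ v
    mu[1] += v @ t_cur
    for m in range(2, M):        # Chebyshev recursion T_m = 2x T_{m-1} - T_{m-2}
        t_prev, t_cur = t_cur, 2.0 * (Ht @ t_cur) - t_prev
        mu[m] += v @ t_cur
mu /= n_vec * N

# Jackson kernel: damps the Gibbs oscillations of the truncated expansion.
ms = np.arange(M)
mu *= ((M - ms + 1) * np.cos(np.pi * ms / (M + 1))
       + np.sin(np.pi * ms / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

# Resum the expansion on a grid of rescaled energies.
x = np.linspace(-0.99, 0.99, 201)
dos = mu[0] + 2.0 * sum(mu[m] * np.cos(m * np.arccos(x)) for m in range(1, M))
dos /= np.pi * np.sqrt(1.0 - x**2)
norm = float(np.sum(dos) * (x[1] - x[0]))   # should integrate to ~1
```

One recursion sweep yields the whole energy dependence at once, which is the "one run" advantage the abstract contrasts with transfer matrices.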
Li, Zhan-Chao; Lai, Yan-Hua; Chen, Li-Li; Chen, Chao; Xie, Yun; Dai, Zong; Zou, Xiao-Yong
2013-04-05
In the post-genome era, one of the most important and challenging tasks is to identify the subcellular localizations of protein complexes, and further elucidate their functions in human health, with applications to understanding disease mechanisms, diagnosis and therapy. Although various experimental approaches have been developed and employed to identify the subcellular localizations of protein complexes, the laboratory technologies fall far behind the rapid accumulation of protein complexes. Therefore, it is highly desirable to develop a computational method to rapidly and reliably identify the subcellular localizations of protein complexes. In this study, a novel method is proposed for predicting subcellular localizations of mammalian protein complexes based on graph theory with a random forest algorithm. Protein complexes are modeled as weighted graphs containing nodes and edges, where nodes represent proteins, edges represent protein-protein interactions and weights are descriptors of protein primary structures. Some topological structure features are proposed and adopted to characterize protein complexes based on graph theory. Random forest is employed to construct a model and predict subcellular localizations of protein complexes. Accuracies on a training set, estimated by a 10-fold cross-validation test, for predicting plasma membrane/membrane attached, cytoplasm and nucleus are 84.78%, 71.30%, and 82.00%, respectively. Accuracies on the independent test set are 81.31%, 69.95% and 81.00%, respectively. These high prediction accuracies demonstrate the state-of-the-art performance of the current method. It is anticipated that the proposed method may become a useful high-throughput tool and play a complementary role to the existing experimental techniques in identifying subcellular localizations of mammalian protein complexes. The Matlab source code and the dataset can be obtained freely on request from the authors.
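The weighted-graph encoding described above can be sketched in plain Python; the topological features below (degree, strength, clustering) are generic illustrations, not the paper's exact descriptors, which are built from protein primary structure:

```python
import itertools
import statistics

def complex_features(edges):
    """Topological features of a protein complex modeled as a weighted graph.

    edges: dict mapping (protein_a, protein_b) -> interaction weight.
    """
    adj = {}
    for (a, b), w in edges.items():
        adj.setdefault(a, {})[b] = w
        adj.setdefault(b, {})[a] = w
    degrees = [len(nbrs) for nbrs in adj.values()]
    strengths = [sum(nbrs.values()) for nbrs in adj.values()]
    # Clustering coefficient: fraction of neighbor pairs that are themselves linked.
    clustering = []
    for node, nbrs in adj.items():
        pairs = list(itertools.combinations(nbrs, 2))
        if pairs:
            linked = sum(1 for u, v in pairs if v in adj[u])
            clustering.append(linked / len(pairs))
        else:
            clustering.append(0.0)
    return {
        "n_proteins": len(adj),
        "mean_degree": statistics.mean(degrees),
        "mean_strength": statistics.mean(strengths),
        "mean_clustering": statistics.mean(clustering),
    }

# Hypothetical four-protein complex with illustrative interaction weights.
toy = {("A", "B"): 0.9, ("B", "C"): 0.4, ("A", "C"): 0.7, ("C", "D"): 0.2}
feats = complex_features(toy)
```

A feature vector of this kind, computed per complex, is what a random forest classifier would then be trained on.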
Cai, Xianfa; Wei, Jia; Wen, Guihua; Yu, Zhiwen
2014-03-01
Precise cancer classification is essential to the successful diagnosis and treatment of cancers. Although semisupervised dimensionality reduction approaches perform very well on clean datasets, the topology of the neighborhood constructed with most existing approaches is unstable in the presence of high-dimensional data with noise. In order to solve this problem, a novel local and global preserving semisupervised dimensionality reduction algorithm based on random subspaces, denoted RSLGSSDR, is proposed. The algorithm first designs multiple diverse graphs on different random subspaces of the dataset and then fuses these graphs into a mixture graph on which dimensionality reduction is performed. As the mixture graph is constructed in lower dimensionality, it eases the difficulty of graph construction for high-dimensional samples and, owing to the diversity of the random subspaces, can capture the complicated geometric distribution of the data. Experimental results on public gene expression datasets demonstrate that the proposed RSLGSSDR not only has recognition performance superior to competitive methods, but also is robust against a wide range of values of the input parameters.
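The random-subspace graph construction at the heart of the algorithm can be sketched as follows (a minimal illustration: k-NN graphs built on random feature subsets and averaged into a mixture graph; RSLGSSDR's actual graph design and the subsequent reduction step are more involved):

```python
import numpy as np

def fused_knn_graph(X, n_subspaces=10, subspace_dim=5, k=4, seed=0):
    """Average k-NN adjacency matrices built on random feature subspaces."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    fused = np.zeros((n, n))
    for _ in range(n_subspaces):
        feats = rng.choice(d, size=min(subspace_dim, d), replace=False)
        Xs = X[:, feats]
        # Pairwise squared distances in the subspace.
        sq = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(sq, np.inf)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, np.argsort(sq[i])[:k]] = 1.0
        fused += np.maximum(A, A.T)      # symmetrize each subspace graph
    return fused / n_subspaces           # mixture graph: edge weight = vote fraction

rng = np.random.default_rng(42)
X = rng.standard_normal((30, 50))        # 30 samples, 50 noisy features
W = fused_knn_graph(X)
```

Edges that survive across many random subspaces get weight close to 1, while edges induced by noise in a single subspace are down-weighted, which is the intended stabilizing effect.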
Localization of nonlinear shallow water waves over a randomly rough seabed
NASA Astrophysics Data System (ADS)
Mei, Chiang; Grataloup, Geraldine; Li, Yile
2003-11-01
Localization or spatial attenuation of sea waves can be caused by bottom friction and by the radiation of scattered waves. We describe a theory of shallow-water waves scattered by a long stretch of randomly rough seabed, where the root-mean-square height of the roughness is moderately small. Boussinesq equations are used as the starting point. By using two-scale expansions and Green's functions, multiple scattering by the rough bottom and the nonlinear exchange of energy between different frequencies are accounted for. For monochromatic incident waves, the evolution equations for all harmonics are shown to be nonlinearly coupled ordinary differential equations with damping, whose coefficients are related to the correlation functions of the roughness. Localization and the generation of harmonics are illustrated by numerical examples. For an incident soliton, the evolution equation is shown to be a KdV-Burgers equation with new diffusion and dispersion terms in integral form, implying memory. Numerical results on soliton deformation, fission and localization will be discussed. The long-fetch approximation will also be described. This theory differs from several existing ones in which random potentials are added to the evolution equations.
Localized buckling of a microtubule surrounded by randomly distributed cross linkers.
Jin, M Z; Ru, C Q
2013-07-01
Microtubules supported by surrounding cross linkers in eukaryotic cells can bear a much higher compressive force than free-standing microtubules. Different from some previous studies, which treated the surroundings as a continuum elastic foundation or elastic medium, the present paper develops a micromechanics numerical model to examine the role of randomly distributed discrete cross linkers in the buckling of compressed microtubules. First, the proposed numerical approach is validated by reproducing the uniform multiwave buckling mode predicted by the existing elastic-foundation model. For more realistic buckling of microtubules surrounded by randomly distributed cross linkers, the present numerical model predicts that the buckling mode is localized at one end in agreement with some known experimental observations. In particular, the critical force for localized buckling, predicted by the present model, is insensitive to microtubule length and can be about 1 order of magnitude lower than those given by the elastic-foundation model, which suggests that the elastic-foundation model may have overestimated the critical force for buckling of microtubules in vivo. In addition, unlike the elastic-foundation model, the present model can capture the effect of end conditions on the critical force and wavelength of localized buckling. Based on the known data of spacing and elastic constants of cross linkers available in literature, the critical force and wavelength of the localized buckling mode, predicted by the present model, are compared to some experimental data with reasonable agreement. Finally, two empirical formulas are proposed for the critical force and wavelength of the localized buckling of microtubules surrounded by cross linkers.
Dynamical Localization for Discrete and Continuous Random Schrödinger Operators
NASA Astrophysics Data System (ADS)
Germinet, F.; De Bièvre, S.
We show for a large class of random Schrödinger operators Hω, both discrete and continuous, that dynamical localization holds, i.e. that, with probability one, for a suitable energy interval I and for q a positive real, the moments of order q of the position operator, for states spectrally localized to I and evolved under the dynamics, remain uniformly bounded in time.
Critical Casimir force in the presence of random local adsorption preference.
Parisen Toldin, Francesco
2015-03-01
We study the critical Casimir force for a film geometry in the Ising universality class. We employ a homogeneous adsorption preference on one of the confining surfaces, while the opposing surface exhibits quenched random disorder, leading to a random local adsorption preference. Disorder is characterized by a parameter p, which measures, on average, the portion of the surface that prefers one component, so that p = 0 and p = 1 correspond to a homogeneous adsorption preference. By means of Monte Carlo simulations of an improved Hamiltonian and finite-size scaling analysis, we determine the critical Casimir force. We show that by tuning the disorder parameter p the system exhibits a crossover between an attractive and a repulsive force. At p = 1/2, disorder makes it possible to effectively realize Dirichlet boundary conditions, which are generically not accessible in classical fluids. Our results are relevant for experimental realizations of the critical Casimir force in binary liquid mixtures.
Chirp- and random-based coded ultrasonic excitation for localized blood-brain barrier opening.
Kamimura, H A S; Wang, S; Wu, S-Y; Karakatsani, M E; Acosta, C; Carneiro, A A O; Konofagou, E E
2015-10-07
Chirp- and random-based coded excitation methods have been proposed to reduce standing wave formation and improve focusing of transcranial ultrasound. However, no clear evidence has been shown to support the benefits of these ultrasonic excitation sequences in vivo. This study evaluates the chirp and periodic selection of random frequency (PSRF) coded-excitation methods for opening the blood-brain barrier (BBB) in mice. Three groups of mice (n = 15) were injected with polydisperse microbubbles and sonicated in the caudate putamen using the chirp/PSRF coded (bandwidth: 1.5–1.9 MHz, peak negative pressure: 0.52 MPa, duration: 30 s) or standard ultrasound (frequency: 1.5 MHz, pressure: 0.52 MPa, burst duration: 20 ms, duration: 5 min) sequences. T1-weighted contrast-enhanced MRI scans were performed to quantitatively analyze focused ultrasound induced BBB opening. The mean opening volumes evaluated from the MRI were 9.38 ± 5.71 mm3, 8.91 ± 3.91 mm3 and 35.47 ± 5.10 mm3 for the chirp, random and regular sonications, respectively. The mean cavitation levels were 55.40 ± 28.43 V.s, 63.87 ± 29.97 V.s and 356.52 ± 257.15 V.s for the chirp, random and regular sonications, respectively. The chirp and PSRF coded pulsing sequences improved the BBB opening localization by inducing lower cavitation levels and smaller opening volumes compared to the regular sonication technique. Larger bandwidths were associated with more focused targeting but were limited by the frequency response of the transducer, the skull attenuation and the microbubbles' optimal frequency range. The coded methods could therefore facilitate highly localized drug delivery as well as benefit other transcranial ultrasound techniques that use higher pressure levels and higher precision to induce the necessary bioeffects in a brain region while avoiding damage to the surrounding healthy tissue.
NASA Astrophysics Data System (ADS)
Yuan, Xin; Shao, Shuai; Stanley, H. Eugene; Havlin, Shlomo
2015-09-01
The stability of networks is greatly influenced by their degree distributions and in particular by their breadth. Networks with broader degree distributions are usually more robust to random failures but less robust to localized attacks. To better understand the effect of the breadth of the degree distribution we study two models in which the breadth is controlled and compare their robustness against localized attacks (LA) and random attacks (RA). We study analytically and by numerical simulations the cases where the degrees in the networks follow a bi-Poisson distribution, P(k) = α e^{-λ1} λ1^k/k! + (1-α) e^{-λ2} λ2^k/k!, α ∈ [0,1], and a Gaussian distribution, P(k) = A exp(-(k-μ)²/(2σ²)), with a normalization constant A, where k ≥ 0. In the bi-Poisson distribution the breadth is controlled by the values of α, λ1, and λ2, while in the Gaussian distribution it is controlled by the standard deviation, σ. We find that only when α = 0 or α = 1, i.e., degrees obeying a pure Poisson distribution, are LA and RA the same. In all other cases networks are more vulnerable under LA than under RA. For a Gaussian distribution with the average degree μ fixed, we find that when σ² is smaller than μ the network is more vulnerable against random attack. When σ² is larger than μ, however, the network becomes more vulnerable against localized attack. Similar qualitative results are also shown for interdependent networks.
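Sampling degrees from the bi-Poisson distribution above is a two-component mixture draw; a numpy sketch with illustrative parameters, checked against the mixture mean αλ1 + (1-α)λ2:

```python
import numpy as np

def sample_bipoisson(alpha, lam1, lam2, size, rng):
    """Sample degrees from P(k) = alpha*Pois(lam1) + (1-alpha)*Pois(lam2)."""
    choose_first = rng.random(size) < alpha
    return np.where(choose_first, rng.poisson(lam1, size), rng.poisson(lam2, size))

rng = np.random.default_rng(7)
alpha, lam1, lam2 = 0.3, 2.0, 10.0
k = sample_bipoisson(alpha, lam1, lam2, 100_000, rng)

mean_theory = alpha * lam1 + (1 - alpha) * lam2          # 7.6
# Mixture variance: broader than either pure Poisson component,
# which is exactly the "breadth" the parameters alpha, lam1, lam2 control.
var_theory = (alpha * (lam1 ** 2 + lam1) + (1 - alpha) * (lam2 ** 2 + lam2)
              - mean_theory ** 2)
```

Feeding such degree sequences into a configuration-model network generator is one way to set up the LA/RA comparison numerically.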
Probabilistic pairwise Markov models: application to prostate cancer detection
NASA Astrophysics Data System (ADS)
Monaco, James; Tomaszewski, John E.; Feldman, Michael D.; Moradi, Mehdi; Mousavi, Parvin; Boag, Alexander; Davidson, Chris; Abolmaesumi, Purang; Madabhushi, Anant
2009-02-01
Markov Random Fields (MRFs) provide a tractable means for incorporating contextual information into a Bayesian framework. This contextual information is modeled using multiple local conditional probability density functions (LCPDFs) which the MRF framework implicitly combines into a single joint probability density function (JPDF) that describes the entire system. However, only LCPDFs of certain functional forms are consistent, meaning they reconstitute a valid JPDF. These forms are specified by the Gibbs-Markov equivalence theorem, which indicates that the JPDF, and hence the LCPDFs, should be representable as a product of potential functions (i.e. Gibbs distributions). Unfortunately, potential functions are mathematical abstractions that lack intuition; consequently, constructing LCPDFs through their selection becomes an ad hoc procedure, usually resulting in generic and/or heuristic models. In this paper we demonstrate that under certain conditions the LCPDFs can be formulated in terms of quantities that are both meaningful and descriptive: probability distributions. Using probability distributions instead of potential functions enables us to construct consistent LCPDFs whose modeling capabilities are both more intuitive and expansive than those of typical MRF models. As an example, we compare the efficacy of our so-called probabilistic pairwise Markov models (PPMMs) to the prevalent Potts model by incorporating both into a novel computer aided diagnosis (CAD) system for detecting prostate cancer in whole-mount histological sections. Using the Potts model, the CAD system detects cancerous glands with a specificity of 0.82 and a sensitivity of 0.71; its area under the receiver operating characteristic (ROC) curve is 0.83. If the PPMM is employed instead, the sensitivity (with specificity held fixed) and AUC increase to 0.77 and 0.87, respectively.
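For intuition on the Potts baseline mentioned above: a minimal iterated-conditional-modes (ICM) sketch that MAP-smooths a noisy two-class label field under a Potts pairwise penalty (illustrative only; the paper's PPMM formulation and CAD pipeline are not reproduced here):

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iter=5):
    """MAP label smoothing: unary[i,j,c] = -log P(obs | label c); Potts pairwise cost."""
    H, W, C = unary.shape
    labels = unary.argmin(axis=2)            # start from the pixelwise MAP
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts term: pay beta for each disagreeing neighbor.
                        cost += beta * (np.arange(C) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels

# Noisy two-class unary field: left half prefers class 0, right half class 1.
rng = np.random.default_rng(3)
H, W = 16, 16
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
unary = rng.normal(0, 0.8, (H, W, 2))
unary[np.arange(H)[:, None], np.arange(W)[None, :], truth] -= 1.0  # true class cheaper on average
smoothed = icm_potts(unary, beta=1.0)
```

Each ICM sweep can only lower the total (unary plus Potts) energy, so isolated label errors are flipped to agree with their neighbors, which is the contextual smoothing the abstract contrasts with the more expressive PPMM.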
Local random quantum circuits: Ensemble completely positive maps and swap algebras
Zanardi, Paolo
2014-08-15
We define different classes of local random quantum circuits (L-RQC) and show that (a) statistical properties of L-RQC are encoded into an associated family of completely positive maps and (b) average purity dynamics can be described by the action of these maps on operator algebras of permutations (swap algebras). An exactly solvable one-dimensional case is analyzed to illustrate the power of the swap-algebra formalism. More generally, we prove short-time area-law bounds on the average purity for uncorrelated L-RQC and infinite-time results for both the uncorrelated and correlated cases.
In vivo MRI based prostate cancer localization with random forests and auto-context model.
Qian, Chunjun; Wang, Li; Gao, Yaozong; Yousuf, Ambereen; Yang, Xiaoping; Oto, Aytekin; Shen, Dinggang
2016-09-01
Prostate cancer is one of the major causes of cancer death for men. Magnetic resonance (MR) imaging is being increasingly used as an important modality to localize prostate cancer. Therefore, localizing prostate cancer in MRI with automated detection methods has become an active area of research. Many methods have been proposed for this task. However, most previous methods focused on identifying cancer only in the peripheral zone (PZ), or on classifying suspicious cancer ROIs into benign and cancerous tissue. Little work has been done on developing a fully automatic method for cancer localization in the entire prostate region, including the central gland (CG) and transition zone (TZ). In this paper, we propose a novel learning-based multi-source integration framework to directly localize prostate cancer regions from in vivo MRI. We employ random forests to effectively integrate features from multi-source images for cancer localization. Here, the multi-source images include initially the multi-parametric MRIs (i.e., T2, DWI, and dADC) and later also the iteratively estimated and refined tissue probability map of prostate cancer. Experimental results on data from 26 patients show that our method can accurately localize cancerous sections. The higher section-based evaluation (SBE), combined with the ROC analysis of individual patients, shows that the proposed method is promising for in vivo MRI based prostate cancer localization, which can be used for guiding prostate biopsy, targeting the tumor in focal therapy planning, triage and follow-up of patients under active surveillance, as well as decision making in treatment selection. The common ROC analysis with an AUC value of 0.832 and the ROI-based ROC analysis with an AUC value of 0.883 both illustrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Yan, Zhi-Zhong; Zhang, Chuanzeng; Wang, Yue-Sheng
2011-03-01
The band structures of in-plane elastic waves propagating in two-dimensional phononic crystals with one-dimensional random disorder and aperiodicity are analyzed in this paper. The localization of wave propagation is discussed by introducing the concept of the localization factor, which is calculated by the plane-wave-based transfer-matrix method. By treating the random disorder and aperiodicity as deviations from periodicity in a special way, three kinds of aperiodic phononic crystals are considered, having normally distributed random disorder, a Thue-Morse sequence, or a Rudin-Shapiro sequence in one direction and translational symmetry in the other direction, and their band structures are characterized using localization factors. In addition, as a special case, we analyze the band gap properties of a periodic planar layered composite containing a periodic array of square inclusions. The transmission coefficients based on eigenmode matching theory are also calculated, and the results show the same behavior as the localization factor. In the case of random disorder, the degree of localization is larger for normally distributed disorder than for uniformly distributed disorder, although the eigenstates are localized for both types of disorder. For the Thue-Morse and Rudin-Shapiro structures, the band structures of the Thue-Morse sequence exhibit similarities with the quasi-periodic (Fibonacci) sequence that are not present in the results of the Rudin-Shapiro sequence.
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
NASA Technical Reports Server (NTRS)
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
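A toy version of this idea, with the "network" reduced to a single weight in a linear model and a broad Gaussian prior standing in for the paper's modified Jeffreys prior, can be sketched as a Metropolis random walk:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: y = 2x + noise. The "network" is one weight w.
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + rng.normal(0, 0.3, 50)

def log_posterior(w, sigma=0.3, prior_sd=10.0):
    log_lik = -0.5 * np.sum((y - w * x) ** 2) / sigma ** 2
    log_prior = -0.5 * w ** 2 / prior_sd ** 2       # broad Gaussian prior (illustrative)
    return log_lik + log_prior

# Metropolis random walk over w.
w, samples = 0.0, []
lp = log_posterior(w)
for step in range(20_000):
    w_new = w + rng.normal(0, 0.2)
    lp_new = log_posterior(w_new)
    if np.log(rng.random()) < lp_new - lp:          # accept with prob min(1, ratio)
        w, lp = w_new, lp_new
    if step >= 5_000:                                # discard burn-in
        samples.append(w)
samples = np.array(samples)
# samples.mean() and samples.std() quantify the residual uncertainty in the weight.
```

The same machinery extends to a full weight vector, at the cost of tuning the proposal and running much longer chains.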
Subensemble decomposition and Markov process analysis of Burgers turbulence.
Zhang, Zhi-Xiong; She, Zhen-Su
2011-08-01
A numerical and statistical study is performed to describe the positive and negative local subgrid energy fluxes in the one-dimensional random-force-driven Burgers turbulence (Burgulence). We use a subensemble method to decompose the field into shock-wave and rarefaction-wave subensembles by the group velocity difference. We observe that the shock-wave subensemble shows a strong intermittency which dominates the whole Burgulence field, while the rarefaction-wave subensemble satisfies the Kolmogorov 1941 (K41) scaling law. We calculate the two subensemble probabilities and find that in the inertial range they maintain scale invariance, an important feature of turbulence self-similarity. We reveal that the interconversion of shock and rarefaction waves during the equation's evolution proceeds in accordance with a Markov process, whose stationary transition probability matrix has elements given by universal functions and which, when the time interval is much greater than the corresponding characteristic value, exhibits scale invariance.
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states reward rate 0. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
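The expected steady-state reward rate can be computed directly: solve πQ = 0 with Σπ_i = 1 for the stationary distribution of the CTMC generator Q, then weight by the per-state reward rates. A minimal sketch for a two-state availability model (rates illustrative):

```python
import numpy as np

def steady_state_reward(Q, reward):
    """Stationary reward rate of a CTMC: solve pi Q = 0, sum(pi) = 1, return pi . r."""
    n = Q.shape[0]
    # Replace one balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    return pi, pi @ reward

# Up/down availability model: failure rate lam, repair rate mu.
lam, mu = 0.001, 0.1
Q = np.array([[-lam, lam],
              [mu, -mu]])
reward = np.array([1.0, 0.0])        # reward rate 1 in the up state, 0 when down
pi, availability = steady_state_reward(Q, reward)
# availability = mu / (lam + mu) ~ 0.990
```

With a performance-oriented reward vector (e.g., computational capacity per state) the same solve yields the expected steady-state performance instead of availability.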
Anisotropy of the monomer random walk in a polymer melt: local-order and connectivity effects
NASA Astrophysics Data System (ADS)
Bernini, S.; Leporini, D.
2016-05-01
The random walk of a bonded monomer in a polymer melt is anisotropic due to local order and bond connectivity. We investigate both effects by molecular-dynamics simulations on melts of fully flexible linear chains ranging from dimers (M = 2) up to entangled polymers (M = 200). The corresponding atomic liquid is also considered as a reference system. To disentangle the influence of the local geometry and the bond arrangements, and to reveal their interplay, we define suitable measures of the anisotropy emphasising either the former or the latter aspect. Connectivity anisotropy, as measured by the correlation between the initial bond orientation and the direction of the subsequent monomer displacement, shows a slight enhancement due to the local order at times shorter than the structural relaxation time. At intermediate times, when the monomer displacement is comparable to the bond length, the correlation exhibits a pronounced peak and then decays slowly as t^{-1/2}, becoming negligible when the displacement is as large as about five bond lengths, i.e. about four monomer diameters or three Kuhn lengths. Local-geometry anisotropy, as measured by the correlation between the initial orientation of a characteristic axis of the Voronoi cell and the subsequent monomer dynamics, is affected at times shorter than the structural relaxation time by the cage shape, with antagonistic disturbance by the connectivity. Differently, at longer times, the connectivity favours the persistence of the local-geometry anisotropy, which vanishes when the monomer displacement exceeds the bond length. Our results strongly suggest that consideration of the local order alone is not enough to understand the microscopic origin of the rattling amplitude of the monomer trapped in the cage of its neighbours.
Inelastic collapse and near-wall localization of randomly accelerated particles.
Belan, S; Chernykh, A; Lebedev, V; Falkovich, G
2016-05-01
Inelastic collapse of stochastic trajectories of a randomly accelerated particle moving in the half-space z > 0 was discovered by McKean [J. Math. Kyoto Univ. 2, 227 (1963)] and then independently rediscovered by Cornell et al. [Phys. Rev. Lett. 81, 1142 (1998)]. The essence of this phenomenon is that the particle arrives at the wall at z = 0 with zero velocity after an infinite number of inelastic collisions if the restitution coefficient β of the particle velocity is smaller than the critical value β_c = exp(-π/√3). We demonstrate that inelastic collapse takes place also in a wide class of models with spatially inhomogeneous random forcing and, what is more, that the critical value β_c is universal. That class includes the important case of inertial particles in wall-bounded random flows. To establish how inelastic collapse influences the particle distribution, we derive the exact equilibrium probability density function ρ(z,v) for the particle position and velocity. The equilibrium distribution exists only at β < β_c and indicates that inelastic collapse does not necessarily imply near-wall localization.
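The critical value has a closed form, and the setup is easy to simulate: integrate z'' = ξ(t) with an inelastic wall at z = 0. The discretization below is a crude Euler-Maruyama illustration with arbitrary parameters, not the paper's analysis:

```python
import numpy as np

beta_c = np.exp(-np.pi / np.sqrt(3.0))   # critical restitution coefficient, ~0.163

def count_collisions(beta, t_max=100.0, dt=1e-3, seed=0):
    """Euler-Maruyama for z'' = white noise in z > 0, inelastic wall at z = 0."""
    rng = np.random.default_rng(seed)
    z, v, collisions = 1.0, 0.0, 0
    for _ in range(int(t_max / dt)):
        v += rng.normal(0.0, np.sqrt(dt))    # random acceleration
        z += v * dt
        if z < 0.0:                           # inelastic collision with the wall
            z = 0.0
            v = -beta * v
            collisions += 1
    return collisions

# In the continuum theory, collision times accumulate (collapse) for beta < beta_c,
# while for beta > beta_c the particle keeps escaping the wall between collisions.
n_collapse = count_collisions(0.05)
n_bounce = count_collisions(0.50)
```

A fixed-step scheme blurs the accumulation of infinitely many collisions into a finite but large count, so the simulation only illustrates the qualitative contrast between the two regimes.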
Random local binary pattern based label learning for multi-atlas segmentation
NASA Astrophysics Data System (ADS)
Zhu, Hancan; Cheng, Hewei; Fan, Yong
2015-03-01
Multi-atlas segmentation methods have attracted increasing attention in the field of medical image segmentation. They segment the target image by combining warped atlas labels according to a label fusion strategy, usually based on the intensity information of the target and atlas images. However, it has been demonstrated that image intensity information by itself is not discriminative enough for distinguishing different subcortical structures in brain magnetic resonance (MR) images. Recent advances in multi-atlas based segmentation have witnessed the success of label fusion methods built on informative image features. The key component in these methods is the image feature extraction. Conventional image feature extraction methods, such as textural feature extraction, are built on manually designed image filters, and their performance varies when applied to different segmentation problems. In this paper, we propose a random local binary pattern (RLBP) method to generate image features in a random fashion. Based on RLBP features, we use a local learning strategy to fuse labels in multi-atlas based segmentation. Our method has been validated for segmenting the hippocampus from MR images. The experimental results demonstrate that our method can achieve segmentation performance competitive with state-of-the-art methods.
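The randomized local-binary-pattern idea can be sketched in a few lines: compare each pixel against a randomly drawn set of neighbor offsets and pack the comparisons into an integer code (the offsets and parameters here are illustrative; the paper's RLBP sampling scheme may differ):

```python
import numpy as np

def random_lbp(image, n_neighbors=8, max_radius=2, seed=0):
    """Binary code per pixel: compare each pixel to a random set of neighbor offsets."""
    rng = np.random.default_rng(seed)
    offsets = rng.integers(-max_radius, max_radius + 1, size=(n_neighbors, 2))
    H, W = image.shape
    padded = np.pad(image, max_radius, mode='edge')
    codes = np.zeros((H, W), dtype=np.int64)
    for bit, (di, dj) in enumerate(offsets):
        # Shifted view of the image at offset (di, dj).
        neighbor = padded[max_radius + di : max_radius + di + H,
                          max_radius + dj : max_radius + dj + W]
        codes |= ((neighbor >= image).astype(np.int64) << bit)
    return codes

rng = np.random.default_rng(5)
img = rng.random((32, 32))
codes = random_lbp(img)   # integer texture codes in [0, 2**8)
```

Drawing many such randomized filters, one per feature, yields a feature bank without manual filter design, which is the property the abstract emphasizes.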
NASA Astrophysics Data System (ADS)
Zhang, Liangsheng; Zhao, Bo; Devakul, Trithep; Huse, David A.
2016-06-01
We present a simplified strong-randomness renormalization group (RG) that captures some aspects of the many-body localization (MBL) phase transition in generic disordered one-dimensional systems. This RG can be formulated analytically and is mathematically equivalent to a domain coarsening model that has been previously solved. The critical fixed-point distribution and critical exponents (that satisfy the Chayes inequality) are thus obtained analytically or to numerical precision. This reproduces some, but not all, of the qualitative features of the MBL phase transition that are indicated by previous numerical work and approximate RG studies: our RG might serve as a "zeroth-order" approximation for future RG studies. One interesting feature that we highlight is that the rare Griffiths regions are fractal. For thermal Griffiths regions within the MBL phase, this feature might be qualitatively correctly captured by our RG. If this is correct beyond our approximations, then these Griffiths effects are stronger than has been previously assumed.
NASA Astrophysics Data System (ADS)
Gui, Ming; Huang, Ming-Qiu; Liang, Lin-Mei
2016-10-01
In practical continuous-variable quantum key distribution (CVQKD) systems, due to environmental disturbance or intrinsic imperfections of devices, the local oscillator (LO) employed in coherent detection inevitably fluctuates over time, which compromises the security and performance of practical CVQKD systems. In this paper, we investigate the performance of practical CVQKD systems with a randomly fluctuating LO. By revising the measurement result of balanced homodyne detection and embedding fluctuation parameters into the security analysis, we find that in addition to the average LO intensity, the fluctuation variance also severely affects the secret key rate; no secret key can be obtained if the fluctuation variance is relatively large. This indicates that in a practical CVQKD system, the LO intensity should be well monitored and stabilized. Our research can be directly applied to improve the robustness of practical CVQKD systems as well as to optimize CVQKD protocols.
Many-body localization in a long range XXZ model with random-field
NASA Astrophysics Data System (ADS)
Li, Bo
2016-12-01
Many-body localization (MBL) in a long-range interaction XXZ model with a random field is investigated. Using the exact diagonalization method, the MBL phase diagram is obtained for different tuning parameters and interaction ranges. The finite-size phase diagram supplies strong evidence that the threshold interaction exponent is α = 2. The tuning parameter Δ can efficiently shift the MBL edge at high-energy-density states, so the system can be driven from the thermal phase to the MBL phase by changing Δ. The energy-level statistics are consistent with the MBL phase diagram; however, the energy-level statistics cannot correctly detect the thermal phase in the extreme long-range case.
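The level-statistics diagnostic mentioned above can be sketched with the standard adjacent-gap ratio. This is a generic illustration on random matrices, not the XXZ model of the paper; all matrix sizes and sample counts are arbitrary choices.

```python
import numpy as np

def gap_ratio(levels):
    """Mean adjacent-gap ratio <r>: ~0.53 for GOE (thermal), ~0.39 for Poisson (MBL)."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(0)

# Poisson statistics: independent levels, as expected deep in the MBL phase.
poisson_r = np.mean([gap_ratio(rng.uniform(0, 1, 500)) for _ in range(50)])

# GOE statistics: eigenvalues of real symmetric random matrices, as in the thermal phase.
def goe_levels(m):
    a = rng.normal(size=(m, m))
    return np.linalg.eigvalsh((a + a.T) / 2)

goe_r = np.mean([gap_ratio(goe_levels(200)) for _ in range(20)])
print(round(poisson_r, 2), round(goe_r, 2))
```

The gap ratio is popular in MBL studies precisely because it needs no unfolding of the spectrum.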
Simulation of Anderson localization in a random fiber using a fast Fresnel diffraction algorithm
NASA Astrophysics Data System (ADS)
Davis, Jeffrey A.; Cottrell, Don M.
2016-06-01
Anderson localization has been previously demonstrated both theoretically and experimentally for transmission of a Gaussian beam through long distances in an optical fiber consisting of a random array of smaller fibers, each having either a higher or lower refractive index. However, the computational times were extremely long. We show how to simulate these results using a fast Fresnel diffraction algorithm. In each iteration of this approach, the light passes through a phase mask, undergoes Fresnel diffraction over a small distance, and then passes through the same phase mask. We also show results where we use a binary amplitude mask at the input that selectively illuminates either the higher or the lower index fibers. Additionally, we examine imaging of various sized objects through these fibers. In all cases, our results are consistent with other computational methods and experimental results, but with a much reduced computational time.
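The iteration described above (phase mask, short Fresnel step, phase mask again) can be sketched with a standard FFT-based angular-spectrum propagator. The grid size, wavelength, step length, and random binary mask below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def fresnel_step(field, dx, wavelength, dz):
    """Propagate a 2D complex field a distance dz with the Fresnel (paraxial) kernel."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kernel = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

rng = np.random.default_rng(1)
n, dx, wl, dz = 256, 0.5e-6, 1.0e-6, 5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (10e-6) ** 2).astype(complex)  # input Gaussian beam
mask = np.exp(1j * np.pi * rng.integers(0, 2, (n, n)))  # random binary phase mask

p0 = (np.abs(field) ** 2).sum()
for _ in range(100):  # mask -> short diffraction step -> mask, as described above
    field = mask * fresnel_step(mask * field, dx, wl, dz)
p_end = (np.abs(field) ** 2).sum()
print(p_end / p0)  # the unitary steps conserve total power
```

Because the kernel and mask both have unit modulus, total power is conserved to machine precision, which is a useful sanity check for this kind of split-step simulation.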
Many-body localization in a quantum simulator with programmable random disorder
NASA Astrophysics Data System (ADS)
Smith, J.; Lee, A.; Richerme, P.; Neyenhuis, B.; Hess, P. W.; Hauke, P.; Heyl, M.; Huse, D. A.; Monroe, C.
2016-10-01
When a system thermalizes it loses all memory of its initial conditions. Even within a closed quantum system, subsystems usually thermalize using the rest of the system as a heat bath. Exceptions to quantum thermalization have been observed, but typically require inherent symmetries or noninteracting particles in the presence of static disorder. However, for strong interactions and high excitation energy there are cases, known as many-body localization (MBL), where disordered quantum systems can fail to thermalize. We experimentally generate MBL states by applying an Ising Hamiltonian with long-range interactions and programmable random disorder to ten spins initialized far from equilibrium. Using experimental and numerical methods we observe the essential signatures of MBL: initial-state memory retention, Poissonian distributed energy level spacings, and evidence of long-time entanglement growth. Our platform can be scaled to more spins, where a detailed modelling of MBL becomes impossible.
Stochastic Dynamics through Hierarchically Embedded Markov Chains
NASA Astrophysics Data System (ADS)
Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.
2017-02-01
Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
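For a concrete baseline, the stationary distribution of a small discrete Markov process with time-invariant transition probabilities can be computed directly; the 3-state matrix below is a made-up example, not taken from the paper.

```python
import numpy as np

# A made-up 3-state chain with time-invariant transition probabilities (rows sum to 1).
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized to sum 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
print(pi)  # satisfies pi @ P == pi
```

Direct eigendecomposition scales poorly with the number of states, which is exactly the regime where hierarchical approximations like the one in the paper become attractive.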
Predicting local Soil- and Land-units with Random Forest in the Senegalese Sahel
NASA Astrophysics Data System (ADS)
Grau, Tobias; Brandt, Martin; Samimi, Cyrus
2013-04-01
MODIS (MCD12Q1) or Globcover are often the only available global land-cover products; however, ground-truthing in the Sahel of Senegal has shown that most classes do not agree with the actual land cover, making these products unusable for local applications. We suggest a methodology which models local Wolof land- and soil-types at different scales in an area of the Senegalese Ferlo around Linguère. In a first step, interviews with the local population were conducted to ascertain the local denotation of soil units, as well as their agricultural use and the woody vegetation mainly growing on them. "Ndjor" are soft sand soils with mainly Combretum glutinosum trees. They are suitable for groundnuts and beans, while millet is grown on hard sand soils ("Bardjen") dominated by Balanites aegyptiaca and Acacia tortilis. "Xur" are clayey depressions with a high diversity of tree species. Lateritic pasture sites with dense woody vegetation (mostly Pterocarpus lucens and Guiera senegalensis) have never been used for cropping and are called "All". In a second step, vegetation and soil parameters of 85 plots (~1 ha) were surveyed in the field. 28 soil parameters were clustered into 4 classes using the WARD algorithm; 81% of the plots agree with the local classification. An ordination (NMDS) with 2 dimensions and a stress value of 9.13% was then calculated from the 28 soil parameters. It shows several significant relationships between the soil classes and the fitted environmental parameters, which are derived from field data, a digital elevation model, Landsat and RapidEye imagery, as well as TRMM rainfall data. Landsat band 5 reflectance (1.55-1.75 µm) of the mean dry-season image (2000-2010) has an R² of 0.42 and is the most important of 9 significant variables (5% level). A random forest classifier is then used to extrapolate the 4 classes to the whole study area based on the 9 significant environmental parameters. At a resolution of 30 m the OOB (out-of-bag) error
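The out-of-bag error reported for random forests can be illustrated with a minimal bagging sketch. To stay self-contained, this uses depth-1 "stumps" and synthetic two-class data rather than full decision trees or the field data of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class stand-in for the soil data: 2 features, roughly separable.
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def fit_stump(X, y):
    """Best single-feature threshold split (a depth-1 'tree')."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            acc = (pred == y).mean()
            acc, flip = max(acc, 1 - acc), acc < 0.5
            if best is None or acc > best[0]:
                best = (acc, f, t, flip)
    return best[1:]

def stump_predict(model, X):
    f, t, flip = model
    pred = (X[:, f] > t).astype(int)
    return 1 - pred if flip else pred

# Bagging with out-of-bag (OOB) voting, the error estimate random forests report.
models, oob_votes, oob_counts = [], np.zeros((n, 2)), np.zeros(n)
for _ in range(50):
    idx = rng.integers(0, n, n)            # bootstrap sample
    oob = np.setdiff1d(np.arange(n), idx)  # points left out of this bootstrap
    m = fit_stump(X[idx], y[idx])
    models.append(m)
    p = stump_predict(m, X[oob])
    oob_votes[oob, p] += 1
    oob_counts[oob] += 1

seen = oob_counts > 0
oob_error = (oob_votes[seen].argmax(axis=1) != y[seen]).mean()
print(oob_error)  # low for this nearly separable toy problem
```

Each point is out-of-bag in roughly a third of the rounds, so the OOB error is an almost-free cross-validation estimate.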
Adjuvant chemo- and hormonal therapy in locally advanced breast cancer: a randomized clinical study
Schaake-Koning, C.; van der Linden, E.H.; Hart, G.; Engelsman, E.
1985-10-01
Between 1977 and 1980, 118 breast cancer patients with locally advanced disease (T3B-4, any N, M0, or T1-3 with a tumor-positive axillary apex biopsy) were randomized to one of three arms: I: radiotherapy (RT) to the breast and adjacent lymph node areas; II: RT followed by 12 cycles of cyclophosphamide, methotrexate, and 5-fluorouracil (CMF), with tamoxifen during the chemotherapy period; III: 2 cycles of adriamycin and vincristine (AV) alternated with 2 cycles of CMF, then RT, followed by another 4 cycles of AV alternated with 4 of CMF, with tamoxifen during the entire treatment period. The median follow-up period was 5 1/2 years. The adjuvant chemo- and hormonal therapy did not improve overall survival; the 5-year survival was 37% for all three treatment arms. There was no statistically significant difference in relapse-free survival (RFS) between the three modalities, nor when arm I was compared to arms II and III together. Local recurrence (LR) was not statistically different over the three treatment arms. In 18 of the 24 patients with LR, distant metastases appeared within a few months of the local recurrence. The menopausal status did not influence the treatment results. Dose reduction in more than 4 cycles of chemotherapy was accompanied by better results. In conclusion, adjuvant chemo- and hormonal therapy did not improve RFS or overall survival. These findings do not support the routine use of adjuvant chemo- and endocrine therapy for inoperable breast cancer.
Phase transitions in Hidden Markov Models
NASA Astrophysics Data System (ADS)
Bechhoefer, John; Lathouwers, Emma
In Hidden Markov Models (HMMs), a Markov process is not directly accessible. In the simplest case, a two-state Markov model ``emits'' one of two ``symbols'' at each time step. We can think of these symbols as noisy measurements of the underlying state. With some probability, the symbol implies that the system is in one state when it is actually in the other. The ability to judge which state the system is in sets the efficiency of a Maxwell demon that observes state fluctuations in order to extract heat from a coupled reservoir. The state-inference problem is to infer the underlying state from such noisy measurements at each time step. We show that there can be a phase transition in such measurements: for measurement error rates below a certain threshold, the inferred state always matches the observation. For higher error rates, there can be continuous or discontinuous transitions to situations where keeping a memory of past observations improves the state estimate. We can partly understand this behavior by mapping the HMM onto a 1d random-field Ising model at zero temperature. We also present more recent work that explores a larger parameter space and more states. Research funded by NSERC, Canada.
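The state-inference step for such a two-state HMM is the standard forward (filtering) recursion; the flip probability and symbol error rate below are assumed values for illustration, not the parameters studied in the talk.

```python
import numpy as np

a, e = 0.1, 0.3  # assumed state-flip probability and symbol error rate
T = np.array([[1 - a, a], [a, 1 - a]])  # hidden-state transition matrix
E = np.array([[1 - e, e], [e, 1 - e]])  # E[state, symbol] = P(symbol | state)

def filter_states(symbols):
    """Forward recursion: posterior P(state_t | symbols_0..t)."""
    p = np.array([0.5, 0.5])
    out = []
    for s in symbols:
        p = E[:, s] * (T.T @ p)  # predict one step, then weight by the symbol likelihood
        p /= p.sum()
        out.append(p.copy())
    return np.array(out)

post = filter_states([0, 0, 1, 0, 0])
print(post[-1])  # posterior favors state 0 after mostly-0 observations
```

Keeping this running posterior is exactly the "memory of past observations" whose usefulness the abstract says switches on at a threshold error rate.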
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856
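DHR-RP itself is not reproduced here, but the general idea of local thresholding for vessel images can be sketched with an integral-image local-mean threshold. The window size, offset, and synthetic "vessel" image are illustrative assumptions.

```python
import numpy as np

def local_threshold(img, w=15, offset=0.0):
    """Binarize: pixel > (local mean over a w x w window) + offset, via an integral image."""
    pad = np.pad(img.astype(float), ((1, 0), (1, 0)))
    ii = pad.cumsum(0).cumsum(1)  # ii[i, j] = sum of img[:i, :j]
    h, wd = img.shape
    r = w // 2
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(wd) - r, 0, wd); x1 = np.clip(np.arange(wd) + r + 1, 0, wd)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    area = (Y1 - Y0) * (X1 - X0)
    mean = (ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]) / area
    return img > mean + offset

# Synthetic test image: a bright horizontal "vessel" on a noisy intensity gradient.
rng = np.random.default_rng(2)
img = np.linspace(0, 50, 100)[None, :] + rng.normal(0, 2, (100, 100))
img[48:52, :] += 30
binary = local_threshold(img, w=15, offset=5.0)
print(binary[48:52, :].mean(), binary[:40, :].mean())  # vessel mostly True, background mostly False
```

A local mean tolerates the smooth background gradient that defeats a single global threshold, which is the same motivation the abstract gives for local criteria on contaminated capillary images.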
Fawzy El-Sayed, Karim M; Dahaba, Moushira A; Aboul-Ela, Shadw; Darhous, Mona S
2012-08-01
Hyaluronic acid application has been proven to be beneficial in a number of medical disciplines. The aim of the current study was to clinically evaluate the effect of local application of hyaluronan gel in conjunction with periodontal surgery. Fourteen patients with chronic periodontitis having four interproximal intrabony defects (≥3 mm) with probing depth values >5 mm were included in this split-mouth study. Following initial nonsurgical periodontal therapy and re-evaluation, defects were randomly assigned to be treated with modified Widman flap (MWF) surgery in conjunction with either 0.8% hyaluronan gel (test) or placebo gel (control) application. Clinical attachment level (CAL), probing depth (PD), gingival recession (GR), plaque index (PI), and bleeding on probing (BOP) values were taken at baseline and 3 and 6 months. Differences between test and control sites were evaluated using a Wilcoxon signed-rank and a McNemar test. A Friedman and a Cochran test were used to test equal ranks over time. Statistically significant differences were noted for CAL and GR (P < 0.05) in favor of the test sites. No significant differences were found regarding PD, BOP, or PI values (P > 0.05). Hyaluronan gel application in conjunction with periodontal surgery appears to result in significant improvement of CAL and in a reduction in GR. Hyaluronan gel application appears to improve the clinical outcome of MWF surgery.
Vaishya, Raju; Wani, Ajaz Majeed; Vijay, Vipul
2015-12-01
Postoperative analgesia following Total Knee Arthroplasty (TKA) with parenteral opioids or epidural analgesia can be associated with important side effects. Good perioperative analgesia facilitates faster rehabilitation, improves patient satisfaction, and may reduce the hospital stay. We investigated the analgesic effect of a locally injected mixture of drugs in a double-blinded randomized controlled trial of 80 primary TKAs. Patients were randomized either to receive a periarticular mixture of drugs containing bupivacaine, ketorolac, morphine, and adrenaline or to receive normal saline. Visual analog scores (VAS) for pain (at rest and during activity) and for patient satisfaction, as well as range of motion, were recorded postoperatively. The patients who had received the periarticular injection used significantly less patient-controlled analgesia (PCA) after surgery than the control group. In addition, they had lower VAS for pain during rest and activity and higher visual analog scores for patient satisfaction 72 hours postoperatively. No major complication related to the drugs was observed. Intraoperative periarticular injection with multimodal drugs following TKA can significantly reduce postoperative pain and hence the requirement for PCA and the hospital stay, with no apparent risks.
Fast Threshold image segmentation based on 2D Fuzzy Fisher and Random Local Optimized QPSO.
Zhang, Chunming; Xie, Yongchun; Liu, Da; Wang, Li
2016-10-26
In this paper, a real-time segmentation method that separates the target signal from the navigation image is proposed. In the approach-and-docking stage, the navigation image is composed of target and non-target signals, which are, respectively, a bright spot and the space vehicle itself. Since the non-target signal is the main part of the navigation image, traditional entropy-related and Otsu-related criteria produce inadequate segmentation, while the plain 2D Fisher criterion causes over-segmentation; all of these methods show their shortcomings in this kind of case. To guarantee precise image segmentation, a revised 2D fuzzy Fisher criterion is proposed to trade off positioning target regions against retaining fuzzy target boundaries. First, to reduce redundant computation in finding the threshold pair, a 2D fuzzy Fisher criterion based on integral images is established by simplifying the corresponding fuzzy domains. Then, to speed convergence, a random orthogonal component is added to the quasi-optimal particle in each iteration to enhance its local search capacity. Experimental results show the method's capability for fast segmentation.
Zhang, Yu; Li, Yan; Shao, Hao; Zhong, Yaozhao; Zhang, Sai; Zhao, Zongxi
2012-06-01
Band structure and wave localization are investigated for sea surface water waves over large-scale sand wave topography. Sand wave height, sand wave width, water depth, and water width between adjacent sand waves have significant impact on band gaps. Random fluctuations of sand wave height, sand wave width, and water depth induce water wave localization. However, random water width produces a perfect transmission tunnel of water waves at a certain frequency so that localization does not occur no matter how large a disorder level is applied. Together with theoretical results, the field experimental observations in the Taiwan Bank suggest band gap and wave localization as the physical mechanism of sea surface water wave propagating over natural large-scale sand waves.
Markov Chain Analysis of Musical Dice Games
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. Contrary to human languages, entropy dominates over redundancy in the musical dice games based on these compositions of classical music. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First passage times to notes can be used to resolve tonality and characterize a composer.
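Estimating a transition matrix from a note sequence, and the entropy/redundancy comparison the abstract mentions, can be sketched as follows. The toy note string stands in for the MIDI corpus, which is not reproduced here.

```python
import numpy as np

notes = "CDECDEFGFEDCCDECDEFGFEDC"  # toy stand-in for a MIDI-encoded piece
states = sorted(set(notes))
idx = {s: i for i, s in enumerate(states)}

# First-order transition matrix estimated from bigram counts.
P = np.zeros((len(states), len(states)))
for a, b in zip(notes, notes[1:]):
    P[idx[a], idx[b]] += 1
P /= P.sum(axis=1, keepdims=True)

# Entropy rate; empirical note frequencies approximate the stationary distribution.
freq = np.array([notes.count(s) for s in states], float)
freq /= freq.sum()
row_H = np.array([-(p[p > 0] * np.log2(p[p > 0])).sum() for p in P])
entropy_rate = (freq * row_H).sum()
redundancy = 1 - entropy_rate / np.log2(len(states))
print(round(entropy_rate, 3), round(redundancy, 3))
```

Redundancy here is measured against the maximum of log2(number of states) bits per symbol, so "entropy dominates over redundancy" corresponds to a redundancy well below one half.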
Markov chain for estimating human mitochondrial DNA mutation pattern
NASA Astrophysics Data System (ADS)
Vantika, Sandy; Pasaribu, Udjianna S.
2015-12-01
The Markov chain was proposed to estimate the human mitochondrial DNA mutation pattern. One DNA sequence was taken randomly from 100 sequences in GenBank. The nucleotide transition matrix and mutation transition matrix were estimated from this sequence. We determined whether the states (mutation/normal) are recurrent or transient. The results showed that both are recurrent.
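The nucleotide transition matrix estimation can be sketched as follows; the short sequence is made up, standing in for the GenBank sequence used in the paper.

```python
import numpy as np

seq = "ACGTACGGTACCTGAACGTTAGC"  # made-up stand-in for the GenBank mtDNA sequence
bases = "ACGT"
counts = np.zeros((4, 4))
for a, b in zip(seq, seq[1:]):
    counts[bases.index(a), bases.index(b)] += 1
P = counts / counts.sum(axis=1, keepdims=True)  # nucleotide transition matrix

# A finite chain whose states all communicate is irreducible, so every state is
# recurrent, mirroring the paper's conclusion for its mutation/normal states.
print(np.round(P, 2))
```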
Catton, Charles N; Lukka, Himu; Gu, Chu-Shu; Martin, Jarad M; Supiot, Stéphane; Chung, Peter W M; Bauman, Glenn S; Bahary, Jean-Paul; Ahmed, Shahida; Cheung, Patrick; Tai, Keen Hun; Wu, Jackson S; Parliament, Matthew B; Tsakiridis, Theodoros; Corbett, Tom B; Tang, Colin; Dayes, Ian S; Warde, Padraig; Craig, Tim K; Julian, Jim A; Levine, Mark N
2017-03-15
Purpose: Men with localized prostate cancer often are treated with external radiotherapy (RT) over 8 to 9 weeks. Hypofractionated RT is given over a shorter time with larger doses per treatment than standard RT. We hypothesized that hypofractionation versus conventional fractionation is similar in efficacy without increased toxicity. Patients and Methods: We conducted a multicenter randomized noninferiority trial in intermediate-risk prostate cancer (T1 to 2a, Gleason score ≤ 6, and prostate-specific antigen [PSA] 10.1 to 20 ng/mL; T2b to 2c, Gleason ≤ 6, and PSA ≤ 20 ng/mL; or T1 to 2, Gleason = 7, and PSA ≤ 20 ng/mL). Patients were allocated to conventional RT of 78 Gy in 39 fractions over 8 weeks or to hypofractionated RT of 60 Gy in 20 fractions over 4 weeks. Androgen deprivation was not permitted with therapy. The primary outcome was biochemical-clinical failure (BCF) defined by any of the following: PSA failure (nadir + 2), hormonal intervention, clinical local or distant failure, or death as a result of prostate cancer. The noninferiority margin was 7.5% (hazard ratio < 1.32). Results: Median follow-up was 6.0 years. One hundred nine of 608 patients in the hypofractionated arm versus 117 of 598 in the standard arm experienced BCF. Most of the events were PSA failures. The 5-year BCF disease-free survival was 85% in both arms (hazard ratio [short v standard], 0.96; 90% CI, 0.77 to 1.2). Ten deaths as a result of prostate cancer occurred in the short arm and 12 in the standard arm. No significant differences were detected between arms for grade ≥ 3 late genitourinary and GI toxicity. Conclusion: The hypofractionated RT regimen used in this trial was not inferior to conventional RT and was not associated with increased late toxicity. Hypofractionated RT is more convenient for patients and should be considered for intermediate-risk prostate cancer.
Tumor segmentation on FDG-PET: usefulness of locally connected conditional random fields
NASA Astrophysics Data System (ADS)
Nishio, Mizuho; Kono, Atsushi K.; Koyama, Hisanobu; Nishii, Tatsuya; Sugimura, Kazuro
2015-03-01
This study aimed to develop software for tumor segmentation on 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET). To segment the tumor from the background, we used graph cut, whose segmentation energy was generally divided into two terms: the unary and pairwise terms. Locally connected conditional random fields (LCRF) was proposed for the pairwise term. In LCRF, a three-dimensional cubic window with length L was set for each voxel, and voxels within the window were considered for the pairwise term. To evaluate our method, 64 clinically suspected metastatic bone tumors were tested, which were revealed by FDG-PET. To obtain ground truth, the tumors were manually delineated via consensus of two board-certified radiologists. To compare the LCRF accuracy, other types of segmentation were also applied such as region-growing based on 35%, 40%, and 45% of the tumor maximum standardized uptake value (RG35, RG40, and RG45, respectively), SLIC superpixels (SS), and region-based active contour models (AC). To validate the tumor segmentation accuracy, a dice similarity coefficient (DSC) was calculated between manual segmentation and result of each technique. The DSC difference was tested using the Wilcoxon signed rank test. The mean DSCs of LCRF at L = 3, 5, 7, and 9 were 0.784, 0.801, 0.809, and 0.812, respectively. The mean DSCs of other techniques were RG35, 0.633; RG40, 0.675; RG45, 0.689; SS, 0.709; and AC, 0.758. The DSC differences between LCRF and other techniques were statistically significant (p <0.05). In conclusion, tumor segmentation was more reliably performed with LCRF relative to other techniques.
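The dice similarity coefficient (DSC) used for validation is straightforward to compute; the two masks below are toy stand-ins for the manual and automatic segmentations.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), bool); manual[2:8, 2:8] = True  # 36 "voxels"
auto = np.zeros((10, 10), bool);   auto[3:8, 3:8] = True    # 25 "voxels", fully inside manual
print(dice(manual, auto))  # 2*25/(36+25) = 50/61 ≈ 0.8197
```

A DSC of 1 means perfect overlap and 0 means none, so the paper's LCRF values around 0.8 indicate substantially better agreement with the manual ground truth than the region-growing baselines near 0.63-0.69.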
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
NASA Astrophysics Data System (ADS)
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs are useful but have some limitations. Therefore, by using the Mixture of Dirichlet Processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We conduct a simulation study using MCMC methods to investigate the performance of this model.
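A minimal MCMC example (random-walk Metropolis for a Gaussian mean, not the MDPHMM itself) illustrates the simulate-from-the-posterior step the abstract refers to. Data, proposal scale, and iteration counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.0, 100)  # toy data; posterior of the mean is N(mean(x), 1/100)

def log_post(mu):
    # log posterior up to a constant: Gaussian likelihood with known variance, flat prior
    return -0.5 * ((x - mu) ** 2).sum()

samples, mu = [], 0.0
for _ in range(20000):
    prop = mu + rng.normal(0, 0.3)  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop  # Metropolis accept; otherwise keep the current value
    samples.append(mu)

est = np.mean(samples[5000:])  # posterior mean estimate after discarding burn-in
print(round(est, 2))
```

For conjugate models like this one the posterior is available in closed form, which makes it a convenient check that the sampler concentrates where it should.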
Markov and non-Markov processes in complex systems by the dynamical information entropy
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Gafarov, F. M.
1999-12-01
We consider Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy, alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-term numeral and pattern human memory and the effect of stress on the dynamical tapping test); random dynamics of RR intervals in the human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and chaotic dynamics of the parameters of financial markets and ecological systems.
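The Shannon-entropy building block of such analyses can be sketched as follows; the binning choice and the two toy signals are illustrative assumptions, not the DISE method itself.

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Shannon entropy (bits) of the empirical histogram of a signal."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0, 20 * np.pi, 2000))  # strongly structured signal
noise = rng.uniform(-1, 1, 2000)                    # uncorrelated signal
print(shannon_entropy(regular), shannon_entropy(noise))  # noise is near the 4-bit maximum
```

Comparing such entropies across signals (or across time windows of one signal) is the simplest version of the correlation-versus-randomness bookkeeping the abstract describes.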
NASA Astrophysics Data System (ADS)
Faggiani, Rémi; Baron, Alexandre; Zang, Xiaorun; Lalouat, Loïc; Schulz, Sebastian A.; O’Regan, Bryan; Vynck, Kevin; Cluzel, Benoît; de Fornel, Frédérique; Krauss, Thomas F.; Lalanne, Philippe
2016-06-01
Light localization due to random imperfections in periodic media is paramount in photonics research. The group index is known to be a key parameter for localization near photonic band edges, since small group velocities reinforce light interaction with imperfections. Here, we show that the size of the smallest localized mode that is formed at the band edge of a one-dimensional periodic medium is driven instead by the effective photon mass, i.e. the flatness of the dispersion curve. Our theoretical prediction is supported by numerical simulations, which reveal that photonic-crystal waveguides can exhibit surprisingly small localized modes, much smaller than those observed in Bragg stacks thanks to their larger effective photon mass. This possibility is demonstrated experimentally with a photonic-crystal waveguide fabricated without any intentional disorder, for which near-field measurements allow us to distinctly observe a wavelength-scale localized mode despite the smallness (~1/1000 of a wavelength) of the fabrication imperfections.
Harmonic Oscillator Model for Radin's Markov-Chain Experiments
NASA Astrophysics Data System (ADS)
Sheehan, D. P.; Wright, J. H.
2006-10-01
The conscious observer stands as a central figure in the measurement problem of quantum mechanics. Recent experiments by Radin involving linear Markov chains driven by random number generators illuminate the role and temporal dynamics of observers interacting with quantum mechanically labile systems. In this paper a Lagrangian interpretation of these experiments indicates that the evolution of Markov chain probabilities can be modeled as damped harmonic oscillators. The results are best interpreted in terms of symmetric equicausal determinism rather than strict retrocausation, as posited by Radin. Based on the present analysis, suggestions are made for more advanced experiments.
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our results are as follows: we develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes; we show that processes at distance zero are bisimilar; we describe a decision procedure to compute the distance between two processes; we show that reasoning about approximate equivalence can be done compositionally, since process combinators do not increase distance; and we introduce an asymptotic metric to capture asymptotic properties of Markov chains and show that parallel composition does not increase asymptotic distance.
Markov Chains and Chemical Processes
ERIC Educational Resources Information Center
Miller, P. J.
1972-01-01
Views as important the relating of abstract ideas of modern mathematics now being taught in the schools to situations encountered in the sciences. Describes use of matrices and Markov chains to study first-order processes. (Author/DF)
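A first-order process such as reversible isomerization maps directly onto matrix powers of a Markov transition matrix; the per-step probabilities below are hypothetical rate constants times a small time step.

```python
import numpy as np

# Reversible isomerization A <-> B as a two-state chain; per-step probabilities are
# hypothetical rate constants times a time step (k1*dt = 0.02, k2*dt = 0.01).
P = np.array([[0.98, 0.02],
              [0.01, 0.99]])

state = np.array([1.0, 0.0])  # start from pure A
for _ in range(1000):
    state = state @ P  # one time step of the first-order process

print(np.round(state, 3))  # approaches the equilibrium ratio k2:k1 = 1/3 : 2/3
```

The long-time composition is the stationary distribution of the chain, which reproduces the usual equilibrium-constant result k1/k2 for the B:A ratio.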
Karimzadeh, Afshin; Raeissadat, Seyed Ahmad; Erfani Fam, Saleh; Sedighipour, Leyla; Babaei-Ghazani, Arash
2017-03-01
Plantar fasciitis is the most common cause of heel pain. Local injection modalities are among treatment options in patients with resistant pain. The aim of the present study was to evaluate the effect of local autologous whole blood compared with corticosteroid local injection in treatment of plantar fasciitis. In this randomized controlled multicenter study, 36 patients with chronic plantar fasciitis were recruited. Patients were allocated randomly into three treatment groups: local autologous blood, local corticosteroid injection, and control groups receiving no injection. Patients were assessed with visual analog scale (VAS), pressure pain threshold (PPT), and plantar fasciitis pain/disability scale (PFPS) before treatment, as well as 4 and 12 weeks post therapy. Variables of pain and function improved significantly in both corticosteroid and autologous blood groups compared to control group. At 4 weeks following treatment, patients in corticosteroid group had significantly lower levels of pain than patients in autologous blood and control groups (higher PPT level, lower PFPS, and VAS). After 12 weeks of treatment, both corticosteroid and autologous blood groups had lower average levels of pain than control group. The corticosteroid group showed an early sharp and then more gradual improvement in pain scores, but autologous blood group had a steady gradual drop in pain. Autologous whole blood and corticosteroid local injection can both be considered as effective methods in the treatment of chronic plantar fasciitis. These treatments decrease pain and significantly improve function compared to no treatment.
Tracking Human Pose Using Max-Margin Markov Models.
Zhao, Lin; Gao, Xinbo; Tao, Dacheng; Li, Xuelong
2015-12-01
We present a new method for tracking human pose by employing max-margin Markov models. Representing the human body by part-based models, such as the pictorial structure, the problem of pose tracking can be modeled by a discrete Markov random field. Since max-margin Markov networks provide an efficient way to deal with structured data and offer strong generalization guarantees, it is natural to learn the model parameters using the max-margin technique. Because tracking human pose requires coupling limbs in adjacent frames, the model introduces loops and becomes intractable for learning and inference. Previous work has resorted to pose estimation methods, which discard temporal information by parsing frames individually. Alternatively, approximate inference strategies have been used, which can overfit to the statistics of a particular data set. Thus, the performance and generalization of these methods are limited. In this paper, we approximate the full model by introducing an ensemble of two tree-structured sub-models: Markov networks for spatial parsing and Markov chains for temporal parsing. Both models can be trained jointly using the max-margin technique, and an iterative parsing process is proposed to achieve the ensemble inference. We apply our model to three challenging data sets, which contain highly varied and articulated poses. Comprehensive experimental results demonstrate the superior performance of our method over state-of-the-art approaches.
Assessing significance in a Markov chain without mixing.
Chikina, Maria; Frieze, Alan; Pegden, Wesley
2017-03-14
We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is an ε-outlier compared with the sampled ranks (its rank is in the bottom ε of sampled ranks), then this observation should correspond to a p value of ε. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at √(2ε) is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
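The test itself is simple enough to sketch. The toy reversible chain and helper names below are our own; the √(2ε) significance bound is the paper's result:

```python
import math
import random

def sqrt_eps_test(step, value, start, n_steps, rng):
    """Walk n_steps from the presented state, measure eps = the fraction
    of visited states whose value is <= the presented state's value,
    and report the paper's significance level sqrt(2 * eps)."""
    visited, s = [start], start
    for _ in range(n_steps):
        s = step(s, rng)
        visited.append(s)
    v0 = value(start)
    eps = sum(1 for t in visited if value(t) <= v0) / len(visited)
    return math.sqrt(2.0 * eps)

# Toy reversible chain: a lazy random walk on {0, ..., 100}.
def step(s, rng):
    return min(100, max(0, s + rng.choice([-1, 0, 1])))

# State 0 has the smallest possible value, so it shows up as an outlier.
p = sqrt_eps_test(step, lambda s: s, 0, 2000, random.Random(7))
```

Note that no mixing-time bound enters anywhere: the walk length is arbitrary, and reversibility alone justifies the reported level.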
Robust 3D object localization and pose estimation for random bin picking with the 3DMaMa algorithm
NASA Astrophysics Data System (ADS)
Skotheim, Øystein; Thielemann, Jens T.; Berge, Asbjørn; Sommerfelt, Arne
2010-02-01
Enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate the position with the precision necessary for a gripping robot to pick it up. We describe a system that consists of a structured light instrument for capturing 3D data and a robust approach for object location and pose estimation. The method does not depend on segmentation of range images, but instead searches through pairs of 2D manifolds to localize candidates for object match. This leads to an algorithm that is not very sensitive to scene complexity or the number of objects in the scene. Furthermore, the strategy for candidate search is easily reconfigurable to arbitrary objects. Experiments reported in this paper show the utility of the method on a general random bin picking problem, in this paper exemplified by localization of car parts with random position and orientation. Full pose estimation is done in less than 380 ms per image. We believe that the method is applicable for a wide range of industrial automation problems where precise localization of 3D objects in a scene is needed.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
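Concretely, for a discrete-time state-space model x[k+1] = Ax[k] + Bu[k], y[k] = Cx[k] + Du[k], the Markov parameters are h_0 = D and h_k = CA^(k-1)B for k ≥ 1, and they coincide with the unit-pulse response samples. The sketch below, with an arbitrary example system of our own choosing, checks this identity numerically:

```python
import numpy as np

# Arbitrary stable example system (not from the paper).
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def markov_parameters(A, B, C, D, n):
    """First n Markov parameters: h_0 = D, h_k = C A^(k-1) B."""
    h, Ak = [D], np.eye(A.shape[0])
    for _ in range(1, n):
        h.append(C @ Ak @ B)
        Ak = Ak @ A
    return [m.item() for m in h]

h = markov_parameters(A, B, C, D, 5)

# The Markov parameters equal the samples of the unit-pulse response.
x, y = np.zeros((2, 1)), []
for k in range(5):
    u = np.array([[1.0]]) if k == 0 else np.array([[0.0]])
    y.append((C @ x + D @ u).item())
    x = A @ x + B @ u
```

With these matrices, h = [0, 1, 0.6, 0.33, ...], matching the simulated pulse response term by term.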
NASA Astrophysics Data System (ADS)
Lee, Kean Loon; Grémaud, Benoît; Miniatura, Christian
2014-10-01
As recently discovered [T. Karpiuk et al., Phys. Rev. Lett. 109, 190601 (2012), 10.1103/PhysRevLett.109.190601], Anderson localization in a bulk disordered system triggers the emergence of a coherent forward scattering (CFS) peak in momentum space, which twins the well-known coherent backscattering (CBS) peak observed in weak localization experiments. Going beyond the perturbative regime, we address here the long-time dynamics of the CFS peak in a one-dimensional random system and we relate this novel interference effect to the statistical properties of the eigenfunctions and eigenspectrum of the corresponding random Hamiltonian. Our numerical results show that the dynamics of the CFS peak is governed by the logarithmic level repulsion between localized states, with a time scale that is, with good accuracy, twice the Heisenberg time. This is in perfect agreement with recent findings based on the nonlinear sigma model. In the stationary regime, the width of the CFS peak in momentum space is inversely proportional to the localization length, reflecting the exponential decay of the eigenfunctions in real space, while its height is exactly twice the background, reflecting the Poisson statistical properties of the eigenfunctions. It would be interesting to extend our results to higher dimensional systems and other symmetry classes.
Monte Carlo non-local means: random sampling for large-scale image filtering.
Chan, Stanley H; Zickler, Todd; Lu, Yue M
2014-08-01
We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
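The core idea of MCNLM can be sketched in a few lines: compute the NLM weighted average at a pixel from a random subset of the distances rather than all of them. The 1-D signal, pixel-wise (rather than patch-wise) distances, bandwidth, and sampling ratio below are our own illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": pixel i is denoised as a weighted average of pixels j,
# with weights w_ij = exp(-(x_i - x_j)^2 / h) as in NLM.
x = 5.0 + rng.normal(0.0, 1.0, 4000)

def nlm_pixel(x, i, h, idx):
    """NLM estimate of pixel i computed from the index subset idx only;
    idx = all indices recovers the full (deterministic) NLM output."""
    w = np.exp(-((x[idx] - x[i]) ** 2) / h)
    return float(np.sum(w * x[idx]) / np.sum(w))

full = nlm_pixel(x, 0, 2.0, np.arange(x.size))

# Monte Carlo variant: a uniform random 10% sample of the distances.
idx = rng.choice(x.size, size=400, replace=False)
mc = nlm_pixel(x, 0, 2.0, idx)
```

As the abstract's concentration bounds suggest, the randomized estimate stays close to the full computation, and the gap shrinks as the image (or external database) grows.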
NASA Astrophysics Data System (ADS)
Finkemeier, Frank; von Niessen, Wolfgang
1998-08-01
Three different models for a-Si are studied with respect to the vibrational density of states (VDOS) and phonon localization. The degree of disorder is varied over a large range for each model. For all models, structural properties are investigated in connection with the VDOS. Phonon localization is examined via scaling approaches, and mobility edges are quantified. Two of the models are continuous random networks (CRNs): the vacancy model and the Wooten-Winer-Weaire (WWW) model, both relaxed with the Keating potential. The vacancy model causes the appearance of an artificial high-energy shoulder of the TO peak, which leads to incorrect predictions about localization as well. This shortcoming of the vacancy model is caused by a second maximum of the bond-angle distribution at large angles. The WWW model is here the superior CRN model for a-Si. It allows a good reproduction of the experimental VDOS and possesses only about 1% localized states at the upper edge of the VDOS. In the third model, the WWW model relaxed with the Stillinger-Weber potential, dangling bonds and floating bonds are introduced. Its only shortcoming is an artificial maximum in the radial distribution function below the second diffraction peak. Due to defects, extra modes at low energies are found that are highly dependent on the quality of the relaxation. The VDOS is well reproduced. About 2% of the modes at high energies are localized. The modes at the lowest energies look localized when systems of fewer than 2000 atoms are studied. It turns out that large systems of up to 8000 atoms and many independent realizations are required to interpret the phonon properties correctly. The amount of localization is found to be independent of the degree of disorder present in the model, but an increase in the number of localized states with decreasing density is observed. The present investigation permits statements about the suitability of models for amorphous solids, relaxation procedures, standard potentials, and procedures to
Non-Amontons-Coulomb local friction law of randomly rough contact interfaces with rubber
NASA Astrophysics Data System (ADS)
Nguyen, Danh Toan; Wandersman, Elie; Prevost, Alexis; Le Chenadec, Yohan; Fretigny, Christian; Chateauminois, Antoine
2013-12-01
We report on measurements of the local friction law at a multi-contact interface formed between a smooth rubber and statistically rough glass lenses, under steady-state friction. Using contact imaging, surface displacements are measured, and inverted to extract both distributions of frictional shear stress and contact pressure with a spatial resolution of about 10 μm. For a glass surface whose topography is self-affine with a Gaussian height asperity distribution, the local frictional shear stress is found to vary sub-linearly with the local contact pressure over the whole investigated pressure range. Such sub-linear behavior is also evidenced for a surface with a non-Gaussian height asperity distribution, demonstrating that, for such multi-contact interfaces, Amontons-Coulomb's friction law does not prevail at the local scale.
A self-consistent theory of localization in nonlinear random media
NASA Astrophysics Data System (ADS)
Cherroret, Nicolas
2017-01-01
The self-consistent theory of localization is generalized to account for a weak quadratic nonlinear potential in the wave equation. For spreading wave packets, the theory predicts the destruction of Anderson localization by the nonlinearity and its replacement by algebraic subdiffusion, while classical diffusion remains unaffected. In 3D, this leads to the emergence of a subdiffusion-diffusion transition in place of the Anderson transition. The accuracy and the limitations of the theory are discussed.
Markov Analysis of Sleep Dynamics
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.
2009-05-01
A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
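Estimating such a transition matrix from a hypnogram reduces to counting state-to-state transitions and row-normalising. The short symbolic sequence below is illustrative, not patient data:

```python
import numpy as np

# Illustrative sleep-stage alphabet and a short symbolic sequence
# standing in for a hypnogram.
states = ["wake", "light", "deep", "rem"]
index = {s: i for i, s in enumerate(states)}
seq = ["wake", "light", "light", "deep", "light", "rem", "wake", "light"]

# Count observed state-to-state transitions...
counts = np.zeros((len(states), len(states)))
for a, b in zip(seq, seq[1:]):
    counts[index[a], index[b]] += 1

# ...and row-normalise to get the transition probability matrix.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts),
              where=row_sums > 0)
```

Each row of P is the conditional distribution of the next stage given the current one; state durations under such a chain follow the (modified) exponential forms discussed in the abstract.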
Localization of a two-component Bose-Einstein condensate in a one-dimensional random potential
NASA Astrophysics Data System (ADS)
Xi, Kui-Tian; Li, Jinbin; Shi, Da-Ning
2015-02-01
We consider a weakly interacting two-component Bose-Einstein condensate (BEC) in a one-dimensional random speckle potential. The problem is studied with solutions of Gross-Pitaevskii (GP) equations by means of numerical method in Crank-Nicolson scheme. Properties of various cases owing to the competition of disorder and repulsive interactions of a cigar-shaped two-component BEC are discussed in detail. It is shown that in the central region, phase separation of a two-component BEC is not only affected by the intra- and inter-component interactions, but also influenced by the strength of the random speckle potential. Due to the strong disorder of the potential, the criterion of phase separation which is independent of the trap strength in an ordered potential, such as a harmonic potential, is no longer available. The influence of different random numbers generated by distinct processes on localization of BEC in the random potential is also investigated, as well as the configurations of the density profiles in the tail regions.
On a Result for Finite Markov Chains
ERIC Educational Resources Information Center
Kulathinal, Sangita; Ghosh, Lagnojita
2006-01-01
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
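One such property: in a chain with M states, if state j is accessible from state i at all, then it is accessible in at most M-1 steps, so M-1 rounds of boolean matrix products decide accessibility. A sketch on a hypothetical 4-state chain:

```python
import numpy as np

# Hypothetical 4-state chain: 0 -> {0,1}, 1 -> 2, 2 -> 3, 3 absorbing.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
M = P.shape[0]
A = (P > 0).astype(int)            # one-step accessibility

# reach[i, j] == 1 iff j is accessible from i in at most M-1 steps
# (including 0 steps, so every state reaches itself).
reach = np.eye(M, dtype=int)
step = np.eye(M, dtype=int)
for _ in range(M - 1):
    step = (step @ A > 0).astype(int)
    reach = reach | step
```

Here state 0 reaches every state, while the absorbing state 3 reaches only itself.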
Faggiani, Rémi; Baron, Alexandre; Zang, Xiaorun; Lalouat, Loïc; Schulz, Sebastian A.; O’Regan, Bryan; Vynck, Kevin; Cluzel, Benoît; de Fornel, Frédérique; Krauss, Thomas F.; Lalanne, Philippe
2016-01-01
Light localization due to random imperfections in periodic media is paramount in photonics research. The group index is known to be a key parameter for localization near photonic band edges, since small group velocities reinforce light interaction with imperfections. Here, we show that the size of the smallest localized mode that is formed at the band edge of a one-dimensional periodic medium is driven instead by the effective photon mass, i.e. the flatness of the dispersion curve. Our theoretical prediction is supported by numerical simulations, which reveal that photonic-crystal waveguides can exhibit surprisingly small localized modes, much smaller than those observed in Bragg stacks thanks to their larger effective photon mass. This possibility is demonstrated experimentally with a photonic-crystal waveguide fabricated without any intentional disorder, for which near-field measurements allow us to distinctly observe a wavelength-scale localized mode despite the smallness (~1/1000 of a wavelength) of the fabrication imperfections. PMID:27246902
Soufi, M; Asl, A Kamali; Geramifar, P
2015-06-15
Purpose: The objective of this study was to find the best seed localization parameters for the random walk algorithm applied to lung tumor delineation in Positron Emission Tomography (PET) images. Methods: PET images suffer from statistical noise, and therefore tumor delineation in these images is a challenging task. The random walk algorithm, a graph-based image segmentation technique, is reliably robust to image noise. Its fast computation and fast editing characteristics also make it powerful for clinical purposes. We implemented the random walk algorithm in MATLAB. Validation and verification of the algorithm were done with the 4D-NCAT phantom with spherical lung lesions of diameters from 20 to 90 mm (in increments of 10 mm) and tumor-to-background ratios of 4:1 and 8:1. STIR (Software for Tomographic Image Reconstruction) was applied to reconstruct the phantom PET images with voxel sizes of 2×2×2 and 4×4×4 mm³. For seed localization, we selected pixels with different maximum Standardized Uptake Value (SUVmax) percentages: at least (70%, 80%, 90% and 100%) SUVmax for foreground seeds and up to (20% to 55%, in 5% increments) SUVmax for background seeds. To investigate the algorithm's performance on clinical data, 19 patients with lung tumors were also studied. The contours produced by the algorithm were compared with manual contouring by a nuclear medicine expert as ground truth. Results: Phantom and clinical lesion segmentation showed that the best segmentation results were obtained by selecting pixels with at least 70% SUVmax as foreground seeds and pixels up to 30% SUVmax as background seeds, respectively. A mean Dice Similarity Coefficient of 94% ± 5% (83% ± 6%) and a mean Hausdorff Distance of 1 (2) pixels were obtained for the phantom (clinical) study. Conclusion: The accurate results of the random walk algorithm in PET image segmentation assure its application for radiation treatment planning and
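The seed-selection rule the study identifies as best (foreground seeds at ≥ 70% of SUVmax, background seeds at ≤ 30%) is easy to express as array thresholding. The synthetic Gaussian "uptake" map below stands in for a PET slice; its grid size, peak value, and width are our own illustrative choices:

```python
import numpy as np

# Synthetic Gaussian "uptake" map standing in for a PET slice.
yy, xx = np.mgrid[:32, :32]
suv = 8.0 * np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 20.0)

suv_max = suv.max()
fg_seeds = suv >= 0.70 * suv_max    # foreground seeds: >= 70% SUVmax
bg_seeds = suv <= 0.30 * suv_max    # background seeds: <= 30% SUVmax
unlabeled = ~(fg_seeds | bg_seeds)  # left for the random walker to label
```

The random walker then assigns each unlabeled voxel to whichever seed class a random walk started there reaches first, in probability.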
NASA Technical Reports Server (NTRS)
Grosse, Ralf
1990-01-01
Propagation of sound through the turbulent atmosphere is a statistical problem. The randomness of the refractive index field causes sound pressure fluctuations. Although no general theory to predict sound pressure statistics from given refractive index statistics exists, there are several approximate solutions to the problem. The most common approximation is the parabolic equation method. Results obtained by this method are restricted to small refractive index fluctuations and to small wave lengths. While the first condition is generally met in the atmosphere, it is desirable to overcome the second. A generalization of the parabolic equation method with respect to the small wave length restriction is presented.
Artan, Yusuf; Haider, Masoom A; Langer, Deanna L; van der Kwast, Theodorus H; Evans, Andrew J; Yang, Yongyi; Wernick, Miles N; Trachtenberg, John; Yetik, Imam Samil
2010-09-01
Prostate cancer is a leading cause of cancer death among men in the United States. Fortunately, the survival rate for patients diagnosed early is relatively high. Therefore, in vivo imaging plays an important role in the detection and treatment of the disease. Accurate prostate cancer localization with noninvasive imaging can be used to guide biopsy, radiotherapy, and surgery, as well as to monitor disease progression. Magnetic resonance imaging (MRI) performed with an endorectal coil provides higher prostate cancer localization accuracy than transrectal ultrasound (TRUS). In general, however, a single type of MRI is not sufficient for reliable tumor localization. As an alternative, multispectral MRI, i.e., the use of multiple MRI-derived datasets, has emerged as a promising noninvasive imaging technique for the localization of prostate cancer; however, almost all studies rely on human readers. There is significant inter- and intraobserver variability among human readers, and it is substantially difficult for humans to analyze the large datasets of multispectral MRI. To solve these problems, this study presents an automated localization method using cost-sensitive support vector machines (SVMs) and shows that this method yields higher localization accuracy than the classical SVM. Additionally, we develop a new segmentation method by combining conditional random fields (CRFs) with a cost-sensitive framework and show that our method further improves on the cost-sensitive SVM results by incorporating spatial information. We test the SVM, the cost-sensitive SVM, and the proposed cost-sensitive CRF on multispectral MRI datasets acquired from 21 biopsy-confirmed cancer patients. Our results show that multispectral MRI helps to increase the accuracy of prostate cancer localization compared to single MR images, and that advanced methods such as the cost-sensitive SVM and the proposed cost-sensitive CRF can boost performance significantly compared to the SVM.
Semi-Markov Models for Degradation-Based Reliability
2010-01-01
standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter...We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and...provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros
Sunspots and ENSO relationship using Markov method
NASA Astrophysics Data System (ADS)
Hassan, Danish; Iqbal, Asif; Ahmad Hassan, Syed; Abbas, Shaheen; Ansari, Muhammad Rashid Kamal
2016-01-01
Various techniques have been used to establish the existence of significant relations between the number of Sunspots and different terrestrial climate parameters such as rainfall, temperature, dewdrops, aerosol and ENSO. Improved understanding and modelling of Sunspot variations can reveal information about the related variables. This study uses a Markov chain method to find the relations between monthly Sunspot and ENSO data for two epochs (1996-2009 and 1950-2014). The corresponding transition matrices of both data sets appear similar, as evaluated by the high values of the 2-dimensional correlation found between the transition matrices of ENSO and Sunspots. The associated transition diagrams show that each state communicates with the others. The presence of stronger self-communication (between the same states) confirms periodic behaviour among the states. Moreover, the closeness found in the expected number of visits from one state to the other shows the existence of a possible relation between Sunspot and ENSO data, and perfect validation of the dependency and stationarity tests endorses the applicability of Markov chain analyses to Sunspot and ENSO data. This shows that a significant relation between Sunspot and ENSO data exists. This study can be useful for exploring the influence of ENSO-related local climatic variability.
Geoacoustic Inversion and Source Localization in a Randomly Fluctuating Shallow Water Environment
2010-06-01
with a standard deviation of 570 m. 2.2 SW06 experiment data analysis: Sei whale localization Comparatively little is known about sei whale ...large number of sei whale calls were unexpectedly collected during the SW06 experiment, which introduced the first evidence of sei whales in this...shallow water region. Using the normal mode approach developed in this project, we are able to track the remote locations of these whales up to tens of
Local and cluster critical dynamics of the 3d random-site Ising model
NASA Astrophysics Data System (ADS)
Ivaneyko, D.; Ilnytskyi, J.; Berche, B.; Holovatch, Yu.
2006-10-01
We present the results of Monte Carlo simulations for the critical dynamics of the three-dimensional site-diluted quenched Ising model. Three different dynamics are considered; these correspond to the local-update Metropolis scheme as well as to the Swendsen-Wang and Wolff cluster algorithms. Lattice sizes of L=10-96 are analysed by a finite-size-scaling technique. The site dilution concentration p=0.85 was chosen to minimize correction-to-scaling effects. We calculate numerical values of the dynamical critical exponents for the integrated and exponential autocorrelation times for energy and magnetization. As expected, the cluster algorithms are characterized by lower values of the dynamical critical exponent than the local one; in the diluted case, too, critical slowing down is more pronounced for the Metropolis algorithm. However, the striking feature of our estimates is that they suggest that dilution leads to a decrease of the dynamical critical exponent for the cluster algorithms. This phenomenon is quite opposite to the local dynamics, where dilution enhances critical slowing down.
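A minimal sketch of the local (Metropolis) dynamics for a site-diluted Ising model, using the study's dilution p = 0.85 on a small 3D lattice of our own choosing (cluster updates are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)

L, p, beta = 8, 0.85, 0.6           # lattice size, occupancy, inverse T
occ = rng.random((L, L, L)) < p     # quenched site dilution
spins = np.where(occ, rng.choice([-1, 1], size=(L, L, L)), 0)

def local_field(s, x, y, z):
    # Sum over the six nearest neighbours with periodic boundaries.
    return sum(s[(x + dx) % L, (y + dy) % L, (z + dz) % L]
               for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                  (0, -1, 0), (0, 0, 1), (0, 0, -1)])

def metropolis_sweep(s):
    # One sweep = L^3 single-spin Metropolis update attempts.
    for _ in range(L ** 3):
        x, y, z = rng.integers(0, L, size=3)
        if not occ[x, y, z]:
            continue                # vacancies carry no spin
        dE = 2 * s[x, y, z] * local_field(s, x, y, z)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[x, y, z] *= -1
    return s

spins = metropolis_sweep(spins)
```

Autocorrelation times are then measured from the time series of energy and magnetization over many such sweeps; the cluster algorithms replace the single-spin flip with collective flips of Fortuin-Kasteleyn clusters.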
NASA Astrophysics Data System (ADS)
Morizet, N.; Godin, N.; Tang, J.; Maillet, E.; Fregonese, M.; Normand, B.
2016-03-01
This paper proposes a novel approach to classify acoustic emission (AE) signals deriving from corrosion experiments, even when they are embedded in a noisy environment. To validate this new methodology, synthetic data are first used in an in-depth analysis comparing Random Forests (RF) to the k-Nearest Neighbor (k-NN) algorithm. Moreover, a new evaluation tool called the alter-class matrix (ACM) is introduced to simulate different degrees of uncertainty on labeled data for supervised classification. Tests on real cases involving noise and crevice corrosion are then conducted by preprocessing the waveforms, including wavelet denoising, and extracting a rich set of features as input to the RF algorithm. To this end, a software tool called RF-CAM has been developed. Results show that this approach is very efficient on ground-truth data and is also very promising on real data, especially in terms of reliability, performance and speed, which are key criteria for the chemical industry.
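The alter-class-matrix idea, injecting a controlled fraction of label noise before supervised training, can be sketched as follows. The function name and class counts are ours, and in the paper's setting the corrupted labels would then feed a Random Forest classifier:

```python
import numpy as np

rng = np.random.default_rng(42)

def alter_labels(y, flip_fraction, n_classes, rng):
    """Corrupt a chosen fraction of labels, ACM-style: each selected
    label is replaced by a different class drawn at random."""
    y = y.copy()
    n_flip = int(flip_fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    # Adding an offset in {1, ..., n_classes - 1} modulo n_classes
    # guarantees the new label differs from the old one.
    y[idx] = (y[idx] + rng.integers(1, n_classes, size=n_flip)) % n_classes
    return y

y = rng.integers(0, 3, size=1000)       # three illustrative AE classes
y_noisy = alter_labels(y, 0.2, 3, rng)  # 20% label uncertainty
```

Training on `y_noisy` at increasing flip fractions and scoring against the clean labels probes how gracefully a classifier degrades under mislabelled data.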
Chao, Ming; Wu, Hao; Jin, Kai; Li, Bin; Wu, Jianjun; Zhang, Guangqiang; Yang, Gong; Hu, Xun
2016-01-01
Study design: Previous work suggested that neutralizing intratumoral lactic acidosis combined with glucose deprivation may deliver an effective approach to control tumors. We conducted a pilot clinical investigation, comprising a nonrandomized study (57 patients with large HCC) and a randomized controlled study (20 patients with large HCC). Methods: The patients were treated with transarterial chemoembolization (TACE) with or without local bicarbonate infusion into the tumor. Results: In the nonrandomized controlled study, the geometric mean of viable tumor residues (VTR) in TACE with bicarbonate was 6.4-fold lower than that in TACE without bicarbonate (7.1% [95% CI: 4.6%–10.9%] vs 45.6% [28.9%–72.0%]; p<0.0001). This difference was recapitulated by a subsequent randomized controlled study. TACE combined with bicarbonate yielded a 100% objective response rate (ORR), whereas the ORR with TACE alone was 44.4% (nonrandomized) and 63.6% (randomized). The survival data suggested that bicarbonate may bring a survival benefit. Conclusion: Bicarbonate markedly enhances the anticancer activity of TACE. Clinical trial registration: ChiCTR-IOR-14005319. DOI: http://dx.doi.org/10.7554/eLife.15691.001 PMID:27481188
Lumbroso-Le Rouic, L; Aerts, I; Hajage, D; Lévy-Gabriel, C; Savignoni, A; Algret, N; Cassoux, N; Bertozzi, A-I; Esteve, M; Doz, F; Desjardins, L
2016-01-01
Purpose Intraocular retinoblastoma treatments often combine chemotherapy and focal treatments. A first prospective protocol of conservative treatments in our institution showed the efficacy of two courses of chemoreduction with etoposide and carboplatin, followed by chemothermotherapy using carboplatin as a single agent with diode laser. In order to decrease the possible long-term toxicity of chemotherapy due to etoposide, a randomized neoadjuvant phase II protocol was conducted comparing vincristine–carboplatin with etoposide–carboplatin. Patients and methods The study was proposed when initial tumor characteristics did not allow front-line local treatments. Patients included in this phase II noncomparative randomized study of neoadjuvant chemotherapy received vincristine–carboplatin (new arm) vs etoposide–carboplatin (our reference arm). They were subsequently treated with local treatments and chemothermotherapy. The primary end point was that the need for secondary enucleation or external beam radiotherapy (EBRT) should not exceed 40% at 2 years. Results A total of 65 eyes in 55 children were included in the study (May 2004 to August 2009). Of these, 32 eyes (27 children) were treated in the etoposide–carboplatin arm and 33 eyes (28 children) in the vincristine–carboplatin arm. At 2 years after treatment, 23/33 (69.7%) eyes were treated and salvaged without EBRT or enucleation in the vincristine–carboplatin arm and 26/32 (81.2%) in the etoposide–carboplatin arm. Conclusion Even though both treatment arms could be considered sufficiently active according to the study decision rules, neoadjuvant chemotherapy with two cycles of vincristine–carboplatin followed by chemothermotherapy appears to offer less optimal local control than the etoposide–carboplatin combination. PMID:26427984
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
El-Gammal, Mona Y; Salem, Ahmed S; Anees, Mohamed M; Tawfik, Mohamed A
2016-04-01
Immediate loading of dental implants in situations where low bone density exist, such as the posterior maxillary region, became possible recently after the introduction of biomimetic agents. This 1-year preliminary clinical trial was carried out to clinically and radiographically evaluate immediate-loaded 1-piece implants with local application of melatonin in the osteotomy site as a biomimetic material. 14 patients with missing maxillary premolars were randomized to receive 14 implants of 1-piece type that were subjected to immediate loading after 2 weeks of initial placement. Group I included 7 implants with acid-etched surface while group II included 7 implants with acid-etched surface combined with local application of melatonin gel at the osteotomy site. Patients were recalled for follow up at 1, 3, 6, and 12 months after loading. All implants were considered successful after 12 months of follow-up. Significant difference (P < 0.05) was found between both groups at 1 month of implant loading when considering the implant stability. At 1 and 3 months there were significant differences in the marginal bone level between the 2 groups. These results suggest that the local application of melatonin at the osteotomy site is associated with good stability and minimal bone resorption. However, more studies for longer follow-up periods are required to confirm the effect of melatonin hormone on osseointegration of dental implants.
Hadianfard, Mohammadjavad; Ashraf, Alireza; Fakheri, Maryamsadat; Nasiri, Aref
2014-06-01
There is no consensus on the management of De Quervain's tenosynovitis, but local corticosteroid injection is considered the mainstay of treatment. However, some patients are reluctant to take steroid injections. This study was performed to compare the efficacy of acupuncture versus corticosteroid injection for the treatment of this disease. Thirty patients were consecutively treated in two groups. The acupuncture group received five acupuncture sessions of 30 minutes' duration on the classic points LI-5, LU-7, and LU-9 and on ahshi points. The injection group received one methylprednisolone acetate injection in the first dorsal compartment of the wrist. The degree of disability and pain was evaluated by using the Quick Disabilities of the Arm, Shoulder, and Hand (Q-DASH) scale and the Visual Analogue Scale (VAS) at baseline and at 2 weeks and 6 weeks after the start of treatment. The baseline means of the Q-DASH and the VAS scores were 62.8 and 6.9, respectively. At the last follow-up, the mean Q-DASH scores were 9.8 versus 6.2 in the acupuncture and injection groups, respectively, and the mean VAS scores were 2 versus 1.2. We demonstrated short-term improvement of pain and function in both groups. Although the success rate was somewhat higher with corticosteroid injection, acupuncture can be considered as an alternative option for treatment of De Quervain's tenosynovitis.
Beckendorf, Veronique; Guerif, Stephane; Le Prise, Elisabeth; Cosset, Jean-Marc; Bougnoux, Agnes; Chauvet, Bruno; Salem, Naji; Chapet, Olivier; Bourdain, Sylvain; Bachaud, Jean-Marc; Maingon, Philippe; Hannoun-Levi, Jean-Michel; Malissard, Luc; Simon, Jean-Marc; Pommier, Pascal; Hay, Men; Dubray, Bernard; Lagrange, Jean-Leon; Luporsi, Elisabeth; Bey, Pierre
2011-07-15
Purpose: To perform a randomized trial comparing 70 and 80 Gy radiotherapy for prostate cancer. Patients and Methods: A total of 306 patients with localized prostate cancer were randomized. No androgen deprivation was allowed. The primary endpoint was biochemical relapse according to the modified 1997-American Society for Therapeutic Radiology and Oncology and Phoenix definitions. Toxicity was graded using the Radiation Therapy Oncology Group 1991 criteria and the Late Effects on Normal Tissues-Subjective, Objective, Management, Analytic (LENT-SOMA) scales. The patients' quality of life was scored using the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire 30-item cancer-specific and 25-item prostate-specific modules. Results: The median follow-up was 61 months. According to the 1997-American Society for Therapeutic Radiology and Oncology definition, the 5-year biochemical relapse rate was 39% and 28% in the 70- and 80-Gy arms, respectively (p = .036). Using the Phoenix definition, the 5-year biochemical relapse rate was 32% and 23.5%, respectively (p = .09). The subgroup analysis showed a better biochemical outcome for the higher dose group with an initial prostate-specific antigen level >15 ng/mL. At the last follow-up date, 26 patients had died, 10 of their disease and none of toxicity, with no differences between the two arms. According to the Radiation Therapy Oncology Group scale, the Grade 2 or greater rectal toxicity rate was 14% and 19.5% for the 70- and 80-Gy arms (p = .22), respectively. The Grade 2 or greater urinary toxicity was 10% at 70 Gy and 17.5% at 80 Gy (p = .046). Similar results were observed using the LENT-SOMA scale. Bladder toxicity was more frequent at 80 Gy than at 70 Gy (p = .039). The quality-of-life questionnaire results before and 5 years after treatment were available for 103 patients with no differences found between the 70- and 80-Gy arms. Conclusion: High-dose radiotherapy provided a
Complex networks: when random walk dynamics equals synchronization
NASA Astrophysics Data System (ADS)
Kriener, Birgit; Anand, Lishma; Timme, Marc
2012-09-01
Synchrony prevalently emerges from the interactions of coupled dynamical units. For simple systems such as networks of phase oscillators, the asymptotic synchronization process is assumed to be equivalent to a Markov process that models standard diffusion or random walks on the same network topology. In this paper, we analytically derive the conditions for such equivalence for networks of pulse-coupled oscillators, which serve as models for neurons and pacemaker cells interacting by exchanging electric pulses or fireflies interacting via light flashes. We find that the pulse synchronization process is less simple, but there are classes of network topologies, for example, that ensure equivalence. In particular, the local dynamical operators are required to be doubly stochastic. These results provide a natural link between stochastic processes and deterministic synchronization on networks. Tools for analyzing diffusion (or, more generally, Markov processes) may now be transferred to pin down features of synchronization in networks of pulse-coupled units such as neural circuits.
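The doubly stochastic condition highlighted in this abstract is easy to illustrate numerically. The sketch below (an illustrative construction, not taken from the paper) builds a symmetric random walk on a 5-node ring, whose transition matrix has rows and columns both summing to one, and shows that diffusion from a localized start relaxes toward the uniform distribution:

```python
import numpy as np

# Illustrative check of the doubly stochastic condition: a symmetric random
# walk on a ring of 5 nodes has a transition matrix whose rows AND columns
# sum to 1, so the uniform distribution is invariant and diffusion from a
# localized start relaxes toward it.
n = 5
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = 0.5
    P[i, (i + 1) % n] = 0.5

assert np.allclose(P.sum(axis=0), 1.0) and np.allclose(P.sum(axis=1), 1.0)

mu = np.zeros(n)
mu[0] = 1.0                      # start with all probability on one node
for _ in range(2000):
    mu = mu @ P                  # one step of the diffusion / random walk
```

After many steps `mu` is numerically indistinguishable from the uniform vector, which is exactly the invariance that a doubly stochastic local operator guarantees.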
The generalization ability of online SVM classification based on Markov sampling.
Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang
2015-03-01
In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of its learning ability on benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of the training set grows.
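A rough sketch of the idea, with the caveat that the acceptance rule and the Pegasos-style update below are illustrative assumptions rather than the authors' exact algorithm: training indices are drawn by a Markov chain over the dataset that prefers label changes, and a linear SVM is updated online on that stream.

```python
import numpy as np

rng = np.random.default_rng(42)

def markov_sample(y, T, q=0.3):
    """Draw T training indices via a Markov chain over the dataset:
    propose a uniform candidate, accept it outright if its label differs
    from the current sample's, otherwise accept with probability q.
    (An illustrative acceptance rule, not the paper's exact scheme.)"""
    n = len(y)
    idx = [int(rng.integers(n))]
    while len(idx) < T:
        c = int(rng.integers(n))
        if y[c] != y[idx[-1]] or rng.random() < q:
            idx.append(c)
    return idx

# Toy linearly separable data with labels in {-1, +1}
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Online linear SVM (Pegasos-style subgradient steps) on the Markov stream
w, lam = np.zeros(2), 0.01
for t, i in enumerate(markov_sample(y, 500), start=1):
    eta = 1.0 / (lam * t)
    active = y[i] * X[i].dot(w) < 1.0      # is the hinge loss active?
    w *= (1.0 - eta * lam)                 # regularization shrinkage
    if active:
        w += eta * y[i] * X[i]             # subgradient step on the hinge
acc = float(np.mean(np.sign(X.dot(w)) == y))
```

On this well-separated toy set the classifier reaches near-perfect training accuracy; the point of the sketch is only the structure (a Markov chain over sample indices feeding an online learner), not any performance claim from the paper.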
Beevi, K Sabeena; Nair, Madhu S; Bindu, G R
2016-08-01
An accurate count of mitotic nuclei is a crucial parameter in breast cancer grading and prognosis. This can be achieved by improving mitosis detection accuracy through careful design of segmentation and classification techniques. In this paper, segmentation of nuclei from breast histopathology images is carried out by a Localized Active Contour Model (LACM) utilizing bio-inspired optimization techniques in the detection stage, in order to handle the diffused intensities present along object boundaries. Further, the application of Random Kitchen Sinks (RKS), an optimal machine learning algorithm capable of classifying strongly non-linear data, shows improved classification performance. The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for the MITOS-ATYPIA CONTEST 2014. The proposed framework achieved 95% recall, 98% precision and a 96% F-score.
Monthus, Cécile; Garel, Thomas
2007-08-01
Disordered systems present multifractal properties at criticality. In particular, as discovered by Ludwig [A.W.W. Ludwig, Nucl. Phys. B 330, 639 (1990)] in the case of a diluted two-dimensional Potts model, the moments ρ^q(r) of the local order parameter ρ(r) scale with a set of nontrivial exponents x(q) ≠ q x(1). We reexamine these ideas to incorporate more recent findings: (i) whenever a multifractal measure w(r), normalized over space as Σ_r w(r) = 1, occurs in a random system, it is crucial to distinguish between the typical values and the disorder-averaged values of the generalized moments Y(q) = Σ_r w^q(r), since they may scale with different generalized dimensions, and (ii), as discovered by Wiseman and Domany [S. Wiseman and E. Domany, Phys. Rev. E 52, 3469 (1995)], the presence of an infinite correlation length induces a lack of self-averaging at critical points for thermodynamic observables, in particular for the order parameter. After this general discussion, valid for any random critical point, we apply these ideas to random polymer models that can be studied numerically for large sizes with good statistics over the samples. We study the two-dimensional wetting, or Poland-Scheraga DNA, model with loop exponents c = 1.5 (marginal disorder) and c = 1.75 (relevant disorder). Finally, we argue that the presence of finite Griffiths-ordered clusters at criticality determines the asymptotic value x(q → ∞) = d and the minimal value α_min = D(q → ∞) = d − x(1) of the typical multifractal spectrum f(α).
Khosrawi, Saeid; Emadi, Masoud; Mahmoodian, Amir Ebrahim
2016-01-01
Background: The study aimed to compare the effectiveness of two commonly used conservative treatments, splinting and local steroid injection, in improving clinical and nerve conduction findings in patients with severe carpal tunnel syndrome (CTS). Materials and Methods: In this randomized controlled clinical trial, patients with severe CTS were selected and randomized into two intervention groups. Group A was prescribed a full-time neutral wrist splint, and group B was injected with 40 mg Depo-Medrol and prescribed the full-time neutral wrist splint, both for 12 weeks. Clinical and nerve conduction findings of the patients were evaluated at baseline and at 4 and 12 weeks after intervention. Results: Twenty-two and 21 patients were allocated to groups A and B, respectively. Mean clinical symptom and functional status scores, nerve conduction variables, and patient satisfaction scores did not differ significantly between the groups at baseline or at 4 and 12 weeks after intervention. Within groups, there was significant improvement in patient satisfaction and in clinical and nerve conduction items between baseline and 4 weeks after intervention and between baseline and 12 weeks after intervention (P < 0.01). The difference was significant for the functional status score between 4 and 12 weeks after intervention in group B (P = 0.02). Conclusion: Considering some findings regarding the superior effect of splinting plus local steroid injection on the functional status scale and median nerve distal motor latency, it seems that combination therapy could be more effective over the long term, especially for functional improvement in CTS. PMID:26962518
Canyilmaz, Emine; Canyilmaz, Fatih; Aynaci, Ozlem; Colak, Fatma; Serdar, Lasif; Uslu, Gonca Hanedan; Aynaci, Osman; Yoney, Adnan
2015-07-01
Purpose: The purpose of this study was to conduct a randomized trial of radiation therapy for plantar fasciitis and to compare radiation therapy with local steroid injections. Methods and Materials: Between March 2013 and April 2014, 128 patients with plantar fasciitis were randomized to receive radiation therapy (total dose of 6.0 Gy applied in 6 fractions of 1.0 Gy three times a week) or local corticosteroid injections (a 1-ml injection of 40 mg methylprednisolone and 0.5 ml of 1% lidocaine) under the guidance of palpation. The results were measured using a visual analog scale, a modified von Pannewitz scale, and a 5-level function score. The fundamental phase of the study was 3 months, with a follow-up period of up to 6 months. Results: The median follow-up period for all patients was 12.5 months (range, 6.5-18.6 months). For the radiation therapy patients, the median follow-up period was 13 months (range, 6.5-18.5 months), whereas in the palpation-guided (PG) steroid injection arm, it was 12.1 months (range, 6.5-18.6 months). After 3 months, results in the radiation therapy arm were significantly superior to those in the PG steroid injection arm (visual analog scale, P<.001; modified von Pannewitz scale, P<.001; 5-level function score, P<.001). Requirements for a second treatment did not significantly differ between the 2 groups, but the time interval for the second treatment was significantly shorter in the PG steroid injection group (P=.045). Conclusion: This study confirms the superior analgesic effect of radiation therapy compared with PG steroid injection on plantar fasciitis for at least 6 months after treatment.
Constructing 1/ω^α noise from reversible Markov chains
NASA Astrophysics Data System (ADS)
Erland, Sveinung; Greenwood, Priscilla E.
2007-09-01
This paper gives sufficient conditions for the output of 1/ω^α noise from reversible Markov chains on finite state spaces. We construct several examples exhibiting this behavior in a specified range of frequencies. We apply simple representations of the covariance function and the spectral density in terms of the eigendecomposition of the probability transition matrix. The results extend to hidden Markov chains. We generalize the results of C. W. J. Granger [J. Econometrics 14, 227 (1980)] for aggregations of AR(1) processes. Given the eigenvalue function, there is a variety of ways to assign values to the states such that the 1/ω^α condition is satisfied. We show that a random walk on a certain state space is complementary to the point process model of 1/ω noise of B. Kaulakys and T. Meskauskas [Phys. Rev. E 58, 7013 (1998)]. Passing to a continuous state space, we construct 1/ω^α noise which also has long memory.
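The covariance and spectral-density representations mentioned in this abstract can be written down directly for a small reversible chain. The sketch below uses an illustrative two-state chain and observable (not one of the paper's constructions) and computes the autocovariance by matrix powers rather than by the paper's eigendecomposition route; for a two-state chain the autocovariance decays geometrically at the rate of the second eigenvalue of P.

```python
import numpy as np

# Illustrative reversible two-state chain: detailed balance pi_i P_ij = pi_j P_ji
# holds, and the second eigenvalue of P is 0.7.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2.0 / 3.0, 1.0 / 3.0])   # stationary distribution: pi @ P == pi
f = np.array([0.0, 1.0])                # observable on the states
mu = pi @ f

def autocov(t):
    # stationary autocovariance C(t) = (pi * f) . (P^t f) - mu^2
    Pt = np.linalg.matrix_power(P, t)
    return (pi * f) @ (Pt @ f) - mu**2

def spectral_density(omega, T=200):
    # truncated cosine transform of the autocovariance
    return autocov(0) + 2.0 * sum(autocov(t) * np.cos(omega * t)
                                  for t in range(1, T))
```

For this chain C(t) = C(0) * 0.7^t, so the spectral density is a single Lorentzian-like bump; the paper's 1/ω^α construction superposes many such relaxation rates, in the spirit of Granger's AR(1) aggregation.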
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.
Stochastic algorithms for Markov models estimation with intermittent missing data.
Deltour, I; Richardson, S; Le Hesran, J Y
1999-06-01
Multistate Markov models are frequently used to characterize disease processes, but their estimation from longitudinal data is often hampered by complex patterns of incompleteness. Two algorithms for estimating Markov chain models in the case of intermittent missing data in longitudinal studies, a stochastic EM algorithm and the Gibbs sampler, are described. The first can be viewed as a random perturbation of the EM algorithm and is appropriate when the M step is straightforward but the E step is computationally burdensome. It leads to a good approximation of the maximum likelihood estimates. The Gibbs sampler is used for a full Bayesian inference. The performances of the two algorithms are illustrated on two simulated data sets. A motivating example concerned with the modelling of the evolution of parasitemia by Plasmodium falciparum (malaria) in a cohort of 105 young children in Cameroon is described and briefly analyzed.
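A toy reconstruction of the stochastic EM idea described in this abstract, for a 2-state chain with intermittently missing observations: the S-step randomly imputes each missing state from its conditional given the neighbouring states under the current parameter estimate, and the M-step is the closed-form transition-count MLE. All modelling choices below (chain size, masking rate, single-sweep imputation) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# True 2-state transition matrix; simulate a chain and hide ~30% of the
# interior observations to mimic intermittent missingness.
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])
n = 2000
x = [0]
for _ in range(n - 1):
    x.append(int(rng.random() < P_true[x[-1], 1]))
obs = [x[0]] + [None if rng.random() < 0.3 else v for v in x[1:-1]] + [x[-1]]

def s_step(seq, P):
    # Randomly impute missing states: P(z_t = k | z_{t-1}, z_{t+1})
    # is proportional to P[z_{t-1}, k] * P[k, z_{t+1}].
    z = list(seq)
    for t in range(len(z)):
        if z[t] is None:
            left = P[z[t - 1], :]                 # z[t-1] already filled
            if t + 1 < len(z) and z[t + 1] is not None:
                right = np.array([P[0, z[t + 1]], P[1, z[t + 1]]])
            else:
                right = np.ones(2)                # no observed right neighbour
            w = left * right
            z[t] = int(rng.random() < w[1] / w.sum())
    return z

def m_step(z):
    # Closed-form MLE: normalized transition counts of the completed chain.
    C = np.zeros((2, 2))
    for a, b in zip(z[:-1], z[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

P_hat = np.full((2, 2), 0.5)                      # uninformative start
for _ in range(20):
    P_hat = m_step(s_step(obs, P_hat))
```

The random imputation is exactly the "random perturbation of the EM algorithm" flavour: instead of an expectation over missing states, one plausible completion is drawn per iteration, and the estimates fluctuate around a good approximation of the maximum likelihood solution.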
Markov and semi-Markov processes as a failure rate
NASA Astrophysics Data System (ADS)
Grabski, Franciszek
2016-06-01
In this paper the reliability function is defined by a stochastic failure rate process with non-negative and right-continuous trajectories. Equations for the conditional reliability functions of an object, under the assumption that the failure rate is a semi-Markov process with an at most countable state space, are derived. A corresponding theorem is presented. The linear systems of equations for the appropriate Laplace transforms make it possible to find the reliability functions for the alternating, Poisson, and Furry-Yule failure rate processes.
A Hidden Markov Approach to Modeling Interevent Earthquake Times
NASA Astrophysics Data System (ADS)
Chambers, D.; Ebel, J. E.; Kafka, A. L.; Baglivo, J.
2003-12-01
A hidden Markov process, in which the interevent time distribution is a mixture of exponential distributions with different rates, is explored as a model for seismicity that does not follow a Poisson process. In a general hidden Markov model, one assumes that a system can be in any of a finite number k of states and there is a random variable of interest whose distribution depends on the state in which the system resides. The system moves probabilistically among the states according to a Markov chain; that is, given the history of visited states up to the present, the conditional probability that the next state is a specified one depends only on the present state. Thus the transition probabilities are specified by a k by k stochastic matrix. Furthermore, it is assumed that the actual states are unobserved (hidden) and that only the values of the random variable are seen. From these values, one wishes to estimate the sequence of states, the transition probability matrix, and any parameters used in the state-specific distributions. The hidden Markov process was applied to a data set of 110 interevent times for earthquakes in New England from 1975 to 2000. Using the Baum-Welch method (Baum et al., Ann. Math. Statist. 41, 164-171), we estimate the transition probabilities, find the most likely sequence of states, and estimate the k means of the exponential distributions. Using k=2 states, we found the data were fit well by a mixture of two exponential distributions, with means of approximately 5 days and 95 days. The steady state model indicates that after approximately one fourth of the earthquakes, the waiting time until the next event had the first exponential distribution and three fourths of the time it had the second. Three and four state models were also fit to the data; the data were inconsistent with a three state model but were well fit by a four state model.
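The fitted two-state model described above is easy to simulate: a hidden Markov chain whose state-specific distributions are exponential with means 5 and 95 days. The transition matrix below is an illustrative choice whose stationary distribution is (1/4, 3/4), matching the reported state occupancies; it is not the matrix estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state HMM for interevent times: exponential emissions with
# means 5 and 95 days; the transition matrix has stationary distribution
# (1/4, 3/4), consistent with the occupancies reported above.
P = np.array([[0.7, 0.3],
              [0.1, 0.9]])
means = np.array([5.0, 95.0])

def simulate(n):
    state, times = 0, []
    for _ in range(n):
        times.append(rng.exponential(means[state]))   # emit an interevent time
        state = int(rng.random() < P[state, 1])       # Markov state transition
    return np.array(times)

t = simulate(20000)
# long-run mean interevent time is 0.25 * 5 + 0.75 * 95 = 72.5 days
```

Fitting such simulated sequences with the Baum-Welch procedure (as the authors do for the New England catalog) recovers the two exponential means and the transition probabilities, which makes this a convenient sanity check for the estimation pipeline.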
Bayat, M.; Garajei, A.; Afshari Pour, E.; Hasheminasab, M.; Ghorbani, Y.; Kalantar Motamedi, M. H.; Bahrami, N.
2017-01-01
Background: Although bone grafts are commonly used in reconstructive surgeries, they are sensitive to local perfusion and are thus prone to severe resorption. Bisphosphonates can inactivate osteoclasts and can be used to control undesirable bone resorption. Objective: To assess the effect of administration of bisphosphonates on bone resorption. Methods: 20 patients with bony defects who were candidates for free autogenous grafts were randomized into “pamidronate” and “control” groups. Bone segments were soaked in either pamidronate solution or normal saline and were inserted into the area of the surgery. Bone densities were measured post-surgery and at 6-month follow-up. Data were obtained via Digora software and analyzed. Results: The mean±SD bone density in the pamidronate group changed from 93.4±14.6 to 93.6±17.5 (p<0.05); in the control group the density decreased from 89.7±13.2 to 78.9±11.4 (p<0.05). The mean difference in bone density showed higher DXA values in the anterior areas of the jaws than in the posterior regions (p=0.002). Conclusion: Locally administered pamidronate reduces bone resorption. PMID:28299027
Naidu, Sinuba; Loughlin, Pat; Coldwell, Susan E.; Noonan, Carolyn J.; Milgrom, Peter
2004-01-01
The aim of this study was to test the hypothesis that dental pain control using infiltration/intrapapillary injection was less effective than inferior alveolar block/long buccal infiltration anesthesia in children. A total of 101 healthy children, aged 5-8 years, who had no contraindication for local anesthetic and who needed a pulpotomy treatment and stainless steel crown placement in a lower primary molar were studied. A 2-group randomized blinded controlled design was employed comparing the 2 local anesthesia techniques using 2% lidocaine, 1:100,000 epinephrine. All children were given 40% nitrous oxide. Children self-reported pain using the Color Analogue Scale. The study was conducted in a private pediatric dental practice in Mount Vernon, Wash. Overall pain levels reported by the children were low, and there were no differences between conditions at any point in the procedure. Pain reports for clamp placement were block/long buccal 2.8 and infiltration/intrapapillary 1.9 (P = .1). Pain reports for drilling were block/long buccal 2.0 and infiltration/intrapapillary 1.8 (P = .7). Nine percent of children required supplementary local anesthetic: 4 of 52 (7.7%) in the block/long buccal group and 5 of 49 (10.2%) in the infiltration/intrapapillary group (P = .07). The hypothesis that block/long buccal would be more effective than infiltration/intrapapillary was not supported. There was no difference in pain control effectiveness between infiltration/intrapapillary injection and inferior alveolar block/long buccal infiltration using 2% lidocaine with 1:100,000 epinephrine when mandibular primary molars received pulpotomy treatment and stainless steel crowns. PMID:15106686
Kuldeep, CM; Singhal, Himanshu; Khare, Ashok Kumar; Mittal, Asit; Gupta, Lalit K; Garg, Anubhav
2011-01-01
Background: Alopecia areata (AA) is a common, non-scarring, patchy loss of hair on the scalp and elsewhere. Its pathogenesis is uncertain; however, autoimmunity has been implicated in various studies. The familial incidence of AA is 10-42%, but reaches 50% in monozygotic twins. Local steroids (topical / intra-lesional) are very effective in the treatment of localized AA. Aim: To compare hair regrowth and side effects of topical betamethasone valerate foam, intralesional triamcinolone acetonide and tacrolimus ointment in the management of localized AA. Materials and Methods: 105 patients with localized AA were initially registered, but 27 dropped out. The remaining 78 patients, allocated at random to groups A (28), B (25) and C (25), were prescribed topical betamethasone valerate foam (0.1%) twice daily, intralesional triamcinolone acetonide (10 mg/ml) every 3 weeks, and tacrolimus ointment (0.1%) twice daily, respectively, for 12 weeks. They were followed for the next 12 weeks. Hair regrowth was graded using the “HRG scale”: scale I (0-25%), scale II (26-50%), scale III (51-75%) and scale IV (75-100%). Results: Hair regrowth started by 3 weeks in group B (scale I: P<0.03), became satisfactory at 6 weeks in groups A and B (scale I: P<0.005, scale IV: P<0.001), good at 9 weeks (scale I: P<0.0005, scale IV: P<0.00015), and better by 12 weeks of treatment (scale I: P<0.000021, scale IV: P<0.000009) in both groups A and B. At the end of the 12-week follow-up, hair regrowth (>75%, HRG IV) was best in group B (15 of 25, 60%), followed by group A (15 of 28, 53.6%) and, lastly, group C (0 of 25, 0%). A few patients reported mild pain and atrophy at injection sites, and pruritus and burning with betamethasone valerate foam and tacrolimus. Conclusion: Intralesional triamcinolone acetonide was the most effective, and betamethasone valerate foam was better than tacrolimus, in the management of localized AA. PMID:21769231
Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network.
Tu, Ching-Ting; Chan, Yu-Hsien; Chen, Yi-Chung
2016-08-01
A facial sketch synthesis system is proposed, featuring a 2D direct combined model (2DDCM)-based face-specific Markov network. In contrast to the existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches, which reproduce the unique drawing style of a particular artist, where this drawing style is learned from a data set consisting of a large number of image/sketch pairwise training samples. The synthesis system comprises three modules, namely, a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representing power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races even when such images are not included in the training data set. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests.
Jakobsen, Anders; Ploen, John; Vuong, Te; Appelt, Ane; Lindebjerg, Jan; Rafaelsen, Soren R.
2012-11-15
Purpose: Locally advanced rectal cancer represents a major therapeutic challenge. Preoperative chemoradiation therapy is considered standard, but little is known about the dose-effect relationship. The present study represents a dose-escalation phase III trial comparing 2 doses of radiation. Methods and Materials: The inclusion criteria were resectable T3 and T4 tumors with a circumferential margin of ≤5 mm on magnetic resonance imaging. The patients were randomized to receive 50.4 Gy in 28 fractions to the tumor and pelvic lymph nodes (arm A) or the same treatment supplemented with an endorectal boost given as high-dose-rate brachytherapy (10 Gy in 2 fractions; arm B). Concomitant chemotherapy, Uftoral 300 mg/m² and L-leucovorin 22.5 mg/d, was added to both arms on treatment days. The primary endpoint was complete pathologic remission. The secondary endpoints included tumor response and rate of complete resection (R0). Results: The study included 248 patients. No significant difference was found in toxicity or surgical complications between the 2 groups. Based on intention to treat, no significant difference was found in the complete pathologic remission rate between the 2 arms (18% and 18%). The rate of R0 resection was different in T3 tumors (90% and 99%; P=.03). The same applied to the rate of major response (tumor regression grade, 1+2), 29% and 44%, respectively (P=.04). Conclusions: This first randomized trial comparing 2 radiation doses indicated that the higher dose increased the rate of major response by 50% in T3 tumors. The endorectal boost is feasible, with no significant increase in toxicity or surgical complications.
On Markov Earth Mover’s Distance
Wei, Jie
2015-01-01
In statistics, pattern recognition and signal processing, it is of utmost importance to have an effective and efficient distance to measure the similarity between two distributions or sequences. In statistics this is referred to as the goodness-of-fit problem. Two leading goodness-of-fit methods are the chi-square and Kolmogorov–Smirnov distances. The strictly localized nature of these two measures hinders their practical utility for patterns and signals where the sample size is usually small. In view of this problem, Rubner and colleagues developed the earth mover’s distance (EMD) to allow for cross-bin moves in evaluating the distance between two patterns, which has found a broad spectrum of applications. EMD-L1 was later proposed to reduce the time complexity of EMD from super-cubic by one order of magnitude by exploiting the special L1 metric. EMD-hat was developed to turn the global EMD into a localized one by discarding long-distance earth movements. In this work, we introduce a Markov EMD (MEMD) by treating the source and destination nodes absolutely symmetrically. In MEMD, as in EMD-hat, the earth is only moved locally, as dictated by the degree d of the neighborhood system. Nodes that cannot be matched locally are handled by dummy source and destination nodes. Using this localized network structure, a greedy algorithm that is linear in the degree d and the number of nodes is then developed to evaluate the MEMD. Empirical studies on the use of MEMD on deterministic and statistical synthetic sequences and on SIFT-based image retrieval suggested encouraging performance. PMID:25983362
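The localized matching idea can be sketched for 1D histograms as follows. The real MEMD network (and its fully symmetric treatment of source and destination nodes) is more elaborate, so the nearest-first greedy rule and the fixed dummy-node penalty below should be read as illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def local_greedy_emd(p, q, d=1, dummy_cost=10.0):
    """Toy localized EMD between two equal-mass 1D histograms: earth may
    only move within a d-bin neighbourhood; mass that cannot be matched
    locally is routed to a dummy node at a fixed penalty.  A greedy
    nearest-first sketch of the idea, not the paper's algorithm."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    n, cost = len(p), 0.0
    for i in range(n):
        for off in range(d + 1):              # match nearest bins first
            for j in (i - off, i + off):
                if 0 <= j < n and p[i] > 0 and q[j] > 0:
                    m = min(p[i], q[j])       # movable mass
                    cost += m * abs(i - j)    # transport cost within d
                    p[i] -= m
                    q[j] -= m
    return cost + dummy_cost * p.sum()        # leftover mass -> dummy node
```

For example, `local_greedy_emd(np.array([1.0, 0.0]), np.array([0.0, 1.0]))` evaluates to 1.0 (one unit moved one bin), while mass displaced farther than `d` bins is charged the dummy penalty instead; the single pass over bins is what makes the evaluation linear in `d` and the number of nodes.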
Using Games to Teach Markov Chains
ERIC Educational Resources Information Center
Johnson, Roger W.
2003-01-01
Games are promoted as examples for classroom discussion of stationary Markov chains. In a game context Markov chain terminology and results are made concrete, interesting, and entertaining. Game length for several-player games such as "Hi Ho! Cherry-O" and "Chutes and Ladders" is investigated and new, simple formulas are given. Slight…
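The "simple formulas" for expected game length come from standard absorbing-chain theory: with Q the transient-to-transient block of the transition matrix, the fundamental matrix N = (I − Q)^(-1) gives expected visit counts, and N·1 the expected number of turns. A toy race-to-the-finish game (illustrative, not one of the games in the article):

```python
import numpy as np

# Toy "race to square 3": each turn advance 1 or 2 squares with equal
# probability; reaching square 3 (or beyond) ends the game.  Q is the
# transient-to-transient block of the transition matrix for squares 0, 1, 2.
Q = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^(-1)
expected_turns = N @ np.ones(3)    # expected turns from squares 0, 1, 2:
                                   # 2.25, 1.5, and 1.0 respectively
```

The same computation, with a much larger Q encoding the spinner and the chutes/ladders jumps, yields the expected length of games like Chutes and Ladders.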
Semi-Markov Unreliability-Range Evaluator
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1988-01-01
Reconfigurable, fault-tolerant systems modeled. Semi-Markov unreliability-range evaluator (SURE) computer program is software tool for analysis of reliability of reconfigurable, fault-tolerant systems. Based on new method for computing death-state probabilities of semi-Markov model. Computes accurate upper and lower bounds on probability of failure of system. Written in PASCAL.
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
An introduction to hidden Markov models.
Schuster-Böckler, Benjamin; Bateman, Alex
2007-06-01
This unit introduces the concept of hidden Markov models in computational biology. It describes them using simple biological examples, requiring as little mathematical knowledge as possible. The unit also presents a brief history of hidden Markov models and an overview of their current applications before concluding with a discussion of their limitations.
Beitler, Jonathan J.; Zhang, Qiang; Fu, Karen K.; Trotti, Andy; Spencer, Sharon A.; Jones, Christopher U.; Garden, Adam S.; Shenouda, George; Harris, Jonathan; Ang, Kian K.
2014-05-01
Purpose: To test whether altered radiation fractionation schemes (hyperfractionation [HFX], accelerated fractionation, continuous [AFX-C], and accelerated fractionation with split [AFX-S]) improved local-regional control (LRC) rates for patients with squamous cell cancers (SCC) of the head and neck when compared with standard fractionation (SFX) of 70 Gy. Methods and Materials: Patients with stage III or IV (or stage II base of tongue) SCC (n=1076) were randomized to 4 treatment arms: (1) SFX, 70 Gy/35 daily fractions/7 weeks; (2) HFX, 81.6 Gy/68 twice-daily fractions/7 weeks; (3) AFX-S, 67.2 Gy/42 fractions/6 weeks with a 2-week rest after 38.4 Gy; and (4) AFX-C, 72 Gy/42 fractions/6 weeks. The 3 experimental arms were to be compared with SFX. Results: With patients censored for LRC at 5 years, only the comparison of HFX with SFX was significantly different: HFX, hazard ratio (HR) 0.79 (95% confidence interval 0.62-1.00), P=.05; AFX-C, 0.82 (95% confidence interval 0.65-1.05), P=.11. With patients censored at 5 years, HFX improved overall survival (HR 0.81, P=.05). Prevalence of any grade 3, 4, or 5 toxicity at 5 years; any feeding tube use after 180 days; or feeding tube use at 1 year did not differ significantly when the experimental arms were compared with SFX. When 7-week treatments were compared with 6-week treatments, accelerated fractionation appeared to increase grade 3, 4 or 5 toxicity at 5 years (P=.06). When the worst toxicity per patient was considered by treatment only, the AFX-C arm seemed to trend worse than the SFX arm when grade 0-2 was compared with grade 3-5 toxicity (P=.09). Conclusions: At 5 years, only HFX improved LRC and overall survival for patients with locally advanced SCC without increasing late toxicity.
2012-01-01
Background Surgeons in the Netherlands, Canada and the US participate in the FAITH trial (Fixation using Alternative Implants for the Treatment of Hip fractures). Dutch sites are managed and visited by a financed central trial coordinator, whereas most Canadian and US sites have local study coordinators and receive per-patient payment. This study aimed to assess how these different trial management strategies affected trial performance. Methods Details related to obtaining ethics approval, time to trial start-up, inclusion, and percentage of completed follow-ups were collected for each trial site and compared. Pre-trial screening data were compared with actual inclusion rates. Results Median trial start-up ranged from 41 days (P25-P75 10-139) in the Netherlands to 232 days (P25-P75 98-423) in Canada (p = 0.027). The inclusion rate was highest in the Netherlands: a median of 1.03 patients (P25-P75 0.43-2.21) per site per month, representing 34.4% of the total eligible population. It was lowest in Canada: 0.14 inclusions (P25-P75 0.00-0.28), representing 3.9% of eligible patients (p < 0.001). The percentage of completed follow-ups was 83% for Canadian and Dutch sites and 70% for US sites (p = 0.217). Conclusions In this trial, a financed central trial coordinator managing all trial-related tasks at participating sites resulted in better trial progression and similar follow-up. It is therefore a suitable alternative to assigning these tasks to local research assistants. The central coordinator approach can enable smaller regional hospitals to participate in multicenter randomized controlled trials. Circumstances such as available budget, sample size, and geographical area should, however, be taken into account when choosing a management strategy. Trial Registration ClinicalTrials.gov: NCT00761813 PMID:22225733
Viani, Gustavo Arruda; Stefano, Eduardo Jose; Afonso, Sergio Luis
2009-08-01
Purpose: To determine in a meta-analysis whether the outcomes in men with localized prostate cancer treated with high-dose radiotherapy (HDRT) are better than those in men treated with conventional-dose radiotherapy (CDRT), by quantifying the effect of the total dose of radiotherapy on biochemical control (BC). Methods and Materials: The MEDLINE, EMBASE, CANCERLIT, and Cochrane Library databases, as well as the proceedings of annual meetings, were systematically searched to identify randomized, controlled studies comparing HDRT with CDRT for localized prostate cancer. To evaluate the dose-response relationship, we conducted a meta-regression analysis of BC ratios by means of weighted linear regression. Results: Seven RCTs with a total patient population of 2812 were identified that met the study criteria. Pooled results from these RCTs showed a significant reduction in the incidence of biochemical failure in those patients with prostate cancer treated with HDRT (p < 0.0001). However, there was no difference in the mortality rate (p = 0.38) and specific prostate cancer mortality rates (p = 0.45) between the groups receiving HDRT and CDRT. However, there were more cases of late Grade >2 gastrointestinal toxicity after HDRT than after CDRT. In the subgroup analysis, patients classified as being at low (p = 0.007), intermediate (p < 0.0001), and high risk (p < 0.0001) of biochemical failure all showed a benefit from HDRT. The meta-regression analysis also detected a linear correlation between the total dose of radiotherapy and biochemical failure (BC = -67.3 + [1.8 x radiotherapy total dose in Gy]; p = 0.04). Conclusions: Our meta-analysis showed that HDRT is superior to CDRT in preventing biochemical failure in low-, intermediate-, and high-risk prostate cancer patients, suggesting that this should be offered as a treatment for all patients, regardless of their risk status.
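The dose-response relationship reported above is a simple linear meta-regression, BC = -67.3 + 1.8 × dose (Gy). A minimal sketch of evaluating it at a few dose levels (the function encodes only the published regression line; the specific dose values chosen below are illustrative):

```python
def biochemical_control(dose_gy: float) -> float:
    """Meta-regression line from the abstract: BC (%) = -67.3 + 1.8 * dose (Gy)."""
    return -67.3 + 1.8 * dose_gy

# Each additional gray is associated with about 1.8 percentage points
# of biochemical control under this linear fit.
for dose in (64.0, 70.0, 78.0):
    print(dose, biochemical_control(dose))
```

Note the line is only valid over the dose range spanned by the pooled trials; extrapolating far outside it would not be meaningful.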
Wang, Weiming; Qin, Jing; Zhu, Lei; Ni, Dong; Chui, Yim-Pan; Heng, Pheng-Ann
2014-01-01
Due to the characteristic artifacts of ultrasound images, e.g., speckle noise, shadows and intensity inhomogeneity, traditional intensity-based methods usually have limited success on the segmentation of the fetal abdominal contour. This paper presents a novel approach to detect and measure the abdominal contour from fetal ultrasound images in two steps. First, a local phase-based measure called multiscale feature asymmetry (MSFA) is defined from the monogenic signal to detect the boundaries of the fetal abdomen. The MSFA measure is intensity invariant and provides an absolute measurement for the significance of features in the image. Second, in order to detect the ellipse that fits the abdominal contour, the iterative randomized Hough transform is employed to exclude the interferences of the inner boundaries, after which the detected ellipse gradually converges to the outer boundaries of the abdomen. Experimental results on clinical ultrasound images demonstrate the high agreement between our approach and the manual approach on the measurement of abdominal circumference (mean signed difference is 0.42% and correlation coefficient is 0.9973), which indicates that the proposed approach can be used as a reliable and accurate tool for obstetrical care and diagnosis.
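The randomized Hough transform used above fits five-parameter ellipse candidates from random point samples and keeps the best-supported one. As a simpler self-contained illustration of the same sample-and-vote idea, here is a sketch for circles (three points determine a candidate); the synthetic data, tolerances, and parameters are all illustrative, and the paper's iterative inner-boundary exclusion is not reproduced:

```python
import numpy as np

def circumcircle(p1, p2, p3):
    """Circle through three points; returns (cx, cy, r) or None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, float(np.hypot(ax - ux, ay - uy))

def randomized_hough_circle(points, n_trials=2000, tol=0.05, seed=0):
    """Randomized Hough transform: repeatedly fit a circle to 3 random
    points and keep the candidate supported by the most edge points."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best, best_votes = None, -1
    for _ in range(n_trials):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        cand = circumcircle(pts[i], pts[j], pts[k])
        if cand is None:
            continue
        cx, cy, r = cand
        votes = int(np.sum(np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r) < tol))
        if votes > best_votes:
            best, best_votes = cand, votes
    return best

# Noisy circle (centre (2, 1), radius 3) plus clutter standing in for inner boundaries:
rng = np.random.default_rng(3)
ang = rng.uniform(0, 2 * np.pi, 100)
edge = np.column_stack([2 + 3 * np.cos(ang), 1 + 3 * np.sin(ang)])
edge += 0.01 * rng.standard_normal(edge.shape)
clutter = rng.uniform(-4, 8, (30, 2))
print(randomized_hough_circle(np.vstack([edge, clutter])))
```

The voting step is what makes the method robust to the clutter points, which is the property the paper exploits to ignore inner abdominal boundaries.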
Souchier, E; D'Acapito, F; Noé, P; Blaise, P; Bernard, M; Jousseaume, V
2015-10-07
Conductive bridging random access memories (CBRAMs) are one of the most promising emerging technologies for the next generation of non-volatile memory. However, the lack of understanding of the switching mechanism at the nanoscale level prevents successful transfer to industry. In this paper, Ag/GeSx/W CBRAM devices are analyzed using depth selective X-ray Absorption Spectroscopy before and after switching. The study of the local environment around Ag atoms in such devices reveals that Ag is in two very distinct environments with short Ag-S bonds due to Ag dissolved in the GeSx matrix, and longer Ag-Ag bonds related to an Ag metallic phase. These experiments allow the conclusion that the switching process involves the formation of metallic Ag nano-filaments initiated at the Ag electrode. All these experimental features are well supported by ab initio molecular dynamics simulations showing that Ag favorably bonds to S atoms, and permit the proposal of a model at the microscopic level that can explain the instability of the conductive state in these Ag-GeSx CBRAM devices. Finally, the principle of the nondestructive method described here can be extended to other types of resistive memory concepts.
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose Urinary incontinence (UI) is a common chronic health condition, a problem specifically among elderly women, that impacts quality of life negatively. However, UI is usually viewed as a likely result of old age, and as such is generally not evaluated or managed appropriately. Many treatments are available to manage incontinence, such as bladder training and numerous surgical procedures such as the Burch colposuspension and the Sling procedure, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in a Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality-adjusted life years (QALY) and cost per QALY. TreeAge Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure is a more effective surgical intervention than the Burch. However, if a utility greater than a certain utility value, at which both procedures are equally effective, is assigned to persistent incontinence, the Burch procedure is more effective than the Sling procedure. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach to the comparative effectiveness analysis of available treatments for patients with UI, an important public health issue widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
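A Markov cohort model of the kind described accumulates discounted QALYs as a state distribution evolves under a transition matrix. A minimal sketch, with hypothetical states, transition probabilities, utilities, and discount rate (not the paper's calibrated values):

```python
import numpy as np

# Illustrative 3-state cohort model: 0 = continent, 1 = persistent incontinence, 2 = dead.
P = np.array([[0.92, 0.05, 0.03],
              [0.10, 0.87, 0.03],
              [0.00, 0.00, 1.00]])   # death is absorbing
utility = np.array([1.0, 0.75, 0.0])  # quality weight per state per yearly cycle
discount = 0.03

def expected_qalys(start_state: int, cycles: int) -> float:
    """Discounted QALYs for a cohort starting entirely in start_state."""
    dist = np.zeros(3)
    dist[start_state] = 1.0
    total = 0.0
    for t in range(cycles):
        total += (dist @ utility) / (1 + discount) ** t  # expected utility this cycle
        dist = dist @ P                                   # advance one cycle
    return total

print(round(expected_qalys(0, 20), 2))  # cohort entering after a successful procedure
print(round(expected_qalys(1, 20), 2))  # cohort with persistent incontinence
```

Comparing the two interventions then amounts to running the model with each procedure's transition probabilities and dividing incremental cost by incremental QALYs.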
Scaling random walks on arbitrary sets
NASA Astrophysics Data System (ADS)
Harris, Simon C.; Williams, David; Sibson, Robin
1999-01-01
Let I be a countably infinite set of points in ℝ, which we can write as I = {u_i : i ∈ ℤ}, with u_i
Topological Charge Evolution in the Markov-Chain of QCD
Derek Leinweber; Anthony Williams; Jian-bo Zhang; Frank Lee
2004-04-01
The topological charge is studied on lattices of large physical volume and fine lattice spacing. We illustrate how a parity transformation on the SU(3) link-variables of lattice gauge configurations reverses the sign of the topological charge and leaves the action invariant. Random applications of the parity transformation are proposed to traverse from one topological charge sign to the other. The transformation provides an improved unbiased estimator of the ensemble average and is essential in improving the ergodicity of the Markov chain process.
Alomari, Yazan M; Sheikh Abdullah, Siti Norul Huda; MdZin, Reena Rahayu; Omar, Khairuddin
2015-01-01
Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions from the whole slide under the microscope; these are called focus-point regions. This procedure leads to high interobserver variability, is time consuming and tedious, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, RPPD can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods and achieves good performance when evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved.
Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph
2014-07-01
the generating function form of the Markov propagator of the random walk. This can be easily generalized to other models simply by specifying the...detailed information about the propagator than the bound on consensus. VI. CONCLUSIONS We have successfully derived exact solutions to the voter
The Autonomous Duck: Exploring the Possibilities of a Markov Chain Model in Animation
NASA Astrophysics Data System (ADS)
Villegas, Javier
This document reports the construction of a framework for the generation of animations based on a Markov chain model of the different poses of a drawn character. The model was implemented and is demonstrated with the animation of a virtual duck in a random walk. Some potential uses of this model in interpolation and in the generation of in-between frames are also explored.
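The core of such a framework is a random walk over a pose graph: each pose is a state, and the next frame is drawn from the current pose's transition distribution. A minimal sketch, with hypothetical poses and transition weights (in practice these would come from the artist's drawings, not hand-tuned values):

```python
import random

poses = ["stand", "step_left", "step_right", "flap"]
# Transition table: pose -> list of (next_pose, weight). Weights are illustrative.
T = {
    "stand":      [("stand", 0.5), ("step_left", 0.2), ("step_right", 0.2), ("flap", 0.1)],
    "step_left":  [("step_right", 0.7), ("stand", 0.3)],
    "step_right": [("step_left", 0.7), ("stand", 0.3)],
    "flap":       [("stand", 1.0)],
}

def random_walk(start: str, n: int, seed: int = 0) -> list:
    """Sample an n-frame pose sequence by walking the Markov chain."""
    rng = random.Random(seed)
    frames, state = [start], start
    for _ in range(n - 1):
        nxt, weights = zip(*T[state])
        state = rng.choices(nxt, weights=weights)[0]
        frames.append(state)
    return frames

print(random_walk("stand", 8))
```

Because transitions are only allowed along edges of the pose graph, every sampled sequence is guaranteed to move between drawably adjacent poses, which is what makes the output usable as an animation.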
Appelt, Ane L.; Vogelius, Ivan R.; Pløen, John; Rafaelsen, Søren R.; Lindebjerg, Jan; Havelund, Birgitte M.; Bentzen, Søren M.; Jakobsen, Anders
2014-09-01
Purpose/Objective(s): Mature data on tumor control and survival are presented from a randomized trial of the addition of a brachytherapy boost to long-course neoadjuvant chemoradiation therapy (CRT) for locally advanced rectal cancer. Methods and Materials: Between March 2005 and November 2008, 248 patients with T3-4N0-2M0 rectal cancer were prospectively randomized to either long-course preoperative CRT (50.4 Gy in 28 fractions, per oral tegafur-uracil and L-leucovorin) alone or the same CRT schedule plus a brachytherapy boost (10 Gy in 2 fractions). The primary trial endpoint was pathologic complete response (pCR) at the time of surgery; secondary endpoints included overall survival (OS), progression-free survival (PFS), and freedom from locoregional failure. Results: Results for the primary endpoint have previously been reported. This analysis presents survival data for the 224 patients in the Danish part of the trial. In all, 221 patients (111 control arm, 110 brachytherapy boost arm) had data available for analysis, with a median follow-up time of 5.4 years. Despite a significant increase in tumor response at the time of surgery, no differences in 5-year OS (70.6% vs 63.6%, hazard ratio [HR] = 1.24, P=.34) and PFS (63.9% vs 52.0%, HR=1.22, P=.32) were observed. Freedom from locoregional failure at 5 years was 93.9% and 85.7% (HR=2.60, P=.06) in the standard and brachytherapy arms, respectively. There was no difference in the prevalence of stoma. Explorative analysis based on stratification for tumor regression grade and resection margin status indicated the presence of response migration. Conclusions: Despite increased pathologic tumor regression at the time of surgery, we observed no benefit on late outcome. Improved tumor regression does not necessarily lead to a relevant clinical benefit when the neoadjuvant treatment is followed by high-quality surgery.
Chatterjee, Dattatreyo; Ghosh, Sudip Kumar; Sen, Sukanta; Sarkar, Saswati; Hazra, Avijit; De, Radharaman
2016-01-01
Objective: Epidermal dermatophyte infections most commonly manifest as tinea corporis or tinea cruris. Topical azole antifungals are commonly used in their treatment but literature suggests that most require twice-daily application and provide lower cure rates than the allylamine antifungal terbinafine. We conducted a head-to-head comparison of the effectiveness of the once-daily topical azole, sertaconazole, with terbinafine in these infections. Materials and Methods: We conducted a randomized, observer-blind, parallel group study (Clinical Trial Registry India [CTRI]/2014/09/005029) with adult patients of either sex presenting with localized lesions. The clinical diagnosis was confirmed by potassium hydroxide smear microscopy of skin scrapings. After baseline assessment of erythema, scaling, and pruritus, patients applied either of the two study drugs once daily for 2 weeks. If clinical cure was not seen at 2 weeks, but improvement was noted, application was continued for further 2 weeks. Patients deemed to be clinical failure at 2 weeks were switched to oral antifungals. Results: Overall 88 patients on sertaconazole and 91 on terbinafine were analyzed. At 2 weeks, the clinical cure rates were comparable at 77.27% (95% confidence interval [CI]: 68.52%–86.03%) for sertaconazole and 73.63% (95% CI 64.57%–82.68%) for terbinafine (P = 0.606). Fourteen patients in either group improved and on further treatment showed complete healing by another 2 weeks. The final cure rate at 4 weeks was also comparable at 93.18% (95% CI 88.75%–97.62%) and 89.01% (95% CI 82.59%–95.44%), respectively (P = 0.914). At 2 weeks, 6 (6.82%) sertaconazole and 10 (10.99%) terbinafine recipients were considered as “clinical failure.” Tolerability of both preparations was excellent. Conclusion: Despite the limitations of an observer-blind study without microbiological support, the results suggest that once-daily topical sertaconazole is as effective as terbinafine in localized tinea
[A Hyperspectral Imagery Anomaly Detection Algorithm Based on Gauss-Markov Model].
Gao, Kun; Liu, Ying; Wang, Li-jing; Zhu, Zhen-yu; Cheng, Hao-bo
2015-10-01
With the development of spectral imaging technology, hyperspectral anomaly detection is becoming more widely used in remote sensing imagery processing. The traditional RX anomaly detection algorithm neglects the spatial correlation of images. Besides, it does not validly reduce the data dimension, which costs too much processing time and shows low validity on hyperspectral data. Hyperspectral images follow a Gauss-Markov random field (GMRF) model in the spatial and spectral dimensions. The inverse of the covariance matrix can be calculated directly from the Gauss-Markov parameters, which avoids the huge calculation over the full hyperspectral data. This paper proposes an improved RX anomaly detection algorithm based on a three-dimensional GMRF. The hyperspectral imagery data are simulated with the GMRF model, and the GMRF parameters are estimated with the approximated maximum likelihood method. The detection operator is constructed from the GMRF parameter estimates. Each pixel under test is taken as the centre of a local optimization window, called the GMRF detection window. The abnormal degree is calculated from the mean vector and the inverse covariance matrix, both computed within the window. The image is detected pixel by pixel as the GMRF window moves. The traditional RX detection algorithm, the regional hypothesis detection algorithm based on GMRF, and the algorithm proposed in this paper are simulated with AVIRIS hyperspectral data. Simulation results show that the proposed anomaly detection method is able to improve detection efficiency and reduce the false alarm rate. We collected operation time statistics for the three algorithms in the same computing environment. The results show that the proposed algorithm reduces the operation time by 45.2%, demonstrating good computing efficiency.
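For reference, the classic local RX detector that the paper improves on scores each pixel by its Mahalanobis distance to the statistics of a surrounding window. A minimal sketch of that baseline (not the GMRF-accelerated version; window size, regularization, and the synthetic cube are illustrative):

```python
import numpy as np

def local_rx(cube: np.ndarray, half: int = 5, eps: float = 1e-6) -> np.ndarray:
    """Sliding-window RX anomaly score for a (rows, cols, bands) cube:
    Mahalanobis distance of each pixel to its local window statistics."""
    rows, cols, bands = cube.shape
    scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            r0, r1 = max(0, r - half), min(rows, r + half + 1)
            c0, c1 = max(0, c - half), min(cols, c + half + 1)
            win = cube[r0:r1, c0:c1].reshape(-1, bands)
            mu = win.mean(axis=0)
            cov = np.cov(win, rowvar=False) + eps * np.eye(bands)  # regularized
            d = cube[r, c] - mu
            scores[r, c] = d @ np.linalg.solve(cov, d)
    return scores

# A bright synthetic target in a flat noisy background should get the top score.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (32, 32, 5))
img[16, 16] += 8.0
s = local_rx(img)
print(np.unravel_index(s.argmax(), s.shape))
```

The per-pixel covariance inversion in this loop is exactly the cost that the GMRF parameterization in the paper is designed to avoid.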
Baujat, Bertrand; Audry, Helene; Bourhis, Jean; Chan, Anthony T.C.; Onat, Haluk; Chua, Daniel T.T.; Kwong, Dora L.W.; Al-Sarraf, Muhyi; Chi, K.-H.; Hareyama, Masato; Leung, Sing F.; Thephamongkhol, Kullathorn; Pignon, Jean-Pierre . E-mail: jppignon@igr.fr
2006-01-01
Objectives: To study the effect of adding chemotherapy to radiotherapy (RT) on overall survival and event-free survival for patients with nasopharyngeal carcinoma. Methods and Materials: This meta-analysis used updated individual patient data from randomized trials comparing chemotherapy plus RT with RT alone in locally advanced nasopharyngeal carcinoma. The log-rank test, stratified by trial, was used for comparisons, and the hazard ratios of death and failure were calculated. Results: Eight trials with 1753 patients were included. One trial with a 2 x 2 design was counted twice in the analysis. The analysis included 11 comparisons using the data from 1975 patients. The median follow-up was 6 years. The pooled hazard ratio of death was 0.82 (95% confidence interval, 0.71-0.94; p = 0.006), corresponding to an absolute survival benefit of 6% at 5 years from the addition of chemotherapy (from 56% to 62%). The pooled hazard ratio of tumor failure or death was 0.76 (95% confidence interval, 0.67-0.86; p < 0.0001), corresponding to an absolute event-free survival benefit of 10% at 5 years from the addition of chemotherapy (from 42% to 52%). A significant interaction was observed between the timing of chemotherapy and overall survival (p = 0.005), explaining the heterogeneity observed in the treatment effect (p = 0.03), with the highest benefit resulting from concomitant chemotherapy. Conclusion: Chemotherapy led to a small, but significant, benefit for overall survival and event-free survival. This benefit was essentially observed when chemotherapy was administered concomitantly with RT.
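Pooled hazard ratios of this kind are combined on the log scale. A minimal sketch of standard fixed-effect inverse-variance pooling from published HRs and 95% CIs; the inputs below are illustrative, and the actual meta-analysis used individual patient data with stratified log-rank statistics rather than this summary-level shortcut:

```python
import math

def pooled_hr(hrs_and_cis):
    """Fixed-effect inverse-variance pooling of hazard ratios.

    hrs_and_cis: iterable of (hr, ci_low, ci_high) with 95% CIs.
    The standard error of log HR is recovered from the CI width.
    """
    num = den = 0.0
    for hr, lo, hi in hrs_and_cis:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2            # inverse-variance weight
        num += w * log_hr
        den += w
    return math.exp(num / den)

# Illustrative trial-level inputs (not the values from the paper):
print(round(pooled_hr([(0.85, 0.70, 1.03), (0.78, 0.64, 0.95), (0.90, 0.72, 1.12)]), 3))
```

Precise trials (narrow CIs) receive larger weights, so the pooled estimate is pulled toward the best-determined studies.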
Ma, Junxun; Yao, Sheng; Li, Xiao-Song; Kang, Huan-Rong; Yao, Fang-Fang; Du, Nan
2015-10-01
Locally advanced gastric cancer (LAGC) is best treated with surgical resection. Bevacizumab in combination with chemotherapy has shown promising results in treating advanced gastric cancer. This study aimed to investigate the efficacy of neoadjuvant chemotherapy using the docetaxel/oxaliplatin/5-FU (DOF) regimen and bevacizumab in LAGC patients. Eighty LAGC patients were randomized to receive DOF alone (n = 40) or DOF plus bevacizumab (n = 40) as neoadjuvant therapy before surgery. The lesions were evaluated at baseline and during treatment. Circulating tumor cells (CTCs) were counted using the FISH test. Patients were followed up for 3 years to analyze disease-free survival (DFS) and overall survival (OS). The total response rate was significantly higher in the DOF plus bevacizumab group than in the DOF group (65% vs 42.5%, P = 0.0436). The addition of bevacizumab significantly increased the surgical resection rate and the R0 resection rate (P < 0.05). The DOF plus bevacizumab group showed significantly greater reduction in CTC counts after neoadjuvant therapy in comparison with the DOF group (P = 0.0335). Although the DOF plus bevacizumab group had significantly improved DFS compared with the DOF group (15.2 months vs 12.3 months, P = 0.013), the 2 groups did not differ significantly in OS (17.6 ± 1.8 months vs 16.4 ± 1.9 months, P = 0.776). Cox proportional hazards model analysis showed that the number of metastatic lymph nodes, CTC reduction, R0 resection, and neoadjuvant therapy are independent prognostic factors for patients with LAGC. Neoadjuvant DOF plus bevacizumab can improve the R0 resection rate and DFS in LAGC. These beneficial effects might be associated with the reduction in CTC counts.
Searching for convergence in phylogenetic Markov chain Monte Carlo.
Beiko, Robert G; Keith, Jonathan M; Harlow, Timothy J; Ragan, Mark A
2006-08-01
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.
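The replicated-run diagnostics described above compare bipartition (split) posterior estimates across independent chains and then pool the runs into a "metachain". A minimal sketch of that workflow using a simple stand-in statistic, the maximum across-run spread of each split's sampled frequency (the paper's delta and epsilon statistics are defined differently; the frequency matrix below is illustrative):

```python
import numpy as np

def max_split_spread(freqs_by_run: np.ndarray) -> float:
    """freqs_by_run: (n_runs, n_bipartitions) estimated posterior probabilities.
    Returns the largest across-run disagreement over any single bipartition."""
    return float((freqs_by_run.max(axis=0) - freqs_by_run.min(axis=0)).max())

def metachain_estimate(freqs_by_run: np.ndarray) -> np.ndarray:
    """Pool replicated equal-length runs by simple averaging."""
    return freqs_by_run.mean(axis=0)

# Three replicate runs, three bipartitions (illustrative numbers):
runs = np.array([[0.91, 0.40, 0.05],
                 [0.88, 0.55, 0.07],
                 [0.93, 0.35, 0.04]])
print(max_split_spread(runs))      # a large spread flags poor mixing on that split
print(metachain_estimate(runs))
```

Here the middle bipartition disagrees across runs far more than the others, which is the multimodality signature the study reports for larger taxon sets.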
Statistics and generation of non-Markov phase screens
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail; Baker, Gary
2016-09-01
The statistics of the random phase screens used for the modeling of beam propagation and imaging through the turbulent atmosphere are currently based on the Markov approximation (MA) for wave propagation. This includes the phase structure functions of individual screens and the use of statistically independent screens for multi-screen split-step simulation of wave propagation. As propagation modeling progresses to address deep turbulence conditions, an increased number of phase screens is required to accurately describe the multiple scattering. This makes the MA a critical limitation, both because the phase statistics of a thin turbulent layer do not follow the MA, and because closely spaced screens cannot be considered statistically and functionally independent. A recently introduced sparse-spectrum (SS) model of statistically homogeneous random fields makes it possible to generate 3-D samples of refractive-index fluctuations with prescribed spectral density at a very reasonable computational cost. This leads to the generation of phase screen sets that are free from the limitations of the MA. We investigated the statistics of the individual phase screens and the cross-correlations between pairs of phase screens and found that the thickness Δz of the turbulent layer replaced by the phase screen is a new parameter defining the phase statistics in the non-Markov case. SS-based numerical algorithms for generation of the 3-D samples of the turbulent refractive index, and for the phase screen sets, are presented. We also compare split-step simulation results for the traditional MA and non-Markov screens.
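The sparse-spectrum idea represents a random field as a finite sum of plane waves with random wavevectors and spectrally weighted amplitudes. A heavily simplified single-screen sketch (the wavenumber sampling, Kolmogorov-like weighting, and normalization below are crude illustrative choices, not the paper's careful spectral partition):

```python
import numpy as np

def sparse_spectrum_screen(n=128, n_waves=200, L=1.0, seed=0):
    """Toy sparse-spectrum phase screen: a sum of n_waves random plane waves
    with amplitudes weighted by a Kolmogorov-like power law."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, n)
    X, Y = np.meshgrid(x, x)
    # Log-uniform wavenumber magnitudes between the outer and grid Nyquist scales:
    kappa = np.exp(rng.uniform(np.log(2 * np.pi / L),
                               np.log(2 * np.pi * n / (2 * L)), n_waves))
    theta = rng.uniform(0, 2 * np.pi, n_waves)   # random propagation directions
    phase0 = rng.uniform(0, 2 * np.pi, n_waves)  # random phases
    amp = kappa ** (-11 / 6)                     # Kolmogorov-like weighting
    amp /= np.sqrt((amp ** 2).sum())             # normalize total variance to O(1)
    screen = np.zeros((n, n))
    for a, k, t, p in zip(amp, kappa, theta, phase0):
        screen += a * np.cos(k * (X * np.cos(t) + Y * np.sin(t)) + p)
    return screen

s = sparse_spectrum_screen()
print(s.shape, round(float(s.std()), 3))
```

Because the spectrum is sampled rather than gridded, correlated screen pairs for a finite-thickness layer can be built by reusing wavevectors across screens, which is the property the non-Markov screen sets rely on.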
Wolf, Thomas Gerhard; Wolf, Dominik; Callaway, Angelika; Below, Dagna; d'Hoedt, Bernd; Willershausen, Brita; Daubländer, Monika
2016-01-01
This prospective randomized clinical crossover trial was designed to compare hypnosis and local anesthesia for experimental dental pain relief. Pain thresholds of the dental pulp were determined. A targeted standardized pain stimulus was applied and rated on the Visual Analogue Scale (0-10). The pain threshold was lower under hypnosis (58.3 ± 17.3, p < .001) and maximal (80.0) under local anesthesia. The pain stimulus was scored higher under hypnosis (3.9 ± 3.8) than with local anesthesia (0.0, p < .001). Local anesthesia was superior to hypnosis and is a safe and effective method for pain relief in dentistry. Hypnosis seems to produce effects similar to those observed under sedation. It can be used in addition to local anesthesia and, in individual cases, as an alternative for pain control in dentistry.
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
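The unconstrained counterpart of this posterior sampling problem has a conjugate solution: with a flat prior, each transition-matrix row is Dirichlet-distributed given the observed transition counts. A minimal sketch of that baseline (the reversibility and sparsity constraints handled by the paper's Gibbs sampler are omitted here, and the count matrix is illustrative):

```python
import numpy as np

def sample_transition_matrices(counts: np.ndarray, n_samples: int, seed: int = 0):
    """Sample transition matrices from the posterior given transition counts,
    using independent Dirichlet rows (flat prior). This is the unconstrained
    baseline; enforcing reversibility requires a dedicated Gibbs sampler."""
    rng = np.random.default_rng(seed)
    n = counts.shape[0]
    out = np.empty((n_samples, n, n))
    for s in range(n_samples):
        for i in range(n):
            out[s, i] = rng.dirichlet(counts[i] + 1.0)  # +1 encodes the flat prior
    return out

counts = np.array([[90, 10], [20, 80]])   # observed transitions (illustrative)
samples = sample_transition_matrices(counts, 1000)
# Posterior mean of P[0,1] should sit near (10 + 1) / (100 + 2):
print(samples[:, 0, 1].mean())
```

The spread of these samples, pushed through any observable of interest (stationary probabilities, committors, cluster memberships), gives exactly the kind of uncertainty quantification the abstract describes.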
Application of Markov Graphs in Marketing
NASA Astrophysics Data System (ADS)
Bešić, C.; Sajfert, Z.; Đorđević, D.; Sajfert, V.
2007-04-01
The applications of Markov process theory in marketing are discussed. It turned out that Markov processes have a wide field of applications. The advancement of marketing through the use of convolutions of stationary Markov distributions is analysed. It turned out that the convolution distribution gives an average net profit that is two times higher than the one obtained with the usual Markov distribution. This can be achieved if one selling chain is divided into two parts with different ratios of output and input frequencies. The stability of the marketing system was examined by the use of conforming coefficients. It was shown, by means of the Jensen inequality, that the system remains stable if the initial capital is higher than the averaged losses.
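Analyses of this kind rest on the stationary distribution of the selling chain, the solution of πP = π with Σπ = 1. A minimal sketch with a hypothetical two-state chain (the transition probabilities are illustrative, not taken from the paper):

```python
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    """Solve pi @ P = pi, sum(pi) = 1 as a least-squares linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # stationarity rows + normalization
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical two-state selling chain (e.g., browsing vs. buying):
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
print(stationary_distribution(P))  # approximately [0.571, 0.429]
```

For this 2-state chain the answer is available in closed form, π = (0.4, 0.3)/0.7, which the solver reproduces; average long-run profit is then the π-weighted average of per-state profits.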
NASA Astrophysics Data System (ADS)
Ma, Jie; Wang, Lin-Wang
2015-03-01
Perovskite-based solar cells have achieved high solar-energy conversion efficiencies and attracted wide attention. Despite the rapid progress in solar-cell devices, many fundamental issues of the hybrid perovskites have not been fully understood. Experimentally, it is well known that in CH3NH3PbI3 the organic molecules CH3NH3 are randomly orientated at room temperature, but the impact of this random molecular orientation has not been investigated. Using linear-scaling ab initio methods, we have calculated the electronic structures of the tetragonal phase of CH3NH3PbI3 with randomly orientated organic molecules in large supercells of up to ~20,000 atoms. Due to the dipole moment of the organic molecule, the random orientation creates a novel system with long-range potential fluctuations unlike alloys or other conventional disordered systems. We find that the charge densities of the conduction-band minimum and the valence-band maximum are localized separately at the nanoscale due to the potential fluctuations. The charge localization causes electron-hole separation and reduces carrier recombination rates, which may contribute to the long carrier lifetimes observed in experiments. We have also proposed a model to explain the charge localization.
Semi-Markov Unreliability Range Evaluator
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
The Semi-Markov Unreliability Range Evaluator (SURE) computer program is a software tool for the analysis of reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds on the probabilities of death states for a large class of semi-Markov mathematical models, not merely those reduced to critical-pair architectures.
Exact significance test for Markov order
NASA Astrophysics Data System (ADS)
Pethel, S. D.; Hahs, D. W.
2014-02-01
We describe an exact significance test of the null hypothesis that a Markov chain is nth order. The procedure utilizes surrogate data to yield an exact test statistic distribution valid for any sample size. Surrogate data are generated using a novel algorithm that guarantees, per shot, a uniform sampling from the set of sequences that exactly match the nth order properties of the observed data. Using the test, the Markov order of Tel Aviv rainfall data is examined.
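The paper's algorithm draws surrogates uniformly from all sequences that exactly match the observed nth-order statistics; implementing that sampler is involved, but the logic of the test can be sketched for the simplest case, n = 0, where random permutations are exact surrogates (they preserve the symbol counts). Everything below is a simplified illustration, not the authors' procedure:

```python
import random

def order0_test(seq, n_surrogates=999, seed=0):
    """Exact-style significance test of H0: the sequence is 0th order
    (i.i.d.). Surrogates are random permutations, which preserve the
    symbol counts (the 0th-order statistics) of the observed data.
    The statistic counts same-symbol adjacencies, which an i.i.d.
    sequence should not show in excess."""
    rng = random.Random(seed)
    stat = lambda s: sum(a == b for a, b in zip(s, s[1:]))
    observed = stat(seq)
    hits = 0
    s = list(seq)
    for _ in range(n_surrogates):
        rng.shuffle(s)
        if stat(s) >= observed:
            hits += 1
    # The observed sequence itself counts as one draw from the null.
    return (hits + 1) / (n_surrogates + 1)

# A strongly persistent sequence should reject the i.i.d. hypothesis.
sticky = [0] * 20 + [1] * 20
p = order0_test(sticky)
```

For n ≥ 1 the surrogates must instead preserve the nth-order transition counts, which is exactly the harder sampling problem the paper's novel algorithm solves.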
NASA Astrophysics Data System (ADS)
Olivares, G.; Teferle, F. N.
2013-12-01
Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravimeters or mean sea level (MSL), contain time-correlated noise, which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters so that, via Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. The algorithm automatically optimizes the Markov chain step size and assesses convergence by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and also apply it to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates from the two methods are fairly equivalent, the results suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable, and detects multimodality.
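As a minimal illustration of the MCMC idea described here (a toy random-walk Metropolis sampler for a one-parameter posterior, not the authors' power-law noise model), a single chain yields both the parameter estimate and its uncertainty by Monte Carlo integration:

```python
import math
import random

def metropolis(logpost, x0, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis: a minimal MCMC sampler. The returned
    draws approximate the posterior, so point estimates and
    uncertainties come out of the same sample simultaneously."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0, step)          # propose a move
        lpp = logpost(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain[n_steps // 2:]              # discard burn-in

# Toy problem: posterior of a mean under unit-variance Gaussian noise
# and a flat prior; synthetic data drawn around mu = 3.
data_rng = random.Random(0)
data = [3.0 + data_rng.gauss(0, 1) for _ in range(200)]
logpost = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)

samples = metropolis(logpost, x0=0.0)
mu_hat = sum(samples) / len(samples)
sigma_mu = (sum((s - mu_hat) ** 2 for s in samples) / len(samples)) ** 0.5
```

In the geodetic setting the scalar `mu` would be replaced by the vector of deterministic parameters, noise amplitudes, and spectral index, but the estimate-plus-uncertainty-in-one-pass property is the same.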
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of using CAME to generate appropriate semi-Markov models of fault-handling processes.
Mell, Loren K. . E-mail: lmell@radonc.uchicago.edu; Malik, Renuka; Komaki, Ritsuko; Movsas, Benjamin; Swann, R. Suzanne; Langer, Corey; Antonadou, Dosia; Koukourakis, Michael
2007-05-01
Purpose: Amifostine can reduce the cytotoxic effects of chemotherapy and radiotherapy in patients with locally advanced non-small-cell lung cancer, but concerns remain regarding its possible tumor-protective effects. Studies with sufficient statistical power to address this question are lacking. Methods and Materials: We performed a meta-analysis of all published clinical trials involving locally advanced non-small-cell lung cancer patients treated with radiotherapy with or without chemotherapy, who had been randomized to treatment with amifostine vs. no amifostine or placebo. Random effects estimates of the relative risk of overall, partial, and complete response were obtained. Results: Seven randomized trials involving 601 patients were identified. Response rate data were available for six studies (552 patients). The pooled relative risk (RR) estimate was 1.07 (95% confidence interval, 0.97-1.18; p = 0.18), 1.21 (95% confidence interval, 0.83-1.78; p = 0.33), and 0.99 (95% confidence interval, 0.78-1.26; p = 0.95) for overall, complete, and partial response, respectively (a RR >1 indicates improvement in response with amifostine compared with the control arm). The results were similar after sensitivity analyses. No evidence was found of treatment effect heterogeneity across the studies. Conclusions: Amifostine has no effect on tumor response in patients with locally advanced non-small-cell lung cancer treated with radiotherapy with or without chemotherapy.
[Decision analysis in radiology using Markov models].
Golder, W
2000-01-01
Markov models (multistate transition models) are mathematical tools to simulate a cohort of individuals followed over time in order to assess the prognosis resulting from different strategies. They are applied on the assumption that persons are in one of a finite number of states of health (Markov states). Each state is assigned a transition probability as well as an incremental value. Probabilities may be chosen constant or varying over time according to predefined rules. The time horizon is divided into equal increments (Markov cycles). The model calculates quality-adjusted life expectancy employing real-life units and values, summing up the length of time spent in each health state adjusted for objective outcomes and subjective appraisal. The resulting measure of prognosis for a given strategy is analogous to the utility in common decision trees. Markov models can be evaluated by matrix algebra, probabilistic cohort simulation, or Monte Carlo simulation. They have been applied to assess the relative benefits and risks of a limited number of diagnostic and therapeutic procedures in radiology. More interventions should be submitted to Markov analyses in order to elucidate their cost-effectiveness.
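The cohort-simulation evaluation mentioned above can be sketched in a few lines. The states, transition probabilities, and quality weights below are invented for illustration; a real radiology application would take them from clinical data:

```python
import numpy as np

# Hypothetical 3-state Markov cohort model (Well, Sick, Dead) with a
# one-year cycle; probabilities and utilities are illustrative only.
P = np.array([[0.90, 0.08, 0.02],   # from Well
              [0.00, 0.80, 0.20],   # from Sick
              [0.00, 0.00, 1.00]])  # Dead is absorbing
utility = np.array([1.0, 0.6, 0.0]) # quality weight per state-year

cohort = np.array([1.0, 0.0, 0.0])  # everyone starts Well
qale = 0.0
for _ in range(100):                # run cycles until the cohort is absorbed
    qale += cohort @ utility        # quality-adjusted years this cycle
    cohort = cohort @ P             # matrix algebra: one Markov cycle
```

Each pass through the loop is one Markov cycle: the utility-weighted occupancy is accumulated and the cohort is propagated by matrix algebra, two of the evaluation strategies the abstract lists.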
Tu, Ching-Ting; Chan, Yu-Hsien; Chen, Yi-Chung
2016-05-20
A facial sketch synthesis system is proposed featuring a two-dimensional direct combined model (2DDCM)-based face-specific Markov network. In contrast to existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches which reproduce the unique drawing style of a particular artist, where this drawing style is learned from a dataset consisting of a large number of image/sketch pairwise training samples. The synthesis system comprises three modules, namely a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representational power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races, even when such images are not included in the training dataset. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests.
NASA Astrophysics Data System (ADS)
Tanikawa, Seiya; Kino, Hisashi; Fukushima, Takafumi; Koyanagi, Mitsumasa; Tanaka, Tetsu
2016-04-01
Three-dimensional (3D) ICs have many advantages: they can enhance IC performance without scaling down the transistor size. However, a 3D IC suffers from mechanical stresses inside its Si substrates owing to the 3D stacking structure, and these negatively affect transistor performance through, for example, carrier mobility changes. One of these mechanical stresses is the local bending stress due to organic adhesive shrinkage among the stacked IC chips. In this paper, we propose an evaluation method for the in-plane local stress distribution in stacked IC chips using the retention time modulation of a dynamic random access memory (DRAM) cell array. We fabricated a test structure composed of a DRAM chip bonded on a Si interposer with dummy Cu/Sn microbumps. As a result, we clarified that the DRAM cell array can precisely evaluate the in-plane local stress distribution in the stacked IC chips.
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
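The core step described above, estimating transition probabilities of the error-state chain from observed test runs, can be sketched as follows. The normal-approximation confidence intervals here are a simplification; the actual experiment design may construct its intervals differently:

```python
import math
from collections import Counter

def estimate_transitions(states):
    """Maximum-likelihood estimates of Markov-chain transition
    probabilities from one observed state sequence, with rough 95%
    normal-approximation confidence intervals (illustrative only).
    Returns {(from, to): (estimate, lower, upper)}."""
    pair_counts = Counter(zip(states, states[1:]))
    from_counts = Counter(states[:-1])
    est = {}
    for (i, j), c in sorted(pair_counts.items()):
        n = from_counts[i]
        p = c / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)
        est[(i, j)] = (p, max(0.0, p - half), min(1.0, p + half))
    return est

# Example: a short simulated error-state trace of a program under test.
trace = ["ok", "ok", "err", "ok", "ok", "ok", "err", "err", "ok", "ok"]
est = estimate_transitions(trace)
```

Feeding interval endpoints (rather than point estimates) through the reliability model is what turns these per-transition intervals into a system reliability statement at a chosen confidence level.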
Validity of the Markov approximation in ocean acoustics.
Henyey, Frank S; Ewart, Terry E
2006-01-01
Moment equations and path integrals for wave propagation in random media have been applied to many ocean acoustics problems. Both these techniques make use of the Markov approximation. The expansion parameter, which must be less than one for the Markov approximation to be valid, is the subject of this paper. There is a standard parameter (the Kubo number) which various authors have shown to be sufficient. Fourth moment equations have been successfully used to predict the experimentally measured frequency spectrum of intensity in the mid-ocean acoustic transmission experiment (MATE). Yet, in spite of this success, the Kubo number is greater than 1 for the measured index of refraction variability for MATE, arriving at a contradiction. Here, that contradiction is resolved by showing that the Kubo parameter is far too pessimistic for the ocean case. Using the methodology of van Kampen, another parameter is found which appears to be both necessary and sufficient, and is much smaller than the Kubo number when phase fluctuations are dominated by large scales in the medium. This parameter is shown to be small for the experimental regime of MATE, justifying the applications of the moment equations to that experiment.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put the nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling. We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is
Protein family classification using sparse Markov transducers.
Eskin, E; Grundy, W N; Singer, Y
2000-01-01
In this paper we present a method for classifying proteins into families using sparse Markov transducers (SMTs). Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wild-cards in the conditioning sequences. Because substitutions of amino acids are common in protein families, incorporating wildcards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. We also present efficient data structures to improve the memory usage of the models. We evaluate SMTs by building protein family classifiers using the Pfam database and compare our results to previously published results.
Entropy production fluctuations of finite Markov chains
NASA Astrophysics Data System (ADS)
Jiang, Da-Quan; Qian, Min; Zhang, Fu-Xi
2003-09-01
For almost every trajectory segment over a finite time span of a finite Markov chain with any given initial distribution, the logarithm of the ratio of its probability to that of its time-reversal converges exponentially to the entropy production rate of the Markov chain. The large deviation rate function has a symmetry of Gallavotti-Cohen type, which is called the fluctuation theorem. Moreover, similar symmetries also hold for the rate functions of the joint distributions of general observables and the logarithmic probability ratio.
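The entropy production rate referred to here has a standard closed form for a finite chain at stationarity, e_p = (1/2) Σ_{i,j} (π_i P_ij − π_j P_ji) log(π_i P_ij / (π_j P_ji)), which vanishes exactly when detailed balance holds. A direct numerical sketch with illustrative matrices:

```python
import numpy as np

def entropy_production_rate(P):
    """Entropy production rate of an irreducible finite Markov chain
    with transition matrix P at its stationary distribution pi:
      e_p = 0.5 * sum_{i,j} (pi_i P_ij - pi_j P_ji)
                            * log(pi_i P_ij / (pi_j P_ji)).
    It is zero iff the chain is reversible (detailed balance)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()                    # stationary distribution
    ep = 0.0
    n = len(pi)
    for i in range(n):
        for j in range(n):
            if i != j and P[i, j] > 0 and P[j, i] > 0:
                f, b = pi[i] * P[i, j], pi[j] * P[j, i]
                ep += 0.5 * (f - b) * np.log(f / b)
    return ep

# A reversible chain produces no entropy...
P_rev = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
# ...while a chain with a net probability cycle does.
P_cyc = np.array([[0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.9],
                  [0.9, 0.1, 0.0]])
```

The fluctuation theorem in the abstract concerns exactly this quantity: the log probability ratio of a trajectory to its time-reversal converges to e_p per unit time.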
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Quantification of heart rate variability by discrete nonstationary non-Markov stochastic processes
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Hänggi, Peter; Gafarov, Fail
2002-04-01
We develop the statistical theory of discrete nonstationary non-Markov random processes in complex systems. The objective of this paper is to find the chain of finite-difference non-Markov kinetic equations for time correlation functions (TCFs) in terms of nonstationary effects. The developed theory starts from a careful analysis of time correlation through the nonstationary dynamics of the vectors of initial and final states and the nonstationary normalized TCF. Using the projection operator technique we find the chain of finite-difference non-Markov kinetic equations for discrete nonstationary TCFs and for the set of nonstationary discrete memory functions (MFs); the latter contains supplementary information about the nonstationary properties of the complex system as a whole. Another relevant result of our theory is the construction of a set of dynamic parameters of nonstationarity, which carries information about the nonstationarity effects. The full set of dynamic, spectral and kinetic parameters and kinetic functions (TCFs, short MFs, statistical spectra of the non-Markovity parameter, and statistical spectra of the nonstationarity parameter) makes it possible to acquire in-depth information about the discreteness, non-Markov effects, long-range memory, and nonstationarity of the underlying processes. The developed theory is applied to analyze long-time (Holter) series of RR intervals from human ECGs. We had two groups of patients: healthy subjects and patients after myocardial infarction. In both groups we observed effects of fractality, standard and restricted self-organized criticality, and also a certain specific arrangement of spectral lines. The results demonstrate that the power spectra of the memory functions m_n(t) of all orders (n = 1, 2, ...) exhibit clearly expressed fractal features. We have found that the full sets of non-Markov, discreteness and nonstationarity parameters can serve as reliable and powerful means of diagnosis of the cardiovascular system states and can
PULSAR STATE SWITCHING FROM MARKOV TRANSITIONS AND STOCHASTIC RESONANCE
Cordes, J. M.
2013-09-20
Markov processes are shown to be consistent with metastable states seen in pulsar phenomena, including intensity nulling, pulse-shape mode changes, subpulse drift rates, spin-down rates, and X-ray emission, based on the typically broad and monotonic distributions of state lifetimes. Markovianity implies a nonlinear magnetospheric system in which state changes occur stochastically, corresponding to transitions between local minima in an effective potential. State durations (though not transition times) are thus largely decoupled from the characteristic timescales of various magnetospheric processes. Dyadic states are common but some objects show at least four states with some transitions forbidden. Another case is the long-term intermittent pulsar B1931+24 that has binary radio-emission and torque states with wide, but non-monotonic duration distributions. It also shows a quasi-period of 38 ± 5 days in a 13 yr time sequence, suggesting stochastic resonance in a Markov system with a forcing function that could be strictly periodic or quasi-periodic. Nonlinear phenomena are associated with time-dependent activity in the acceleration region near each magnetic polar cap. The polar-cap diode is altered by feedback from the outer magnetosphere and by return currents from the equatorial region outside the light cylinder that may also cause the neutron star to episodically charge and discharge. Orbital perturbations of a disk or current sheet provide a natural periodicity for the forcing function in the stochastic-resonance interpretation of B1931+24. Disk dynamics may introduce additional timescales in observed phenomena. Future work can test the Markov interpretation, identify which pulsar types have a propensity for state changes, and clarify the role of selection effects.
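The signature the author relies on, broad and monotonic state-lifetime distributions, is exactly what memoryless switching produces: a discrete Markov process exits its current state with a fixed per-step probability, so dwell times are geometric. A toy simulation with an invented switching probability (not fitted to any pulsar) makes this concrete:

```python
import random
from collections import Counter

def simulate_dwell_times(p_switch, n_steps, seed=42):
    """Simulate a two-state Markov process that leaves its current
    state with probability p_switch per step, recording how long each
    visit lasts. Memoryless exits make dwell times geometric: a broad,
    monotonically decreasing distribution with mean 1/p_switch."""
    rng = random.Random(seed)
    dwell, dwells = 1, []
    for _ in range(n_steps):
        if rng.random() < p_switch:   # state change: record the visit
            dwells.append(dwell)
            dwell = 1
        else:
            dwell += 1
    return dwells                     # the final unfinished visit is dropped

dwells = simulate_dwell_times(p_switch=0.1, n_steps=100_000)
hist = Counter(dwells)
mean_dwell = sum(dwells) / len(dwells)
```

Distributions that are instead sharply peaked or multimodal (as for B1931+24's torque states) are the cases that point beyond a plain Markov description, e.g., to the stochastic-resonance forcing discussed above.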
Shahrokh-Tehraninejad, Ensieh; Dashti, Minoo; Hossein-Rashidi, Batool; Azimi-Nekoo, Elham; Haghollahi, Fedyeh; Kalantari, Vahid
2016-01-01
Objective: Repeated implantation failure (RIF) is a condition in which embryo implantation in the endometrium repeatedly fails. Our aim was to evaluate the effect of local endometrial injury on embryo transfer results. Materials and methods: In this simple randomized clinical trial (RCT), a total of 120 patients were selected. The participants were less than 40 years old and had undergone at least two failed cycles of in vitro fertilization (IVF). Patients were divided randomly into a local endometrial injury (LEI) group and a control group (n = 60 in each group). The LEI group received four small endometrial injuries to the anterior, posterior, and lateral uterine walls on day 21 of their previous IVF cycle. The control group received no intervention. Results: The experimental and control patients were matched on baseline factors. Regarding the clinical pregnancy rate, there was no significant difference between the experimental and the control group. Conclusion: Local endometrial injury in a preceding cycle does not increase the clinical pregnancy rate in the subsequent FET cycle of patients with repeated implantation failure. PMID:28101111
Taplin, Mary-Ellen; Montgomery, Bruce; Logothetis, Christopher J.; Bubley, Glenn J.; Richie, Jerome P.; Dalkin, Bruce L.; Sanda, Martin G.; Davis, John W.; Loda, Massimo; True, Lawrence D.; Troncoso, Patricia; Ye, Huihui; Lis, Rosina T.; Marck, Brett T.; Matsumoto, Alvin M.; Balk, Steven P.; Mostaghel, Elahe A.; Penning, Trevor M.; Nelson, Peter S.; Xie, Wanling; Jiang, Zhenyang; Haqq, Christopher M.; Tamae, Daniel; Tran, NamPhuong; Peng, Weimin; Kheoh, Thian; Molina, Arturo; Kantoff, Philip W.
2014-01-01
Purpose Cure rates for localized high-risk prostate cancers (PCa) and some intermediate-risk PCa are frequently suboptimal with local therapy. Outcomes are improved by concomitant androgen-deprivation therapy (ADT) with radiation therapy, but not by concomitant ADT with surgery. Luteinizing hormone–releasing hormone agonist (LHRHa; leuprolide acetate) does not reduce serum androgens as effectively as abiraterone acetate (AA), a prodrug of abiraterone, a CYP17 inhibitor that lowers serum testosterone (< 1 ng/dL) and improves survival in metastatic PCa. The possibility that greater androgen suppression in patients with localized high-risk PCa will result in improved clinical outcomes makes paramount the reassessment of neoadjuvant ADT with more robust androgen suppression. Patients and Methods A neoadjuvant randomized phase II trial of LHRHa with AA was conducted in patients with localized high-risk PCa (N = 58). For the first 12 weeks, patients were randomly assigned to LHRHa versus LHRHa plus AA. After a research prostate biopsy, all patients received 12 additional weeks of LHRHa plus AA followed by prostatectomy. Results The levels of intraprostatic androgens from 12-week prostate biopsies, including the primary end point (dihydrotestosterone/testosterone), were significantly lower (dehydroepiandrosterone, Δ4-androstene-3,17-dione, dihydrotestosterone, all P < .001; testosterone, P < .05) with LHRHa plus AA compared with LHRHa alone. Prostatectomy pathologic staging demonstrated a low incidence of complete responses and minimal residual disease, with residual T3- or lymph node–positive disease in the majority. Conclusion LHRHa plus AA treatment suppresses tissue androgens more effectively than LHRHa alone. Intensive intratumoral androgen suppression with LHRHa plus AA before prostatectomy for localized high-risk PCa may reduce tumor burden. PMID:25311217
NASA Astrophysics Data System (ADS)
Sato, Haruo; Fehler, Mike; Saito, Tatsuhiko
2004-06-01
Wave trains in high-frequency seismograms of local earthquakes are mostly composed of incoherent waves that are scattered by distributed heterogeneities within the lithosphere. Their phase variations are very complex; however, their wave envelopes are systematic, frequency-dependent, and vary regionally. Stochastic approaches are superior to deterministic wave-theoretical approaches for modeling wave envelopes in random media. The time width of a wavelet is broadened with increasing travel distance mostly because of diffraction caused by the long-wavelength components of random velocity inhomogeneity. The Markov approximation for the parabolic wave equation is effective for the synthesis of envelopes for random media whose spectra are poor in short-wavelength components; however, we have to consider the contribution of large-angle nonisotropic scattering if the random media are rich in short-wavelength inhomogeneities. Multiple nonisotropic scattering can be reliably modeled as isotropic scattering by using an effective isotropic scattering coefficient given by the momentum transfer scattering coefficient, which is a reciprocal of the transport mean free path. It is mostly controlled by the short-wavelength spectra of random media. We propose a hybrid method for the synthesis of whole wave envelopes that uses the envelope derived from the Markov approximation as a propagator in the radiative transfer integral equation for isotropic scattering. The envelopes resulting from the hybrid method agree well with ensemble average envelopes calculated by averaging envelopes from individual finite difference simulations of the wave equation for a suite of random media.
Qualitative Analysis of Partially-Observable Markov Decision Processes
NASA Astrophysics Data System (ADS)
Chatterjee, Krishnendu; Doyen, Laurent; Henzinger, Thomas A.
We study observation-based strategies for partially-observable Markov decision processes (POMDPs) with parity objectives. An observation-based strategy relies on partial information about the history of a play, namely, on the past sequence of observations. We consider qualitative analysis problems: given a POMDP with a parity objective, decide whether there exists an observation-based strategy to achieve the objective with probability 1 (almost-sure winning), or with positive probability (positive winning). Our main results are twofold. First, we present a complete picture of the computational complexity of the qualitative analysis problem for POMDPs with parity objectives and its subclasses: safety, reachability, Büchi, and coBüchi objectives. We establish several upper and lower bounds that were not known in the literature. Second, we give optimal bounds (matching upper and lower bounds) for the memory required by pure and randomized observation-based strategies for each class of objectives.
Predicting the Kinetics of RNA Oligonucleotides Using Markov State Models.
Pinamonti, Giovanni; Zhao, Jianbo; Condon, David E; Paul, Fabian; Noé, Frank; Turner, Douglas H; Bussi, Giovanni
2017-02-14
Nowadays different experimental techniques, such as single molecule or relaxation experiments, can provide dynamic properties of biomolecular systems, but the amount of detail obtainable with these methods is often limited in terms of time or spatial resolution. Here we use state-of-the-art computational techniques, namely, atomistic molecular dynamics and Markov state models, to provide insight into the rapid dynamics of short RNA oligonucleotides, to elucidate the kinetics of stacking interactions. Analysis of multiple microsecond-long simulations indicates that the main relaxation modes of such molecules can consist of transitions between alternative folded states, rather than between random coils and native structures. After properly removing structures that are artificially stabilized by known inaccuracies of the current RNA AMBER force field, the kinetic properties predicted are consistent with the time scales of previously reported relaxation experiments.
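The Markov-state-model machinery used in the paper can be sketched in miniature. The toy estimator and synthetic two-state trajectory below stand in for production tools (e.g., PyEMMA) and real MD data; the numbers are invented:

```python
import numpy as np

def msm_implied_timescales(dtrajs, lag):
    """Minimal Markov-state-model estimation: count transitions at a
    given lag time, row-normalize into a transition matrix, and turn
    its eigenvalues into implied relaxation timescales
        t_i = -lag / ln(lambda_i)."""
    n = max(max(d) for d in dtrajs) + 1
    C = np.zeros((n, n))
    for d in dtrajs:
        for a, b in zip(d[:-lag], d[lag:]):
            C[a, b] += 1
    T = C / C.sum(axis=1, keepdims=True)
    evals = np.sort(np.real(np.linalg.eigvals(T)))[::-1]
    return [-lag / np.log(lam) for lam in evals[1:] if 0 < lam < 1]

# Toy discretized trajectory: two metastable states with rare hops,
# mimicking transitions between alternative folded states.
rng = np.random.default_rng(0)
traj, s = [], 0
for _ in range(50_000):
    if rng.random() < 0.01:     # hop between states with prob 0.01
        s = 1 - s
    traj.append(s)
ts = msm_implied_timescales([traj], lag=1)
```

The slowest implied timescale recovered here (about 1/(2 × 0.01) = 50 steps) plays the role of the relaxation times the paper compares against experiment.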
A Markov model for longitudinal studies with incomplete dichotomous outcomes.
Efthimiou, Orestis; Welton, Nicky; Samara, Myrto; Leucht, Stefan; Salanti, Georgia
2017-03-01
Missing outcome data constitute a serious threat to the validity and precision of inferences from randomized controlled trials. In this paper, we propose the use of a multistate Markov model for the analysis of incomplete individual patient data for a dichotomous outcome reported over a period of time. The model accounts for patients dropping out of the study and also for patients relapsing. The time of each observation is accounted for, and the model allows the estimation of time-dependent relative treatment effects. We apply our methods to data from a study comparing the effectiveness of 2 pharmacological treatments for schizophrenia. The model jointly estimates the relative efficacy and the dropout rate and also allows for a wide range of clinically interesting inferences to be made. Assumptions about the missingness mechanism and the unobserved outcomes of patients dropping out can be incorporated into the analysis. The presented method constitutes a viable candidate for analyzing longitudinal, incomplete binary data.
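The forward machinery of such a multistate model is simple to sketch. The three states and transition probabilities below are invented for illustration; the real model is fitted to individual patient data and encodes missingness assumptions, which this sketch omits:

```python
import numpy as np

# Hypothetical 3-state model for a longitudinal trial: response,
# no-response, and an absorbing dropout state. Numbers are illustrative.
P = np.array([[0.80, 0.10, 0.10],   # responders may relapse or drop out
              [0.30, 0.55, 0.15],   # non-responders may respond later
              [0.00, 0.00, 1.00]])  # dropout is absorbing

dist = np.array([0.0, 1.0, 0.0])    # all patients start unresponsive
for week in range(1, 9):            # propagate over 8 visits
    dist = dist @ P

# Probability of being a responder at week 8, and the cumulative
# dropout proportion the model explicitly accounts for:
p_resp, p_drop = dist[0], dist[2]
```

Because dropout is a state of the chain rather than discarded data, the fitted transition probabilities yield time-dependent treatment effects and dropout rates jointly, which is the point of the approach.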
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
2014-01-01
Background In Europe, gastric cancer is still diagnosed at an advanced stage (serosal and/or lymph node involvement). Despite curative management combining perioperative systemic chemotherapy and gastrectomy with D1-D2 lymph node dissection, 5-year survival rates of T3 and/or N+ patients remain under 30%. More than 50% of recurrences are peritoneal and/or locoregional. Adjuvant hyperthermic intraperitoneal chemotherapy, which eliminates free cancer cells released into the peritoneal cavity during gastrectomy and prevents peritoneal carcinomatosis recurrence, has been extensively evaluated in several randomized trials conducted in Asia. Two meta-analyses reported that adjuvant hyperthermic intraperitoneal chemotherapy significantly reduces peritoneal recurrences and significantly improves overall survival. As was previously done for the evaluation of the extent of lymph node dissection, it seems very important to validate in European or Caucasian patients the results observed in trials performed in Asia. Methods/design GASTRICHIP is a prospective, open, randomized, multicenter phase III clinical study with two arms that aims to evaluate the effects of hyperthermic intraperitoneal chemotherapy with oxaliplatin on patients with gastric cancer involving the serosa and/or lymph nodes and/or with positive cytology at peritoneal washing, treated with perioperative systemic chemotherapy and D1-D2 curative gastrectomy. Intraoperatively, at the end of curative surgery, patients will be randomized, written consent for participation having been obtained preoperatively. The primary endpoint will be overall survival from the date of surgery to the date of death or to the end of follow-up (5 years). Secondary endpoints will be 3- and 5-year recurrence-free survival, site of recurrence, morbidity, and quality of life. An ancillary study will compare the incidence of positive peritoneal cytology pre- and post-gastrectomy in the two arms of the study
Frank, Regine; Lubatsch, Andreas
2011-07-15
We present a detailed discussion of scalar wave propagation and light intensity transport in three-dimensional random dielectric media with optical gain. The intrinsic length and time scales of such amplifying systems are studied and comprehensively discussed as well as the threshold characteristics of single- and two-particle propagators. Our semianalytical theory is based on a self-consistent Cooperon resummation, representing the repeated self-interference, and incorporates as well optical gain and absorption, modeled in a semianalytical way by a finite imaginary part of the dielectric function. Energy conservation in terms of a generalized Ward identity is taken into account.
Semi-Markov Unreliability Range Evaluator (SURE)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1989-01-01
Analysis tool for reconfigurable, fault-tolerant systems, SURE provides efficient way to calculate accurate upper and lower bounds for death state probabilities for large class of semi-Markov models. Calculated bounds close enough for use in reliability studies of ultrareliable computer systems. Written in PASCAL for interactive execution and runs on DEC VAX computer under VMS.
Evaluation of Usability Utilizing Markov Models
ERIC Educational Resources Information Center
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
Brown, Joe; Sobsey, Mark D; Loomis, Dana
2008-09-01
A randomized, controlled intervention trial of two household-scale drinking water filters was conducted in a rural village in Cambodia. After collecting four weeks of baseline data on household water quality, diarrheal disease, and other data related to water use and handling practices, households were randomly assigned to one of three groups of 60 households: those receiving a ceramic water purifier (CWP), those receiving a second filter employing an iron-rich ceramic (CWP-Fe), and a control group receiving no intervention. Households were followed for 18 weeks post-baseline with biweekly follow-up. Households using either filter reported significantly less diarrheal disease during the study compared with a control group of households without filters as indicated by longitudinal prevalence ratios CWP: 0.51 (95% confidence interval [CI]: 0.41-0.63); CWP-Fe: 0.58 (95% CI: 0.47-0.71), an effect that was observed in all age groups and both sexes after controlling for clustering within households and within individuals over time.
Phase-Type Approximations for Wear Processes in A Semi-Markov Environment
2004-03-01
identically distributed exponential random variables, is equivalent to the absorption time of an underlying k-state Markov process. As noted by Perros ... the Coxian distribution is that it can exactly represent any distribution having a rational Laplace transform [23]. Moreover, Perros [23] gives the ... Performance Evaluation (TOOLS 2003), 200-217. 23. Perros, H. (1994). Queueing Networks with Blocking. Oxford University Press, New York. 24. Ro, C.W
Liu, An-An; Li, Kang; Kanade, Takeo
2012-02-01
We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained optimization based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences in the input image sequence that contains mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier learned from human-annotated mitotic and nonmitotic sequences to classify each candidate subsequence as a mitosis or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually-segmented mitotic sequences is utilized to reinforce the mitosis classification results, and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh as well as several other competing methods in very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation for the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal location error of 0.73 ± 1.29 frames was achieved for locating daughter cell birth events.
Managutti, Anil; Prakasam, Michael; Puthanakar, Nagraj; Menat, Shailesh; Shah, Disha; Patel, Harsh
2015-01-01
Background: Local anesthetic agents are commonly used in dentistry to ensure painless procedures during surgical interventions in bone and soft tissue. Many local anesthetic agents are available, with a wide selection of vasoconstrictive agents that improve clinical efficacy and the duration of local anesthesia. Most commonly, lignocaine with adrenaline is used in various concentrations. Systemically, adrenaline-like drugs can cause a number of cardiovascular disturbances; while most are short-lived, permanent injury or even death may follow drug-induced ventricular fibrillation, myocardial infarction, or cerebrovascular accidents. This study compared the efficacy and cardiovascular effects of 2% lignocaine with two different adrenaline concentrations. Materials and Methods: Forty patients underwent extractions of bilateral mandibular teeth using 2% lignocaine with two different adrenaline concentrations: 1:80,000 on one side and 1:200,000 on the other. Results: There was no significant difference in efficacy or duration between the two concentrations. However, 2% lignocaine with 1:80,000 adrenaline significantly increased heart rate and blood pressure, especially systolic, compared with 1:200,000. Conclusion: Although 2% lignocaine with 1:80,000 adrenaline is widely used in India, the 1:200,000 adrenaline concentration has little effect on cardiovascular parameters, so 2% lignocaine with 1:200,000 adrenaline is recommended for cardiac patients. PMID:25878474
O'Toole, Robert V; Joshi, Manjari; Carlini, Anthony R; Murray, Clinton K; Allen, Lauren E; Scharfstein, Daniel O; Gary, Joshua L; Bosse, Michael J; Castillo, Renan C
2017-04-01
A number of clinical studies in the spine literature suggest that the use of local vancomycin powder may substantially reduce surgical site infections (SSIs). These studies are primarily retrospective and observational and few focus on orthopaedic trauma patients. This study is a phase III, prospective, randomized, clinical trial to assess the efficacy of locally administered vancomycin powder in the prevention of SSI after fracture surgery. The primary goal of the VANCO Study is to compare the proportion of deep SSI 6 months after fracture fixation surgery. A secondary objective is to compare species and antibacterial susceptibilities among study patients who develop SSI. An additional objective is to build and validate a risk prediction model for the development of SSI. The study population consists of patients aged 18-80 years with tibial plateau or pilon (tibial plafond) fractures, at higher risk of infection, and definitively treated with plate and screw fixation. Participants are block randomized (within center) in a 1:1 ratio to either treatment group (local vancomycin powder up to a maximum dose of 1000 mg, placed immediately before wound closure) or control group (standard of care) for each study injury location, and return to the clinic for evaluations at 2 weeks, 3 months, and 6 months after fixation. The targeted sample size for the study is 500 fractures per study arm. This study should provide important information regarding the use of local vancomycin powder during the definitive treatment of lower extremity fractures and has the potential to significantly reduce the incidence of infection after orthopaedic trauma.
Yu, Elaine; Monaco, James P; Tomaszewski, John; Shih, Natalie; Feldman, Michael; Madabhushi, Anant
2011-01-01
In this paper we present a system for detecting regions of carcinoma of the prostate (CaP) in H&E stained radical prostatectomy specimens using the color fractal dimension. Color textural information is known to be a valuable characteristic for distinguishing CaP from benign tissue. In addition to color information, we know that cancer tends to form contiguous regions. Our system leverages the color staining information of histology as well as spatial dependencies. The color and textural information is first captured using the color fractal dimension. To incorporate spatial dependencies, we combine the probability map constructed via the color fractal dimension with a novel Markov prior called the Probabilistic Pairwise Markov Model (PPMM). To demonstrate the capability of this CaP detection system, we applied the algorithm to 27 radical prostatectomy specimens from 10 patients. A per-pixel evaluation was conducted against ground truth provided by an expert pathologist; using only the color fractal feature, the system yielded an area under the receiver operating characteristic curve (AUC) of 0.790. In conjunction with the Markov prior, the resulting color fractal dimension + Markov random field (MRF) classifier yielded an AUC of 0.831.
Inferring species interactions from co-occurrence data with Markov networks.
Harris, David J
2016-12-01
Inferring species interactions from co-occurrence data is one of the most controversial tasks in community ecology. One difficulty is that a single pairwise interaction can ripple through an ecological network and produce surprising indirect consequences. For example, the negative correlation between two competing species can be reversed in the presence of a third species that outcompetes both of them. Here, I apply models from statistical physics, called Markov networks or Markov random fields, that can predict the direct and indirect consequences of any possible species interaction matrix. Interactions in these models can be estimated from observed co-occurrence rates via maximum likelihood, controlling for indirect effects. Using simulated landscapes with known interactions, I evaluated Markov networks and six existing approaches. Markov networks consistently outperformed the other methods, correctly isolating direct interactions between species pairs even when indirect interactions or abiotic factors largely overpowered them. Two computationally efficient approximations, which controlled for indirect effects with partial correlations or generalized linear models, also performed well. Null models showed no evidence of being able to control for indirect effects, and reliably yielded incorrect inferences when such effects were present.
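The indirect-effect example in the abstract (correlation between two species induced purely by a shared third) can be checked exactly on a toy Markov network by enumerating all presence/absence states; the interaction strengths below are invented for illustration.

```python
import itertools
import numpy as np

# Toy 3-species Markov network (Ising-style): no direct interaction between
# species 0 and 1 (beta[0,1] = 0), but both are suppressed by species 2.
alpha = np.zeros(3)                      # per-species intercepts
beta = np.zeros((3, 3))
beta[0, 2] = beta[2, 0] = -2.0
beta[1, 2] = beta[2, 1] = -2.0

# Exact distribution over all 2^3 presence/absence assemblages.
states = np.array(list(itertools.product([0, 1], repeat=3)), float)
logp = states @ alpha + 0.5 * np.einsum('si,ij,sj->s', states, beta, states)
p = np.exp(logp)
p /= p.sum()

# Marginal covariance between species 0 and 1 is induced purely indirectly:
# sites where the dominant species 2 occurs tend to lack both 0 and 1.
m = states.T @ p                          # occurrence probabilities
cov01 = (states[:, 0] * states[:, 1]) @ p - m[0] * m[1]
print(cov01)  # positive despite beta[0, 1] == 0
```

Fitting such a model by maximum likelihood, as the paper does, amounts to choosing alpha and beta so the model's moments match the observed co-occurrence rates.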
Paller, CJ; Ye, X; Wozniak, PJ; Gillespie, BK; Sieber, PR; Greengold, RH; Stockton, BR; Hertzman, BL; Efros, MD; Roper, RP; Liker, HR; Carducci, MA
2012-01-01
BACKGROUND: Pomegranate juice has been associated with PSA doubling time (PSADT) elongation in a single-arm phase II trial. This study assesses the biological activity of two doses of pomegranate extract (POMx) in men with recurrent prostate cancer, using changes in PSADT as the primary outcome. METHODS: This randomized, multi-center, double-blind phase II, dose-exploring trial randomized men with a rising PSA and without metastases to receive 1 or 3 g of POMx, stratified by baseline PSADT and Gleason score. In total, 104 patients were enrolled and treated for up to 18 months. The intent-to-treat (ITT) population was 96% white, with median age 74.5 years and median Gleason score 7. The study was designed to detect a 6-month on-study increase in PSADT from baseline in each arm. RESULTS: Overall, median PSADT in the ITT population lengthened from 11.9 months at baseline to 18.5 months after treatment (P < 0.001). PSADT lengthened from 11.9 to 18.8 months in the low-dose group and from 12.2 to 17.5 months in the high-dose group, with no significant difference between dose groups (P = 0.554). PSADT increases >100% of baseline were observed in 43% of patients. Declining PSA levels were observed in 13 patients (13%). In all, 42% of patients discontinued treatment before meeting the protocol definition of PSA progression, or 18 months, primarily due to a rising PSA. No significant changes occurred in testosterone. Although no clinically significant toxicities were seen, diarrhea was seen in 1.9% and 13.5% of patients in the 1- and 3-g dose groups, respectively. CONCLUSIONS: POMx treatment was associated with ≥6-month increases in PSADT in both treatment arms without adverse effects. The significance of this on-study slowing of PSADT remains unclear, reinforcing the need for placebo-controlled studies in this patient population. PMID:22689129
de Freiras, Guilherme Camponogara; Pozzobon, Roselaine Terezinha; Blaya, Diego Segatto; Moreira, Carlos Heitor
2015-01-01
The aim of the present study was to compare the effects of a topical anesthetic to a placebo on pain perception during administration of local anesthesia in 2 regions of the oral cavity. A split-mouth, double-blind, randomized clinical trial design was used. Thirty-eight subjects, ages 18–50 years, American Society of Anesthesiologists I and II, received 4 anesthetic injections each in regions corresponding to the posterior superior alveolar nerve (PSA) and greater palatine nerve (GPN), totaling 152 sites analyzed. The side of the mouth where the topical anesthetic (benzocaine 20%) or the placebo was to be applied was chosen by a flip of a coin. The needle used was 27G, and the anesthetic used for administration of local anesthesia was 2% lidocaine with 1:100,000 epinephrine. After receiving the administration of local anesthesia, each patient reported pain perception on a visual analog scale (VAS) of 100-mm length. The results showed that the topical anesthetic and the placebo had similar effects: there was no statistically significant VAS difference between the PSA and the GPN pain ratings. A higher value on the VAS for the anesthesia of the GPN, relative to the PSA, was observed for both groups. Regarding gender, male patients had higher values on the VAS compared with female patients, but these differences were not meaningful. The topical anesthetic and the placebo had similar effects on pain perception for injection of local anesthesia for the PSA and GPN. PMID:26061572
dos Santos-Paul, Marcela Alves; Neves, Itamara Lucia Itagiba; Neves, Ricardo Simões; Ramires, José Antonio Franchini
2015-01-01
OBJECTIVE: To investigate the variations in blood glucose levels, hemodynamic effects, and patient anxiety scores during tooth extraction in patients with type 2 diabetes mellitus (T2DM) and coronary disease under local anesthesia with 2% lidocaine with or without epinephrine. STUDY DESIGN: This is a prospective randomized study of 70 patients with T2DM and coronary disease who underwent oral surgery. The study was double blind with respect to the glycemia measurements. Blood glucose levels were continuously monitored for 24 hours using the MiniMed Continuous Glucose Monitoring System. Patients were randomized into two groups: 35 patients received 5.4 mL of 2% lidocaine, and 35 patients received 5.4 mL of 2% lidocaine with 1:100,000 epinephrine. Hemodynamic parameters (blood pressure and heart rate) and anxiety levels were also evaluated. RESULTS: There was no difference in blood glucose levels between the groups at each time point evaluated. Surprisingly, both groups demonstrated a significant decrease in blood glucose levels over time. The groups showed no significant differences in hemodynamic and anxiety status parameters. CONCLUSION: The administration of 5.4 mL of 2% lidocaine with epinephrine neither caused hyperglycemia nor had any significant impact on hemodynamic or anxiety parameters. However, lower blood glucose levels were observed. This is the first report using continuous blood glucose monitoring to show the benefits and lack of side effects of local anesthesia with epinephrine in patients with T2DM and coronary disease. PMID:26017649
Mason, Malcolm D.; Parulekar, Wendy R.; Sydes, Matthew R.; Brundage, Michael; Kirkbride, Peter; Gospodarowicz, Mary; Cowan, Richard; Kostashuk, Edmund C.; Anderson, John; Swanson, Gregory; Parmar, Mahesh K.B.; Hayter, Charles; Jovic, Gordana; Hiltz, Andrea; Hetherington, John; Sathya, Jinka; Barber, James B.P.; McKenzie, Michael; El-Sharkawi, Salah; Souhami, Luis; Hardman, P.D. John; Chen, Bingshu E.; Warde, Padraig
2015-01-01
Purpose We have previously reported that radiotherapy (RT) added to androgen-deprivation therapy (ADT) improves survival in men with locally advanced prostate cancer. Here, we report the prespecified final analysis of this randomized trial. Patients and Methods NCIC Clinical Trials Group PR.3/Medical Research Council PR07/Intergroup T94-0110 was a randomized controlled trial of patients with locally advanced prostate cancer. Patients with T3-4, N0/Nx, M0 prostate cancer or T1-2 disease with either prostate-specific antigen (PSA) of more than 40 μg/L or PSA of 20 to 40 μg/L plus Gleason score of 8 to 10 were randomly assigned to lifelong ADT alone or to ADT+RT. The RT dose was 64 to 69 Gy in 35 to 39 fractions to the prostate and pelvis or prostate alone. Overall survival was compared using a log-rank test stratified for prespecified variables. Results One thousand two hundred five patients were randomly assigned between 1995 and 2005, 602 to ADT alone and 603 to ADT+RT. At a median follow-up time of 8 years, 465 patients had died, including 199 patients from prostate cancer. Overall survival was significantly improved in the patients allocated to ADT+RT (hazard ratio [HR], 0.70; 95% CI, 0.57 to 0.85; P < .001). Deaths from prostate cancer were significantly reduced by the addition of RT to ADT (HR, 0.46; 95% CI, 0.34 to 0.61; P < .001). Patients on ADT+RT reported a higher frequency of adverse events related to bowel toxicity, but only two of 589 patients had grade 3 or greater diarrhea at 24 months after RT. Conclusion This analysis demonstrates that the previously reported benefit in survival is maintained at a median follow-up of 8 years and firmly establishes the role of RT in the treatment of men with locally advanced prostate cancer. PMID:25691677
NASA Astrophysics Data System (ADS)
Kawaguchi, Genta; Maesato, Mitsuhiko; Komatsu, Tokutaro; Imakubo, Tatsuro; Kitagawa, Hiroshi
2016-02-01
We present the results of high-pressure transport measurements on the anion-mixed molecular conductors (DIETSe)2MBr2Cl2 [DIETSe = diiodo(ethylenedithio)tetraselenafulvalene; M = Fe, Ga]. They undergo a metal-insulator (M-I) transition below 9 K at ambient pressure, which is suppressed by applying pressure, indicating a spin-density-wave (SDW) transition caused by a nesting instability of the quasi-one-dimensional (Q1D) Fermi surface, as observed in the parent compounds (DIETSe)2MCl4 (M = Fe, Ga). In the metallic state, the existence of the Q1D Fermi surface is confirmed by observation of the Lebed resonance. The critical pressures of the SDW, Pc, of the MBr2Cl2 (M = Fe, Ga) salts are significantly lower than those of the MCl4 (M = Fe, Ga) salts, suggesting chemical pressure effects. Above Pc, field-induced SDW transitions appear, as evidenced by kink structures in the magnetoresistance (MR) in both salts. The FeBr2Cl2 salt also shows antiferromagnetic (AF) ordering of d spins at 4 K, below which significant spin-charge coupling is observed. A large positive MR change of up to 150% appears above the spin-flop field at high pressure. At low pressure, in particular below Pc, a dip or kink structure appears in the MR at the spin-flop field, which shows unconventionally large hysteresis at low temperature (T < 1 K). The hysteresis region clearly decreases with increasing pressure towards Pc, strongly indicating that the coexisting SDW plays an important role in the enhancement of magnetic hysteresis, besides the random exchange interaction.
A critical appraisal of Markov state models
NASA Astrophysics Data System (ADS)
Schütte, Ch.; Sarich, M.
2015-09-01
Markov State Modelling as a concept for a coarse grained description of the essential kinetics of a molecular system in equilibrium has gained a lot of attention recently. The last 10 years have seen an ever increasing publication activity on how to construct Markov State Models (MSMs) for very different molecular systems ranging from peptides to proteins, from RNA to DNA, and from molecular sensors to molecular aggregation. Simultaneously the accompanying theory behind MSM building and approximation quality has been developed well beyond the concepts and ideas used in practical applications. This article reviews the main theoretical results, provides links to crucial new developments, outlines the full power of MSM building today, and discusses the essential limitations still to overcome.
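A minimal sketch of the MSM construction discussed above: count transitions in a discretized trajectory at a chosen lag time, row-normalize to obtain the transition matrix, and read off implied timescales t_i = -tau / ln(lambda_i) from its non-unit eigenvalues. The synthetic two-state trajectory below is an invented stand-in for a real discretized molecular dynamics trajectory.

```python
import numpy as np

def msm_timescales(dtraj, n_states, lag):
    """Estimate an MSM transition matrix at lag `lag` and its implied timescales."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        C[a, b] += 1.0                       # transition counts at the lag time
    T = C / C.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return T, -lag / np.log(evals[1:])       # implied timescales (frames)

# Synthetic metastable trajectory: two states, rare switches (prob. 0.01/frame),
# so the true slow implied timescale is about -1/ln(0.98) ~ 49.5 frames.
rng = np.random.default_rng(0)
dtraj, s = [], 0
for _ in range(20000):
    if rng.random() < 0.01:
        s = 1 - s
    dtraj.append(s)

T, ts = msm_timescales(np.array(dtraj), n_states=2, lag=1)
print(T, ts)
```

Checking that implied timescales are constant across different lag times is the usual validation step for an MSM.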
Estimating Neuronal Ageing with Hidden Markov Models
NASA Astrophysics Data System (ADS)
Wang, Bing; Pham, Tuan D.
2011-06-01
Neuronal degeneration is widely observed in normal ageing, while neurodegenerative diseases such as Alzheimer's disease cause neuronal degeneration in a faster way, which can be regarded as accelerated ageing. Early intervention in such diseases could benefit subjects with the potential for positive clinical outcomes; therefore, early detection of disease-related brain structural alteration is required. In this paper, we propose a computational approach for modelling MRI-based structural alteration with ageing using a hidden Markov model. The proposed hidden Markov model based brain structural model encodes intracortical tissue/fluid distribution using discrete wavelet transformation and vector quantization. Further, it captures gray matter volume loss, which is capable of reflecting subtle intracortical changes with ageing. Experiments were carried out on healthy subjects to validate its accuracy and robustness. Results have shown its ability to predict brain age with a prediction error of 1.98 years without training data, which is better than other age prediction methods.
Bagherian, Ali; Sheikhfathollahi, Mahmood
2016-01-01
Background: Topical anesthesia has been widely advocated as an important component of atraumatic administration of intraoral local anesthesia. The aim of this study was to use direct observation of children's behavioral pain reactions during local anesthetic injection using cotton-roll vibration method compared with routine topical anesthesia. Materials and Methods: Forty-eight children participated in this randomized controlled clinical trial. They received two separate inferior alveolar nerve block or primary maxillary molar infiltration injections on contralateral sides of the jaws by both cotton-roll vibration (a combination of topical anesthesia gel, cotton roll, and vibration for physical distraction) and control (routine topical anesthesia) methods. Behavioral pain reactions of children were measured according to the author-developed face, head, foot, hand, trunk, and cry (FHFHTC) scale, resulting in total scores between 0 and 18. Results: The total scores on the FHFHTC scale ranged between 0-5 and 0-10 in the cotton-roll vibration and control methods, respectively. The mean ± standard deviation values of total scores on FHFHTC scale were lower in the cotton-roll vibration method (1.21 ± 1.38) than in control method (2.44 ± 2.18), and this was statistically significant (P < 0.001). Conclusion: It may be concluded that the cotton-roll vibration method can be more helpful than the routine topical anesthesia in reducing behavioral pain reactions in children during local anesthesia administration. PMID:27274349
The cutoff phenomenon in finite Markov chains.
Diaconis, P
1996-01-01
Natural mixing processes modeled by Markov chains often show a sharp cutoff in their convergence to long-time behavior. This paper presents problems where the cutoff can be proved (card shuffling, the Ehrenfests' urn). It shows that chains with polynomial growth (drunkard's walk) do not show cutoffs. The best general understanding of such cutoffs (high multiplicity of second eigenvalues due to symmetry) is explored. Examples are given where the symmetry is broken but the cutoff phenomenon persists. PMID:11607633
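The Ehrenfest urn cutoff mentioned above can be observed numerically: for the lazy version of the chain, total-variation distance to the Binomial(n, 1/2) stationary law stays near 1 for a long while and then drops sharply, on the order of n log n steps. A small sketch:

```python
import numpy as np
from math import comb

# Lazy Ehrenfest urn on n balls: hold with prob. 1/2, otherwise move a
# uniformly chosen ball to the other urn. Started from all balls in one urn.
n = 100
P = np.zeros((n + 1, n + 1))
for i in range(n + 1):
    P[i, i] = 0.5
    if i > 0:
        P[i, i - 1] = i / (2 * n)
    if i < n:
        P[i, i + 1] = (n - i) / (2 * n)

pi = np.array([comb(n, k) for k in range(n + 1)], float) / 2 ** n  # Binomial(n, 1/2)
mu = np.zeros(n + 1)
mu[0] = 1.0

tv = []                                # total-variation distance at each step
for t in range(1000):
    tv.append(0.5 * np.abs(mu - pi).sum())
    mu = mu @ P

# Distance stays near 1, then falls off sharply: the cutoff phenomenon.
print(tv[100], tv[300], tv[900])
```

Plotting `tv` against t for several values of n makes the sharpening of the transition visible, which is the signature of a cutoff.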
Numerical methods in Markov chain modeling
NASA Technical Reports Server (NTRS)
Philippe, Bernard; Saad, Youcef; Stewart, William J.
1989-01-01
Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
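Two of the formulations mentioned above (an eigenvector for the known eigenvalue 1, and a homogeneous singular system) can be illustrated on a dense toy chain. The 3-state matrix is invented; production codes for large sparse chains would use the Krylov subspace methods the paper describes.

```python
import numpy as np

# An irreducible stochastic matrix P; we want pi with pi P = pi, sum(pi) = 1.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# 1) Power iteration on the left eigenvector for eigenvalue 1.
pi = np.full(3, 1 / 3)
for _ in range(2000):
    pi = pi @ P
pi /= pi.sum()

# 2) Direct solve of the singular system (P^T - I) pi = 0: replace one
#    redundant equation with the normalization constraint sum(pi) = 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0
pi_direct = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

print(pi, pi_direct)
```

Both routes agree; for this chain the stationary vector is (8/15, 4/15, 3/15).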
On Measures Driven by Markov Chains
NASA Astrophysics Data System (ADS)
Heurteaux, Yanick; Stos, Andrzej
2014-12-01
We study measures on which are driven by a finite Markov chain and which generalize the famous Bernoulli products. We propose a hands-on approach to determine the structure function and to prove that the multifractal formalism is satisfied. Formulas for the dimension of the measures and for the Hausdorff dimension of their supports are also provided. Finally, we identify the measures with maximal dimension.
Hidden Markov Model Analysis of Multichromophore Photobleaching
Messina, Troy C.; Kim, Hiyun; Giurleo, Jason T.; Talaga, David S.
2007-01-01
The interpretation of single-molecule measurements is greatly complicated by the presence of multiple fluorescent labels. However, many molecular systems of interest consist of multiple interacting components. We investigate this issue using multiply labeled dextran polymers that we intentionally photobleach to the background on a single-molecule basis. Hidden Markov models allow for unsupervised analysis of the data to determine the number of fluorescent subunits involved in the fluorescence intermittency of the 6-carboxy-tetramethylrhodamine labels by counting the discrete steps in fluorescence intensity. The Bayes information criterion allows us to distinguish between hidden Markov models that differ by the number of states, that is, the number of fluorescent molecules. We determine information-theoretical limits and show via Monte Carlo simulations that the hidden Markov model analysis approaches these theoretical limits. This technique has resolving power of one fluorescing unit up to as many as 30 fluorescent dyes with the appropriate choice of dye and adequate detection capability. We discuss the general utility of this method for determining aggregation-state distributions as could appear in many biologically important systems and its adaptability to general photometric experiments. PMID:16913765
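The counting-by-model-selection idea can be illustrated without a full hidden Markov model: score each candidate emitter count N by a Gaussian likelihood at the allowed intensity levels 0..N and pick the minimum BIC. This is a deliberate simplification of the paper's HMM treatment, and the synthetic trace below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy photobleaching trace: 3 unit-brightness fluorophores bleaching one by
# one, with Gaussian detection noise added.
true_levels = np.repeat([3, 2, 1, 0], 250)
trace = true_levels + rng.normal(0.0, 0.1, true_levels.size)

def bic(trace, n_emitters):
    """BIC for a model with intensity levels 0..n_emitters (independent
    Gaussian emissions; a simplification of the full HMM)."""
    levels = np.arange(n_emitters + 1)
    # assign each sample to its nearest allowed intensity level
    resid = trace - levels[np.argmin(np.abs(trace[:, None] - levels), axis=1)]
    sigma = max(resid.std(), 1e-6)
    loglik = np.sum(-0.5 * (resid / sigma) ** 2
                    - np.log(sigma * np.sqrt(2 * np.pi)))
    n_params = n_emitters + 1          # roughly one parameter per level
    return -2 * loglik + n_params * np.log(trace.size)

scores = {n: bic(trace, n) for n in range(1, 7)}
best = min(scores, key=scores.get)
print(best)  # 3 for this synthetic trace
```

Too few levels inflate the residual variance; too many add penalty without improving the fit, so BIC recovers the true fluorophore count, mirroring the model-order selection between HMMs in the paper.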
Bayesian inversion of seismic attributes for geological facies using a Hidden Markov Model
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2017-02-01
Markov chain Monte-Carlo (McMC) sampling generates correlated random samples such that their distribution would converge to the true distribution only as the number of samples tends to infinity. In practice, McMC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. Although McMC has been used for decades as the algorithm of choice for inference in complex probability distributions, there is a need to seek alternative approaches, particularly in high dimensional problems. Walker & Curtis (2014) developed a method for Bayesian inversion of 2-D spatial data using an exact sampling alternative to McMC which always draws independent samples of the target distribution. Their method thus obviates the need for convergence and removes the concomitant bias exhibited by finite sample sets. Their algorithm is nevertheless computationally intensive and requires large memory. We propose a more efficient method for Bayesian inversion of categorical variables, such as geological facies that requires no sampling at all. The method is based on a 2-D Hidden Markov Model (2D-HMM) over a grid of cells where observations represent localized data constraining each cell. The data in our example application are seismic attributes such as P- and S-wave impedances and rock density; our categorical variables are the hidden states and represent the geological rock types in each cell-facies of distinct subsets of lithology and fluid combinations such as shale, brine-sand and gas-sand. The observations at each location are assumed to be generated from a random function of the hidden state (facies) at that location, and to be distributed according to a certain probability distribution that is independent of hidden states at other locations - an assumption referred to as `localized likelihoods'. The hidden state (facies) at a location cannot be determined solely by the observation at that
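A one-dimensional analogue of the localized-likelihood HMM inversion can be written down directly: the forward-backward recursions give the exact per-cell facies posterior with no sampling at all, which is the spirit of the proposed method. All numbers below (transition matrix, impedance means, observations) are invented for illustration.

```python
import numpy as np

# Hidden facies along a column of cells (0 = shale, 1 = sand), with localized
# Gaussian likelihoods for a single seismic attribute per cell.
T = np.array([[0.9, 0.1],      # facies persistence from cell to cell
              [0.1, 0.9]])
prior = np.array([0.5, 0.5])
means = np.array([0.0, 2.0])   # attribute means for the two facies
sigma = 1.0

obs = np.array([0.1, -0.2, 0.3, 1.8, 2.2, 1.9])
like = np.exp(-0.5 * ((obs[:, None] - means) / sigma) ** 2)

# forward pass (normalized each step for numerical stability)
alpha = np.zeros((len(obs), 2))
alpha[0] = prior * like[0]
for t in range(1, len(obs)):
    alpha[t] = (alpha[t - 1] @ T) * like[t]
    alpha[t] /= alpha[t].sum()

# backward pass
beta = np.ones((len(obs), 2))
for t in range(len(obs) - 2, -1, -1):
    beta[t] = T @ (like[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

# exact marginal posterior of the facies in each cell
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
print(posterior.argmax(axis=1))  # [0 0 0 1 1 1]
```

The 2D-HMM of the paper generalizes these recursions from a chain of cells to a grid, keeping the same no-sampling, exact-inference character.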
Bayesian Inversion of Seismic Attributes for Geological Facies using a Hidden Markov Model
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2016-11-01
Markov chain Monte-Carlo (McMC) sampling generates correlated random samples such that their distribution would converge to the true distribution only as the number of samples tends to infinity. In practice, McMC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. Although McMC has been used for decades as the algorithm of choice for inference in complex probability distributions, there is a need to seek alternative approaches, particularly in high dimensional problems. Walker & Curtis (2014) developed a method for Bayesian inversion of two-dimensional spatial data using an exact sampling alternative to McMC which always draws independent samples of the target distribution. Their method thus obviates the need for convergence and removes the concomitant bias exhibited by finite sample sets. Their algorithm is nevertheless computationally intensive and requires large memory. We propose a more efficient method for Bayesian inversion of categorical variables, such as geological facies, that requires no sampling at all. The method is based on a 2D Hidden Markov Model (2D-HMM) over a grid of cells where observations represent localized data constraining each cell. The data in our example application are seismic attributes such as P- and S-wave impedances and rock density; our categorical variables are the hidden states and represent the geological rock types in each cell - facies of distinct subsets of lithology and fluid combinations such as shale, brine-sand and gas-sand. The observations at each location are assumed to be generated from a random function of the hidden state (facies) at that location, and to be distributed according to a certain probability distribution that is independent of hidden states at other locations - an assumption referred to as localized likelihoods. The hidden state (facies) at a location cannot be determined solely by the
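The per-cell Bayesian update implied by the localized-likelihoods assumption can be sketched as follows. This is a minimal illustration with invented Gaussian attribute statistics and facies priors, not the authors' 2D-HMM, which additionally couples neighbouring cells; all numbers are hypothetical.

```python
import numpy as np

def local_posterior(obs, means, covs, prior):
    """Per-cell Bayesian update under 'localized likelihoods':
    p(facies | d) is proportional to p(d | facies) * p(facies),
    with Gaussian likelihoods for the seismic attribute vector.
    obs: (n_attr,) attributes at one cell; means: (n_facies, n_attr);
    covs: (n_facies, n_attr, n_attr); prior: (n_facies,)."""
    n_facies = len(prior)
    log_post = np.empty(n_facies)
    for k in range(n_facies):
        diff = obs - means[k]
        # Gaussian log-likelihood of the attributes given facies k
        log_like = -0.5 * (diff @ np.linalg.solve(covs[k], diff)
                           + np.log(np.linalg.det(covs[k]))
                           + len(obs) * np.log(2 * np.pi))
        log_post[k] = log_like + np.log(prior[k])
    log_post -= log_post.max()          # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# toy example: 3 facies (shale, brine-sand, gas-sand), 2 attributes
means = np.array([[6.0, 2.5], [5.5, 2.3], [5.0, 2.1]])
covs = np.array([np.eye(2) * 0.05] * 3)
prior = np.array([0.5, 0.3, 0.2])
post = local_posterior(np.array([5.05, 2.12]), means, covs, prior)
```

In the full 2D-HMM the posterior at a cell also depends on the hidden states of its neighbours; the local term above is only one factor of that computation.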
Cook, Richard J; Yi, Grace Y; Lee, Ker-Ai; Gladman, Dafna D
2004-06-01
Clustered progressive chronic disease processes arise when interest lies in modeling damage in paired organ systems (e.g., kidneys, eyes), in diseases manifest in different organ systems, or in systemic conditions for which damage may occur in several locations of the body. Multistate Markov models have considerable appeal for modeling damage in such settings, particularly when patients are only under intermittent observation. Generalizations are necessary, however, to deal with the fact that processes within subjects may not be independent. We describe a conditional Markov model in which the clustering in processes within subjects is addressed by the use of multiplicative random effects for each transition intensity. The random effects for the different transition intensities may be correlated within subjects, but are assumed to be independent for different subjects. We apply the mixed Markov model to a motivating data set of patients with psoriatic arthritis, and characterize the progressive course of damage in joints of the hand. A generalization to accommodate a subpopulation of "stayers" and extensions which facilitate regression are indicated and illustrated.
Constructing Dynamic Event Trees from Markov Models
Paolo Bucci; Jason Kirschenbaum; Tunc Aldemir; Curtis Smith; Ted Wood
2006-05-01
In the probabilistic risk assessment (PRA) of process plants, Markov models can be used to model accurately the complex dynamic interactions between plant physical process variables (e.g., temperature, pressure, etc.) and the instrumentation and control system that monitors and manages the process. One limitation of this approach that has prevented its use in nuclear power plant PRAs is the difficulty of integrating the results of a Markov analysis into an existing PRA. In this paper, we explore a new approach to the generation of failure scenarios and their compilation into dynamic event trees from a Markov model of the system. These event trees can be integrated into an existing PRA using software tools such as SAPHIRE. To implement our approach, we first construct a discrete-time Markov chain modeling the system of interest by: a) partitioning the process variable state space into magnitude intervals (cells), b) using analytical equations or a system simulator to determine the transition probabilities between the cells through the cell-to-cell mapping technique, and, c) using given failure/repair data for all the components of interest. The Markov transition matrix thus generated can be thought of as a process model describing the stochastic dynamic behavior of the finite-state system. We can therefore search the state space starting from a set of initial states to explore all possible paths to failure (scenarios) with associated probabilities. We can also construct event trees of arbitrary depth by tracing paths from a chosen initiating event and recording the following events while keeping track of the probabilities associated with each branch in the tree. As an example of our approach, we use the simple level control system often used as a benchmark in the literature with one process variable (liquid level in a tank), and three control units: a drain unit and two supply units. Each unit includes a separate level sensor to observe the liquid level in the tank
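The path-search step described above, exploring all paths to failure from an initial state through a discrete-time Markov chain, can be sketched as follows. The three-state transition matrix is invented for illustration and is not the benchmark level-control system.

```python
import numpy as np

def failure_scenarios(P, start, failed, max_depth):
    """Depth-first enumeration of paths through a discrete-time Markov
    chain, recording every path that reaches a failed state within
    max_depth transitions. P: (n, n) row-stochastic transition matrix.
    Returns a list of (path, probability) pairs."""
    scenarios = []
    stack = [([start], 1.0)]
    while stack:
        path, prob = stack.pop()
        state = path[-1]
        if state in failed:               # scenario complete
            scenarios.append((path, prob))
            continue
        if len(path) > max_depth:         # prune deeper paths
            continue
        for nxt, p in enumerate(P[state]):
            if p > 0.0:
                stack.append((path + [nxt], prob * p))
    return scenarios

# toy 3-state model: 0 = nominal, 1 = degraded, 2 = failed (absorbing)
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
paths = failure_scenarios(P, start=0, failed={2}, max_depth=3)
total = sum(prob for _, prob in paths)    # P(failure within 3 steps)
```

Each recorded path is one branch of a dynamic event tree, with its probability attached; deeper trees follow by raising `max_depth`.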
Antonarakis, Emmanuel S.; Heath, Elisabeth I.; Walczak, Janet R.; Nelson, William G.; Fedor, Helen; De Marzo, Angelo M.; Zahurak, Marianna L.; Piantadosi, Steven; Dannenberg, Andrew J.; Gurganus, Robin T.; Baker, Sharyn D.; Parnes, Howard L.; DeWeese, Theodore L.; Partin, Alan W.; Carducci, Michael A.
2009-01-01
Purpose Cyclooxygenase-2 (COX-2) is a potential pharmacologic target for the prevention of various malignancies, including prostate cancer. We conducted a randomized, double-blind trial to examine the effect of celecoxib on drug-specific biomarkers from prostate tissue obtained at prostatectomy. Patients and Methods Patients with localized prostate cancer and Gleason sum ≥ 7, prostate-specific antigen (PSA) ≥ 15 ng/mL, clinical stage T2b or greater, or any combination with greater than 45% risk of capsular penetration were randomly assigned to celecoxib 400 mg by mouth twice daily or placebo for 4 to 6 weeks before prostatectomy. The primary end point was the difference in prostatic prostaglandin levels between the two groups. Secondary end points were differences in COX-1 and -2 expressions; oxidized DNA bases; and markers of proliferation, apoptosis and angiogenesis. Tissue celecoxib concentrations also were measured. Tertiary end points were drug safety and compliance. Results Seventy-three patients consented, and 64 were randomly assigned and included in the intention-to-treat analysis. There were no treatment differences in any of the primary or secondary outcomes. Multivariable regression revealed that tumor tissue had significantly lower COX-2 expression than benign prostatic tissue (P = .01) and significantly higher levels of the proliferation marker Ki-67 (P < .0001). Celecoxib was measurable in prostate tissue of patients on treatment, demonstrating that celecoxib reached its target. Celecoxib was safe and resulted in only grade 1 toxicities. Conclusion Treatment with 4 to 6 weeks of celecoxib had no effect on intermediate biomarkers of prostate carcinogenesis, despite the achievement of measurable tissue levels. We caution against using celecoxib 400 mg twice daily as a preventive agent for prostate cancer in additional studies. PMID:19720908
Ryu, Sang-Young; Lee, Won-Moo; Kim, Kidong; Park, Sang-Il; Kim, Beob-Jong; Kim, Moon-Hong; Choi, Seok-Cheol; Cho, Chul-Koo; Nam, Byung-Ho; Lee, Eui-Don
2011-11-15
Purpose: To compare compliance, toxicity, and outcome of weekly and triweekly cisplatin administration concurrent with radiotherapy in locally advanced cervical cancer. Methods and Materials: In this open-label, randomized trial, 104 patients with histologically proven Stage IIB-IVA cervical cancer were randomly assigned by a computer-generated procedure to weekly (weekly cisplatin 40 mg/m², six cycles) and triweekly (cisplatin 75 mg/m² every 3 weeks, three cycles) chemotherapy arms during concurrent radiotherapy. The difference of compliance and the toxicity profiles between the two arms were investigated, and the overall survival rate was analyzed after 5 years. Results: All patients tolerated both treatments very well, with a high completion rate of scheduled chemotherapy cycles. There was no statistically significant difference in compliance between the two arms (86.3% in the weekly arm, 92.5% in the triweekly arm, p > 0.05). Grade 3-4 neutropenia was more frequent in the weekly arm (39.2%) than in the triweekly arm (22.6%) (p = 0.03). The overall 5-year survival rate was significantly higher in the triweekly arm (88.7%) than in the weekly arm (66.5%) (hazard ratio 0.375; 95% confidence interval 0.154-0.914; p = 0.03). Conclusions: Triweekly cisplatin 75-mg/m² chemotherapy concurrent with radiotherapy is more effective and feasible than the conventional weekly cisplatin 40-mg/m² regimen and may be a strong candidate for the optimal cisplatin dose and dosing schedule in the treatment of locally advanced cervical cancer.
Karthikeyan, Vilvapathy Senguttuvan; Keshavamurthy, Ramaiah; Mallya, Ashwin; Chikka Moga Siddaiah, Manohar; Kumar, Sumit; Chandrashekar, Chulai Rajabahadhur
2017-01-01
Introduction: Double J (DJ) stents are often removed under local anesthesia using a rigid cystoscope. Patients experience significant pain during this procedure and also continue to have discomfort during voiding for a few days. We assessed the efficacy and safety of preemptive oral diclofenac in pain relief in patients undergoing DJ stent removal (DJSR) by rigid cystoscopy compared to placebo. Methods: Consecutive consenting male patients undergoing DJSR under local anesthesia between March 2014 and July 2015 were enrolled. Patients were randomized to receive 75 mg oral diclofenac (Group A) or placebo (Group B) 1 h before procedure by double-blind randomization. Intraurethral 2% lignocaine gel (25 ml) was used in both groups. Pain during rigid cystoscopy, pain at the first void, and at 24 h after cystoscopy was assessed using visual analog scale (VAS) (0–100). Adverse reactions to diclofenac and episodes of acute urinary retention, if any, were assessed (Trial registered at clinicaltrials.gov: NCT02598102). Results: A total of 121 males (Group A [n = 62]; Group B [n = 59]) underwent stent removal. The median (Interquartile range) VAS during the procedure in Group A was 30 (30) and Group B was 60 (30) (P < 0.001), at first void was 30 (30) and 70 (30) (P < 0.001) and at 24 h postoperatively was 20 (20) and 40 (20) (P < 0.001). The incidence of epigastric pain, nausea, vomiting, and acute urinary retention was comparable in the two groups (P > 0.05). Conclusions: A single oral dose of diclofenac administered 1 h before DJSR using rigid cystoscope under intraurethral lignocaine anesthesia decreases pain significantly during and up to 24 h postprocedure with minimal side effects. PMID:28197031
Shah, Anand; Efstathiou, Jason A.; Paly, Jonathan J.; Halpern, Scott D.; Bruner, Deborah W.; Christodouleas, John P.; Coen, John J.; Deville, Curtiland; Vapiwala, Neha; Shipley, William U.; Zietman, Anthony L.; Hahn, Stephen M.; Bekelman, Justin E.
2012-05-01
Purpose: To investigate patients' willingness to participate (WTP) in a randomized controlled trial (RCT) comparing intensity-modulated radiotherapy (IMRT) with proton beam therapy (PBT) for prostate cancer (PCa). Methods and Materials: We undertook a qualitative research study in which we prospectively enrolled patients with clinically localized PCa. We used purposive sampling to ensure a diverse sample based on age, race, travel distance, and physician. Patients participated in a semi-structured interview in which they reviewed a description of a hypothetical RCT, were asked open-ended and focused follow-up questions regarding their motivations for and concerns about enrollment, and completed a questionnaire assessing characteristics such as demographics and prior knowledge of IMRT or PBT. Patients' stated WTP was assessed using a 6-point Likert scale. Results: Forty-six eligible patients (33 white, 13 black) were enrolled from the practices of eight physicians. We identified 21 factors that impacted patients' WTP, which largely centered on five major themes: altruism/desire to compare treatments, randomization, deference to physician opinion, financial incentives, and time demands/scheduling. Most patients (27 of 46, 59%) stated they would either 'definitely' or 'probably' participate. Seventeen percent (8 of 46) stated they would 'definitely not' or 'probably not' enroll, most of whom (6 of 8) preferred PBT before their physician visit. Conclusions: A substantial proportion of patients indicated high WTP in a RCT comparing IMRT and PBT for PCa.
A non-homogeneous Markov model for phased-mission reliability analysis
NASA Technical Reports Server (NTRS)
Smotherman, Mark; Zemoudeh, Kay
1989-01-01
Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.
Is anoxic depolarisation associated with an ADC threshold? A Markov chain Monte Carlo analysis.
King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G
2005-12-01
A Bayesian nonlinear hierarchical random coefficients model was used in a reanalysis of a previously published longitudinal study of the extracellular direct current (DC)-potential and apparent diffusion coefficient (ADC) responses to focal ischaemia. The main purpose was to examine the data for evidence of an ADC threshold for anoxic depolarisation. A Markov chain Monte Carlo simulation approach was adopted. The Metropolis algorithm was used to generate three parallel Markov chains and thus obtain a sampled posterior probability distribution for each of the DC-potential and ADC model parameters, together with a number of derived parameters. The latter were used in a subsequent threshold analysis. The analysis provided no evidence indicating a consistent and reproducible ADC threshold for anoxic depolarisation.
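A minimal random-walk Metropolis sampler of the kind used to generate the parallel chains can be sketched as follows. The standard-normal target is a stand-in for illustration, not the paper's hierarchical DC-potential/ADC model.

```python
import numpy as np

def metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept
    with probability min(1, p(x') / p(x)). log_post is the target
    log-density up to an additive constant."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_post(x)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        # accept/reject on the log scale for numerical stability
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy target: standard normal log-density (constant omitted)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
```

Running several such chains from dispersed starting points, as in the study, allows standard convergence diagnostics to be applied before pooling the samples.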
NASA Astrophysics Data System (ADS)
Staňová, Sidónia; Soták, Ján; Hudec, Norbert
2009-08-01
Methods based on Markov chains can be easily applied in the evaluation of order in sedimentary sequences. In this contribution, Markov chain analysis was applied to a turbiditic formation of the Outer Western Carpathians in NW Slovakia, although the approach also has broader utility in the interpretation of sedimentary sequences from other depositional environments. Non-random facies transitions were determined in the investigated strata and compared to standard deep-water facies models to provide statistical evidence for the sedimentological interpretation of depositional processes. As a result, six genetic facies types, interpreted in terms of depositional processes, were identified. They comprise deposits of density flows, turbidity flows, and suspension fallout, as well as units which resulted from syn- or post-depositional deformation.
A path-independent method for barrier option pricing in hidden Markov models
NASA Astrophysics Data System (ADS)
Rashidi Ranjbar, Hedieh; Seifi, Abbas
2015-12-01
This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
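One of the Monte-Carlo approaches the authors compare against can be sketched as follows: simulate regime-switching geometric Brownian paths on a time grid and knock out any path that touches the barrier. The discrete-grid regime chain and all parameter values are illustrative simplifications, not the paper's setup.

```python
import numpy as np

def down_and_out_call_mc(S0, K, B, T, r, sigmas, P_regime,
                         n_paths=20000, n_steps=100, seed=1):
    """Monte-Carlo price of a down-and-out call under a two-regime
    Black-Scholes model. The regime follows a discrete-time Markov
    chain on the monitoring grid (a simplification of continuous-time
    switching); the regime selects the volatility at each step."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    regime = np.zeros(n_paths, dtype=int)
    alive = np.ones(n_paths, dtype=bool)
    sigmas = np.asarray(sigmas, dtype=float)
    for _ in range(n_steps):
        sig = sigmas[regime]
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sig ** 2) * dt + sig * np.sqrt(dt) * z)
        alive &= S > B                       # knocked out at the barrier
        u = rng.random(n_paths)              # regime switch on the grid
        stay = u < P_regime[regime, regime]
        regime = np.where(stay, regime, 1 - regime)
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

# illustrative parameters: calm and turbulent volatility regimes
price = down_and_out_call_mc(S0=100.0, K=100.0, B=80.0, T=1.0, r=0.03,
                             sigmas=[0.15, 0.35],
                             P_regime=np.array([[0.98, 0.02],
                                                [0.05, 0.95]]))
```

The paper's point is that its path-independent integral formula avoids this per-path simulation entirely.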
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
Recursive recovery of Markov transition probabilities from boundary value data
Patch, Sarah Kathyrn
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
Markov chains and semi-Markov models in time-to-event analysis
Abner, Erin L.; Charnigo, Richard J.; Kryscio, Richard J.
2014-01-01
A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields. PMID:24818062
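The core mechanics, propagating a state-occupancy distribution through a transition matrix to obtain survival-type quantities, can be sketched with a toy illness-death model; the monthly transition probabilities below are invented.

```python
import numpy as np

# Illness-death model: 0 = healthy, 1 = ill, 2 = dead (absorbing).
# Hypothetical monthly transition probabilities.
P = np.array([[0.95, 0.04, 0.01],
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])

def occupancy(P, start, n_cycles):
    """State-occupancy probabilities after each cycle: the initial
    distribution propagated repeatedly through the transition matrix."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    out = [dist.copy()]
    for _ in range(n_cycles):
        dist = dist @ P
        out.append(dist.copy())
    return np.array(out)

occ = occupancy(P, start=0, n_cycles=24)
survival = 1.0 - occ[:, 2]      # P(not yet dead) at each cycle
```

Competing risks or recurrent states are handled by adding states and transitions; the propagation step is unchanged, which is the flexibility the article highlights.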
Dynamic Bandwidth Provisioning Using Markov Chain Based on RSVP
2013-09-01
Qualnet, a simulation platform for the wireless environment, is used to simulate the algorithm (integration of the Markov chain model with RSVP).
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
NASA Astrophysics Data System (ADS)
Ababaei, Behnam; Sohrabi, Teymour; Mirzaei, Farhad
2014-10-01
Most stochastic weather generators focus on precipitation because it is the most important variable affecting environmental processes. One of the methods to reproduce the precipitation occurrence time series is to use a Markov process. However, in addition to simulating short-term autocorrelations at one station, it is sometimes important to preserve the spatial linear correlations (SLC) between neighboring stations as well. In this research, an extension of one-site Markov models was proposed to preserve the SLC between neighboring stations. Qazvin station was utilized as the reference station and Takestan (TK), Magsal, Nirougah, and Taleghan stations were used as the target stations. The performances of different models were assessed in relation to the simulation of dry and wet spells and short-term dependencies in precipitation time series. The results revealed that in TK station, a Markov model with a first-order spatial model could be selected as the best model, while in the other stations, a model with an order of two or three could be selected. The selected (i.e., best) models were then assessed on their ability to preserve the SLC between neighboring stations. The results showed that these models were highly capable of preserving the SLC between the reference station and any of the target stations, but their performance was weaker when the SLC between the other stations were compared. To resolve this issue, spatially correlated random numbers were utilized instead of independent random numbers while generating synthetic time series using the Markov models. Although this method slightly reduced the model performances in relation to dry and wet spells and short-term dependencies, the improvements in the simulation of the SLC between the other stations were substantial.
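The device described above, driving per-station first-order Markov chains with spatially correlated uniforms instead of independent ones, can be sketched for two stations as follows; the transition probabilities and correlation are illustrative, not fitted to the Qazvin-area data.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def correlated_occurrence(p01, p11, rho, n_days, seed=0):
    """Daily wet/dry occurrence at two stations from first-order Markov
    chains driven by spatially correlated uniforms (Gaussian copula).
    p01 = P(wet | dry yesterday), p11 = P(wet | wet yesterday)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    state = np.zeros(2, dtype=int)              # start both stations dry
    series = np.empty((n_days, 2), dtype=int)
    for t in range(n_days):
        z = L @ rng.standard_normal(2)          # correlated Gaussians
        u = np.array([norm_cdf(v) for v in z])  # correlated uniforms
        p_wet = np.where(state == 1, p11, p01)  # Markov transition rule
        state = (u < p_wet).astype(int)
        series[t] = state
    return series

occ = correlated_occurrence(p01=0.25, p11=0.65, rho=0.8, n_days=5000)
wet_frac = occ.mean(axis=0)                   # ~ p01 / (1 - p11 + p01)
r = np.corrcoef(occ[:, 0], occ[:, 1])[0, 1]   # inter-station correlation
```

Because each station's chain still sees marginally uniform draws, the single-site wet/dry statistics are preserved while the shared copula induces the spatial correlation.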
Armstrong, John G.; Gillham, Charles M.; Dunne, Mary T.; Fitzpatrick, David A.; Finn, Marie A.; Cannon, Mairin E.; Taylor, Judy C.; O'Shea, Carmel M.; Buckney, Steven J.; Thirion, Pierre G.
2011-09-01
Purpose: To examine the long-term outcomes of a randomized trial comparing short (4 months; Arm 1) and long (8 months; Arm 2) neoadjuvant hormonal therapy before radiotherapy for localized prostate cancer. Methods and Materials: Between 1997 and 2001, 276 patients were enrolled and the data from 261 were analyzed. The stratification risk factors were prostate-specific antigen level >20 ng/mL, Gleason score ≥7, and Stage T3 or more. The intermediate-risk stratum had one factor and the high-risk stratum had two or more. Staging was done from the bone scan and computed tomography findings. The primary endpoint was biochemical failure-free survival. Results: The median follow-up was 102 months. The overall survival, biochemical failure-free survival, and prostate cancer-specific survival did not differ significantly between the two treatment arms, overall or at 5 years. The cumulative probability of overall survival at 5 years was 90% (range, 87-92%) in Arm 1 and 83% (range, 80-86%) in Arm 2. The biochemical failure-free survival rate at 5 years was 66% (range, 62-71%) in Arm 1 and 63% (range, 58-67%) in Arm 2. Conclusion: No statistically significant difference was found in biochemical failure-free survival between 4 months and 8 months of neoadjuvant hormonal therapy before radiotherapy for localized prostate cancer.
Kumar, Santosh; Kumar, Sunil; Ganesamoni, Raguram; Mandal, Arup K; Prasad, Seema; Singh, Shrawan K
2011-06-01
The objective of the study was to compare the efficacy of dimethyl sulfoxide (DMSO) mixed with lignocaine and eutectic mixture of local anesthetics (EMLA) cream as topically applied surface anesthetics in relieving pain during shock wave lithotripsy (SWL) in a prospective randomized study. Of the 160 patients, 80 patients received DMSO with lignocaine and 80 patients received EMLA cream, applied to the skin of the flank at the area of entry of shock waves. SWL was done with a Siemens Lithostar Multiline lithotripter. The pain during the procedure was assessed using visual analog and verbal rating scores. The mean visual analog scale scores were 3.03 for the DMSO group and 4.43 for the EMLA group. The difference of pain score on the visual analog scale was statistically significant (p < 0.05). Similarly, the pain scores as rated on the verbal rating scale were also evaluated; the mean scores on the verbal rating scale were 2.34 for the DMSO group and 3.00 for the EMLA group. The difference between the pain scores on the verbal rating scale was also found to be statistically significant (p < 0.05). Our study showed that DMSO with lignocaine is a better local anesthetic agent for SWL than EMLA cream. The stone fragmentation and clearance rates are also better in the DMSO group.
Mansuri, Samir; Bhayat, Ahmed; Omar, Esam; Jarab, Fadi; Ahmed, Mohammad Sami
2011-01-01
The development of local anesthesia in dentistry has marked the beginning of a new era in terms of pain control. Lignocaine is the most commonly used local anesthetic (LA) agent even though it has a vasodilative effect and needs to be combined with adrenaline. Centbucridine is a non-ester, non-amide group LA that has not been comprehensively studied in the dental setting; the objective was to compare it to Lignocaine. This was a randomized study comparing the onset time, duration, depth, and cardiovascular parameters between Centbucridine (0.5%) and Lignocaine (2%). The study was conducted in the dental outpatient department at the Government Dental College in India on patients attending for the extraction of lower molars. A total of 198 patients were included, and there were no significant differences between the LAs except that those who received Centbucridine reported a significantly longer duration of anesthesia compared to those who received Lignocaine. None of the patients reported any side effects. Centbucridine was well tolerated, and its substantial duration of anesthesia could be attributed to its chemical compound. Centbucridine can be used for dental procedures and can confidently be used in patients who cannot tolerate Lignocaine or where adrenaline is contraindicated.
Pöpping, Daniel M; Elia, Nadia; Marret, Emmanuel; Wenk, Manuel; Tramèr, Martin R
2012-04-01
Opioids are widely used as additives to local anesthetics for intrathecal anesthesia. Benefit and risk remain unclear. We systematically searched databases and bibliographies to February 2011 for full reports of randomized comparisons of any opioid added to any intrathecal local anesthetic with the local anesthetic alone in adults undergoing surgery (except cesarean section) and receiving single-shot intrathecal anesthesia without general anesthesia. We included 65 trials (3338 patients, 1932 of whom received opioids) published between 1983 and 2010. Morphine (0.05-2 mg) and fentanyl (10-50 μg) added to bupivacaine were the most frequently tested. Duration of postoperative analgesia was prolonged with morphine (weighted mean difference 503 min; 95% confidence interval [CI] 315 to 641) and fentanyl (weighted mean difference 114 min; 95% CI 60 to 168). Morphine decreased the number of patients needing opioid analgesia after surgery and decreased pain intensity to the 12th postoperative hour. Morphine increased the risk of nausea (number needed to harm [NNH] 9.9), vomiting (NNH 10), urinary retention (NNH 6.5), and pruritus (NNH 4.4). Fentanyl increased the risk of pruritus (NNH 3.3). With morphine 0.05 to 0.5 mg, the NNH for respiratory depression varied between 38 and 59 depending on the definition of respiratory depression chosen. With fentanyl 10 to 40 μg, the risk of respiratory depression was not significantly increased. For none of these effects, beneficial or harmful, was there evidence of dose-responsiveness. Consequently, minimal effective doses of intrathecal morphine and fentanyl should be sought. For intrathecal buprenorphine, diamorphine, hydromorphone, meperidine, methadone, pentazocine, sufentanil, and tramadol, there were not enough data to allow for meaningful conclusions.
Generator estimation of Markov jump processes
NASA Astrophysics Data System (ADS)
Metzner, P.; Dittmer, E.; Jahnke, T.; Schütte, Ch.
2007-11-01
Estimating the generator of a continuous-time Markov jump process based on incomplete data is a problem which arises in various applications ranging from machine learning to molecular dynamics. Several methods have been devised for this purpose: a quadratic programming approach (cf. [D.T. Crommelin, E. Vanden-Eijnden, Fitting timeseries by continuous-time Markov chains: a quadratic programming approach, J. Comp. Phys. 217 (2006) 782-805]), a resolvent method (cf. [T. Müller, Modellierung von Proteinevolution, PhD thesis, Heidelberg, 2001]), and various implementations of an expectation-maximization algorithm ([S. Asmussen, O. Nerman, M. Olsson, Fitting phase-type distributions via the EM algorithm, Scand. J. Stat. 23 (1996) 419-441; I. Holmes, G.M. Rubin, An expectation maximization algorithm for training hidden substitution models, J. Mol. Biol. 317 (2002) 753-764; U. Nodelman, C.R. Shelton, D. Koller, Expectation maximization and complex duration distributions for continuous time Bayesian networks, in: Proceedings of the twenty-first conference on uncertainty in AI (UAI), 2005, pp. 421-430; M. Bladt, M. Sørensen, Statistical inference for discretely observed Markov jump processes, J.R. Statist. Soc. B 67 (2005) 395-410]). Some of these methods, however, seem to be known only in a particular research community, and have later been reinvented in a different context. The purpose of this paper is to compile a catalogue of existing approaches, to compare the strengths and weaknesses, and to test their performance in a series of numerical examples. These examples include carefully chosen model problems and an application to a time series from molecular dynamics.
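In the complete-data limit (a fully observed trajectory rather than the discretely observed data the paper addresses), the generator MLE has a closed form that the EM-type methods above build on: the off-diagonal rate is the jump count divided by the holding time. A sketch with a hand-made trajectory:

```python
import numpy as np

def generator_mle(states, times):
    """Closed-form generator MLE from a fully observed trajectory of a
    continuous-time Markov jump process: L_ij = N_ij / R_i (i != j),
    where N_ij counts i->j jumps and R_i is the total holding time in
    state i; diagonal entries make each row sum to zero."""
    n = int(max(states)) + 1
    N = np.zeros((n, n))
    R = np.zeros(n)
    for k in range(len(states) - 1):
        R[states[k]] += times[k + 1] - times[k]   # holding time in state k
        N[states[k], states[k + 1]] += 1          # observed jump
    L = np.zeros((n, n))
    nz = R > 0
    L[nz] = N[nz] / R[nz][:, None]
    np.fill_diagonal(L, 0.0)
    np.fill_diagonal(L, -L.sum(axis=1))           # generator row sums are zero
    return L

# hand-made trajectory: the state entered at each jump time
states = [0, 1, 0, 1, 2]
times = [0.0, 2.0, 3.0, 5.0, 6.0]
L = generator_mle(states, times)
```

The incomplete-data methods catalogued in the paper (EM, resolvent, quadratic programming) effectively estimate or approximate the sufficient statistics N and R when only snapshots of the process are observed.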
Algorithms for the Markov entropy decomposition
NASA Astrophysics Data System (ADS)
Ferris, Andrew J.; Poulin, David
2013-05-01
The Markov entropy decomposition (MED) is a recently proposed, cluster-based simulation method for finite temperature quantum systems with arbitrary geometry. In this paper, we detail numerical algorithms for performing the required steps of the MED, principally solving a minimization problem with a preconditioned Newton's algorithm, as well as how to extract global susceptibilities and thermal responses. We demonstrate the power of the method with the spin-1/2 XXZ model on the 2D square lattice, including the extraction of critical points and details of each phase. Although the method shares some qualitative similarities with exact diagonalization, we show that the MED is both more accurate and significantly more flexible.
Markov chain Monte Carlo without likelihoods.
Marjoram, Paul; Molitor, John; Plagnol, Vincent; Tavare, Simon
2003-12-23
Many stochastic simulation approaches for generating observations from a posterior distribution depend on knowing a likelihood function. However, for many complex probability models, such likelihoods are either impossible or computationally prohibitive to obtain. Here we present a Markov chain Monte Carlo method for generating observations from a posterior distribution without the use of likelihoods. It can also be used in frequentist applications, in particular for maximum-likelihood estimation. The approach is illustrated by an example of ancestral inference in population genetics. A number of open problems are highlighted in the discussion.
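A minimal sketch of the likelihood-free accept/reject step may help fix ideas. With a flat prior and a symmetric random-walk proposal, the Metropolis-Hastings ratio reduces to an indicator: accept a proposal only if data simulated under it reproduces the observed summary statistic to within a tolerance. The toy model (a normal mean matched through the sample mean), the tolerance, and the initialization at the observed summary are all illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" data: 200 draws from N(3, 1).  We pretend the likelihood of
# the mean theta is unavailable and match only the sample-mean summary.
obs = rng.normal(3.0, 1.0, size=200)
s_obs = obs.mean()

def simulate(theta):
    return rng.normal(theta, 1.0, size=200).mean()

# Flat prior + symmetric random walk: the Metropolis-Hastings ratio
# collapses to an indicator, so we accept iff the simulated summary
# lands within eps of the observed one.  Initializing at s_obs is a
# pragmatic illustrative choice that shortens burn-in.
theta, eps, chain = s_obs, 0.05, []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.2)
    if abs(simulate(prop) - s_obs) <= eps:
        theta = prop
    chain.append(theta)
post = np.array(chain[5_000:])
```

The retained samples approximate the posterior of the mean given the summary, without a single likelihood evaluation.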
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the one-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
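For contrast with the lazy discretization advocated here, the plain fixed-grid treatment of one continuous dimension can be sketched in a few lines; the resource-management dynamics below are hypothetical, and the paper's piecewise-constant and linear approximations are not reproduced:

```python
import numpy as np

# One continuous variable (resource level x in [0, 1]) handled by plain
# uniform-grid discretization -- the baseline that lazy discretization
# improves on.  Dynamics and rewards are hypothetical.
grid = np.linspace(0.0, 1.0, 101)
gamma = 0.95

def step(x, action):
    """'work' earns reward 1 but drains the resource; 'recharge' refills."""
    if action == "work":
        if x < 0.1:
            return x, 0.0            # too drained to work
        return x - 0.1, 1.0
    return 1.0, 0.0

def q_value(x, action, V):
    nx, r = step(x, action)
    return r + gamma * V[int(round(nx * 100))]   # nearest grid point

V = np.zeros_like(grid)
for _ in range(500):                 # value iteration to convergence
    V = np.array([max(q_value(x, a, V) for a in ("work", "recharge"))
                  for x in grid])

def policy(x):
    return max(("work", "recharge"), key=lambda a: q_value(x, a, V))
```

The grid grows exponentially with the number of continuous dimensions, which is the curse the paper's lazy, piecewise representations are designed to avoid.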
McKay, Rana R.; Zurita, Amado J.; Werner, Lillian; Bruce, Justine Y.; Carducci, Michael A.; Stein, Mark N.; Heath, Elisabeth I.; Hussain, Arif; Tran, Hai T.; Sweeney, Christopher J.; Ross, Robert W.; Kantoff, Philip W.; Slovin, Susan F.
2016-01-01
Purpose Patients with recurrent prostate cancer after local treatment make up a heterogeneous population for whom androgen deprivation therapy (ADT) is the usual treatment. The purpose of this randomized phase II trial was to investigate the efficacy and toxicity of short-course ADT with or without bevacizumab in men with hormone-sensitive prostate cancer. Patients and Methods Eligible patients had a rising prostate-specific antigen (PSA) level of ≤ 50 ng/mL and a PSA doubling time of less than 18 months. Patients had either no metastases or low-burden, asymptomatic metastases (lymph nodes < 3 cm and five or fewer bone metastases). Patients were randomly assigned 2:1 to a luteinizing hormone-releasing hormone agonist, bicalutamide, and bevacizumab, or to ADT alone, for 6 months. The primary end point was PSA relapse-free survival (RFS). Relapse was defined as a PSA of more than 0.2 ng/mL for prostatectomy patients or a PSA of more than 2.0 ng/mL for primary radiation therapy patients. Results Sixty-six patients received ADT + bevacizumab and 36 received ADT alone. Patients receiving ADT + bevacizumab had a statistically significant improvement in RFS compared with patients treated with ADT alone (13.3 months for ADT + bevacizumab v 10.2 months for ADT alone; hazard ratio, 0.47; 95% CI, 0.29 to 0.77; log-rank P = .002). Hypertension was the most common adverse event in patients receiving ADT + bevacizumab (36%). Conclusion ADT combined with bevacizumab resulted in improved RFS for patients with hormone-sensitive prostate cancer. Long-term follow-up is needed to determine whether some patients have a durable PSA response and are able to remain off ADT for prolonged periods. Our data provide a rationale for combining vascular endothelial growth factor–targeting therapy with ADT in hormone-sensitive prostate cancer. PMID:27044933
Penala, Soumya; Kalakonda, Butchibabu; Pathakota, Krishnajaneya Reddy; Jayakumar, Avula; Koppolu, Pradeep; Lakshmi, Bolla Vijaya; Pandey, Ruchi; Mishra, Ashank
2016-01-01
Objective: Periodontitis is known to have a multifactorial etiology, involving interplay between environmental, host, and microbial factors. Current treatment approaches are aimed at reducing the pathogenic microorganisms. Administration of beneficial bacteria (probiotics) has emerged as a promising concept in the prevention and treatment of periodontitis. Thus, the aim of the present study was to evaluate the efficacy of the local use of probiotics as an adjunct to scaling and root planing (SRP) in the treatment of patients with chronic periodontitis and halitosis. Methods: This was a randomized, placebo-controlled, double-blinded trial involving 32 systemically healthy chronic periodontitis patients. After SRP, the subjects were randomly assigned to test and control groups. The test group (SRP + probiotics) received subgingival delivery of probiotics and a probiotic mouthwash, and the control group (SRP + placebo) received subgingival delivery of placebo and a placebo mouthwash for 15 days. Plaque index (PI), modified gingival index (MGI), and bleeding index (BI) were assessed at baseline and at 1 and 3 months thereafter, whereas probing depth (PD) and clinical attachment level were assessed at baseline and after 3 months. Microbial assessment using N-benzoyl-DL-arginine-naphthylamide (BANA) and halitosis assessment using organoleptic scores (ORG) were done at baseline, 1 and 3 months. Findings: All the clinical and microbiological parameters were significantly reduced in both groups at the end of the study. Inter-group comparison of PD reduction (PDR) and clinical attachment gain (CAG) revealed no statistical significance except for PDR in moderate pockets for the test group. The test group showed statistically significant improvement in PI, MGI, and BI at 3 months compared to the control group. Inter-group comparison revealed a significant reduction in BANA in the test group at 1 month. ORG were significantly reduced in the test group when compared to the control group. Conclusion: Within
Metagenomic Classification Using an Abstraction Augmented Markov Model
Zhu, Xiujun (Sylvia)
2016-01-01
Abstract The abstraction augmented Markov model (AAMM) is an extension of a Markov model that can be used for the analysis of genetic sequences. It is built from the frequencies of all possible consecutive words of the same length (p-mers). This article reviews the theory behind the AAMM and applies it to metagenomic classification. PMID:26618474
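The base model underlying the AAMM can be sketched without the abstraction step: an order-p Markov model estimated from (p+1)-mer counts, used to score a fragment against per-class models. The toy sequences, the add-one smoothing, and the uniform fallback for unseen contexts are illustrative choices, not the article's construction:

```python
from collections import defaultdict
import math

def train_markov(seqs, p=2, alphabet="ACGT"):
    """Order-p Markov model from (p+1)-mer counts with add-one smoothing.
    The AAMM's abstraction step (grouping similar contexts) is omitted;
    this is only the plain base model it extends."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(len(s) - p):
            counts[s[i:i + p]][s[i + p]] += 1
    model = {}
    for ctx, nxt in counts.items():
        total = sum(nxt.values()) + len(alphabet)
        model[ctx] = {a: (nxt[a] + 1) / total for a in alphabet}
    return model

def log_score(model, s, p=2, alphabet="ACGT"):
    uniform = {a: 1.0 / len(alphabet) for a in alphabet}
    return sum(math.log(model.get(s[i:i + p], uniform)[s[i + p]])
               for i in range(len(s) - p))

# Classify a fragment by which class model scores it higher.
m_at = train_markov(["ATATATATATATATAT"] * 3)
m_gc = train_markov(["GCGCGCGCGCGCGCGC"] * 3)
frag = "ATATATAT"
label = "AT-like" if log_score(m_at, frag) > log_score(m_gc, frag) else "GC-like"
```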
Lifting—A nonreversible Markov chain Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Vucelja, Marija
2016-12-01
Markov chain Monte Carlo algorithms are invaluable tools for exploring stationary properties of physical systems, especially in situations where direct sampling is infeasible. Common implementations of Monte Carlo algorithms employ reversible Markov chains. Reversible chains obey detailed balance and thus ensure that the system will eventually relax to equilibrium, though detailed balance is not necessary for convergence to equilibrium. We review nonreversible Markov chains, which violate detailed balance and yet still relax to a given target stationary distribution. In particular cases, nonreversible Markov chains are substantially better at sampling than conventional reversible Markov chains, with up to a square-root improvement in the convergence time to the steady state. One kind of nonreversible Markov chain is constructed from reversible ones by enlarging the state space and by modifying and adding extra transition rates to create nonreversible moves. Because of the augmentation of the state space, such chains are often referred to as lifted Markov chains. We illustrate the use of lifted Markov chains for efficient sampling on several examples. The examples include sampling on a ring, sampling on a torus, the Ising model on a complete graph, and the one-dimensional Ising model. We also provide a pseudocode implementation, review related work, and discuss the applicability of such methods.
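The ring example admits a compact sketch. In the classic lifted walk on n states targeting the uniform distribution, the state is augmented with a direction that reverses only with probability 1/n, replacing diffusive O(n²) mixing with ballistic O(n) mixing; the parameters below are illustrative, not the paper's pseudocode:

```python
import numpy as np

def lifted_ring_sampler(n, n_steps, rng):
    """Lifted walk on a ring of n states targeting the uniform
    distribution: the state carries a direction d, keeps moving along d,
    and reverses only with probability 1/n.  Mixing improves from the
    diffusive O(n^2) of the reversible walk to ballistic O(n)."""
    x, d = 0, +1
    visits = np.zeros(n, dtype=int)
    for _ in range(n_steps):
        if rng.random() < 1.0 / n:
            d = -d                    # rare reversal keeps ergodicity
        x = (x + d) % n
        visits[x] += 1
    return visits

rng = np.random.default_rng(7)
n = 20
visits = lifted_ring_sampler(n, 200_000, rng)
freq = visits / visits.sum()
```

Note the move is strongly non-reversible (a step right is almost never followed by a step left), yet the empirical occupation converges to the uniform target.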
Limit measures for affine cellular automata on topological Markov subgroups
NASA Astrophysics Data System (ADS)
Maass, Alejandro; Martínez, Servet; Sobottka, Marcelo
2006-09-01
Consider a topological Markov subgroup which is p^s-torsion (with p prime) and an affine cellular automaton defined on it. We show that the Cesàro mean of the iterates, by the automaton, of a probability measure with complete connections and summable memory decay that is compatible with the topological Markov subgroup converges to the Haar measure.
Protein family classification using sparse markov transducers.
Eskin, Eleazar; Noble, William Stafford; Singer, Yoram
2003-01-01
We present a method for classifying proteins into families based on short subsequences of amino acids using a new probabilistic model called sparse Markov transducers (SMT). We classify a protein by estimating probability distributions over subsequences of amino acids from the protein. Sparse Markov transducers, similar to probabilistic suffix trees, estimate a probability distribution conditioned on an input sequence. SMTs generalize probabilistic suffix trees by allowing for wild-cards in the conditioning sequences. Since substitutions of amino acids are common in protein families, incorporating wild-cards into the model significantly improves classification performance. We present two models for building protein family classifiers using SMTs. As protein databases become larger, data driven learning algorithms for probabilistic models such as SMTs will require vast amounts of memory. We therefore describe and use efficient data structures to improve the memory usage of SMTs. We evaluate SMTs by building protein family classifiers using the Pfam and SCOP databases and compare our results to previously published results and state-of-the-art protein homology detection methods. SMTs outperform previous probabilistic suffix tree methods and under certain conditions perform comparably to state-of-the-art protein homology methods.
Non-Markov effects in intersecting sprays
NASA Astrophysics Data System (ADS)
Panchagnula, Mahesh; Kumaran, Dhivyaraja; Deevi, Sri Vallabha; Tangirala, Arun
2016-11-01
Sprays have been assumed to follow a Markov process. In this study, we revisit that assumption relying on experimental data from intersecting and non-intersecting sprays. A phase Doppler Particle Analyzer (PDPA) is used to measure particle diameter and velocity at various axial locations in the intersection region of two sprays. Measurements of single sprays, with one nozzle turned off alternately, are also obtained at the same locations. These data, treated as an unstructured time series, are classified into three bins each for diameter (small, medium, large) and velocity (slow, medium, fast). Conditional probability analysis on the binned data shows a higher static correlation between droplet velocities, while the diameter correlation is significantly reduced in intersecting sprays compared to single sprays. Further analysis using the serial correlation measures of auto-correlation function (ACF) and partial auto-correlation function (PACF) shows that the lagged correlations in droplet velocity are enhanced while those in droplet diameter are significantly weakened in intersecting sprays. We show that sprays are not necessarily Markov processes and that memory persists, curtailed to fewer lags in the case of droplet size and enhanced in the case of droplet velocity.
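The serial-correlation diagnostics used here are standard and easy to reproduce. The sketch below (on a synthetic AR(1) series rather than the PDPA data) computes the ACF directly and the PACF from the Yule-Walker equations; for a genuinely first-order process the PACF cuts off after lag 1, which is the signature one would check before assuming Markovian droplet statistics:

```python
import numpy as np

def acf(x, nlags):
    x = np.asarray(x, float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * c0)
                     for k in range(nlags + 1)])

def pacf(x, nlags):
    """Partial autocorrelations via the Yule-Walker equations: the PACF
    at lag k is the last coefficient of the order-k AR fit."""
    r = acf(x, nlags)
    out = [1.0]
    for k in range(1, nlags + 1):
        R = np.array([[r[abs(i - j)] for j in range(k)] for i in range(k)])
        out.append(np.linalg.solve(R, r[1:k + 1])[-1])
    return np.array(out)

# Synthetic AR(1) series: its ACF decays geometrically while its PACF
# cuts off after lag 1 -- the first-order (Markov) signature.
rng = np.random.default_rng(3)
noise = rng.normal(size=50_000)
x = np.zeros(50_000)
for t in range(1, 50_000):
    x[t] = 0.7 * x[t - 1] + noise[t]
r, p = acf(x, 3), pacf(x, 3)
```

A significantly nonzero PACF beyond lag 1, as reported for the droplet velocities, is evidence against the first-order Markov assumption.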
Equilibrium Control Policies for Markov Chains
Malikopoulos, Andreas
2011-01-01
The average cost criterion has held great intuitive appeal and has attracted considerable attention. It is widely employed when controlling dynamic systems that evolve stochastically over time by means of formulating an optimization problem to achieve long-term goals efficiently. The average cost criterion is especially appealing when the decision-making process is long compared to other timescales involved, and there is no compelling motivation to select short-term optimization. This paper addresses the problem of controlling a Markov chain so as to minimize the average cost per unit time. Our approach treats the problem as a dual constrained optimization problem. We derive conditions guaranteeing that a saddle point exists for the new dual problem, and we show that this saddle point is an equilibrium control policy for each state of the Markov chain. For practical situations with constraints consistent with those we study here, our results imply that recognition of such saddle points may be of value in deriving an optimal control policy in real time.
Markov Chain Monte Carlo and Irreversibility
NASA Astrophysics Data System (ADS)
Ottobre, Michela
2016-06-01
Markov Chain Monte Carlo (MCMC) methods are statistical methods designed to sample from a given measure π by constructing a Markov chain that has π as invariant measure and that converges to π. Most MCMC algorithms make use of chains that satisfy the detailed balance condition with respect to π; such chains are therefore reversible. On the other hand, recent work [18, 21, 28, 29] has stressed several advantages of using irreversible processes for sampling. Roughly speaking, irreversible diffusions converge to equilibrium faster (and lead to smaller asymptotic variance as well). In this paper we discuss some of the recent progress in the study of nonreversible MCMC methods. In particular: i) we explain some of the difficulties that arise in the analysis of nonreversible processes and we discuss some analytical methods to approach the study of continuous-time irreversible diffusions; ii) most of the rigorous results on irreversible diffusions are available for continuous-time processes; however, for computational purposes one needs to discretize such dynamics. It is well known that the resulting discretized chain will not, in general, retain all the good properties of the process that it is obtained from. In particular, if we want to preserve the invariance of the target measure, the chain might no longer be reversible. Therefore iii) we conclude by presenting an MCMC algorithm, the SOL-HMC algorithm [23], which results from a nonreversible discretization of a nonreversible dynamics.
Stochastic seismic tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Bottero, Alexis; Gesret, Alexandrine; Romary, Thomas; Noble, Mark; Maisons, Christophe
2016-10-01
Markov chain Monte Carlo sampling methods are widely used for non-linear Bayesian inversion where no analytical expression for the forward relation between data and model parameters is available. Contrary to the linear(ized) approaches, they naturally allow one to evaluate the uncertainties on the model found. Nevertheless, their use is problematic in high-dimensional model spaces, especially when the computational cost of the forward problem is significant and/or the a posteriori distribution is multimodal. In this case, the chain can get stuck in one of the modes and hence fail to provide an exhaustive sampling of the distribution of interest. We present here a still relatively unknown algorithm that allows interaction between several Markov chains at different temperatures. These interactions (based on importance resampling) ensure a robust sampling of any posterior distribution and thus provide a way to efficiently tackle complex fully non-linear inverse problems. The algorithm is easy to implement and is well adapted to run on parallel supercomputers. In this paper, the algorithm is first introduced and applied to a synthetic multimodal distribution in order to demonstrate its robustness and efficiency compared to a simulated annealing method. It is then applied in the framework of first-arrival traveltime seismic tomography on real data recorded in the context of hydraulic fracturing. To carry out this study, a wavelet-based adaptive model parametrization has been used. This allows one to integrate the a priori information provided by sonic logs and to optimally reduce the dimension of the problem.
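The interaction mechanism used in this paper is based on importance resampling between tempered chains; the closely related replica-exchange (parallel tempering) variant below conveys the same idea in a few lines on a deliberately bimodal toy target, where a single low-temperature chain would stay stuck in one mode. The temperatures, step sizes, and target are illustrative assumptions:

```python
import numpy as np

def log_target(x):
    """Bimodal toy posterior: equal mixture of N(-4, 1) and N(4, 1)."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

rng = np.random.default_rng(11)
temps = [1.0, 4.0, 16.0]                 # one chain per temperature
xs = [0.0, 0.0, 0.0]
samples = []
for _ in range(60_000):
    for i, T in enumerate(temps):        # local Metropolis move per chain
        prop = xs[i] + rng.normal(0.0, 1.5)
        if np.log(rng.random()) < (log_target(prop) - log_target(xs[i])) / T:
            xs[i] = prop
    i = int(rng.integers(len(temps) - 1))    # propose a neighbour swap
    a = (1 / temps[i] - 1 / temps[i + 1]) * (log_target(xs[i + 1]) - log_target(xs[i]))
    if np.log(rng.random()) < a:
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    samples.append(xs[0])                # keep only the cold chain
post = np.array(samples[10_000:])
```

The hot chain crosses the barrier freely and mode visits propagate down the temperature ladder, so the cold chain samples both modes — the failure mode of single-chain MCMC that motivates interacting chains.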
Neyman, Markov processes and survival analysis.
Yang, Grace
2013-07-01
J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiological current-status data with recurrent events is presented as an illustration. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time a patient lives a normal life in the evaluation of clinical trials. This extension would result in a complicated model, and it is unlikely that analytical closed-form solutions for survival analysis can be found. With ever-increasing computing power, numerical methods offer a viable way of investigating the problem.
Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions
Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard E-mail: gerhard.hummer@biophys.mpg.de
2014-09-21
Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.
Characterization of the rat exploratory behavior in the elevated plus-maze with Markov chains.
Tejada, Julián; Bosco, Geraldine G; Morato, Silvio; Roque, Antonio C
2010-11-30
The elevated plus-maze is an animal model of anxiety used to study the effect of different drugs on the behavior of the animal. It consists of a plus-shaped maze with two open and two closed arms elevated 50 cm from the floor. The standard measures used to characterize exploratory behavior in the elevated plus-maze are the time spent and the number of entries in the open arms. In this work, we use Markov chains to characterize the exploratory behavior of the rat in the elevated plus-maze under three different conditions: normal and under the effects of anxiogenic and anxiolytic drugs. The spatial structure of the elevated plus-maze is divided into squares, which are associated with states of a Markov chain. By counting the frequencies of transitions between states during 5-min sessions in the elevated plus-maze, we constructed stochastic matrices for the three conditions studied. The stochastic matrices show specific patterns, which correspond to the observed behaviors of the rat under the three different conditions. For the control group, the stochastic matrix shows a clear preference for places in the closed arms. This preference is enhanced for the anxiogenic group. For the anxiolytic group, the stochastic matrix shows a pattern similar to a random walk. Our results suggest that Markov chains can be used together with the standard measures to characterize the rat behavior in the elevated plus-maze.
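The construction of the stochastic matrices described above reduces to counting transitions between maze squares and normalizing rows. A minimal sketch, with a hypothetical three-state coding of the maze rather than the study's square-by-square partition:

```python
import numpy as np

def transition_matrix(states, n_states):
    """Row-stochastic matrix of empirical transition frequencies between
    maze locations, as used to characterize the exploratory behavior."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical 3-state coding (0 = closed arm, 1 = centre, 2 = open arm).
seq = [0, 0, 1, 0, 0, 1, 2, 1, 0, 0, 1, 0]
P = transition_matrix(seq, 3)
```

Comparing such matrices across drug conditions (e.g. how strongly closed-arm states self-transition) is the kind of pattern the study reports.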
Modeling anomalous radar propagation using first-order two-state Markov chains
NASA Astrophysics Data System (ADS)
Haddad, B.; Adane, A.; Mesnard, F.; Sauvageot, H.
In this paper, it is shown that radar echoes due to anomalous propagation (AP) can be modeled using Markov chains. For this purpose, images obtained in southwestern France by means of an S-band meteorological radar recorded every 5 min in 1996 were considered. The daily mean surfaces of AP appearing in these images are sorted into two states, and their variations are then represented by a binary random variable. The Markov transition matrix, the 1-day-lag autocorrelation coefficient, and the long-term probability of having each of the two states are calculated on a monthly basis. The same kind of modeling was also applied to the rainfall observed in the radar dataset under study. First-order two-state Markov chains are then found to fit the daily variations of either AP or rainfall areas very well. For each month of the year, the surfaces filled by both types of echo follow similar stochastic distributions, but their autocorrelation coefficients differ. Hence, it is suggested that this coefficient is a discriminant factor which could be used, among other criteria, to improve the identification of AP in radar images.
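For a first-order two-state chain these quantities have closed forms: the long-term probability of state 1 is p01/(p01 + p10) and the 1-day-lag autocorrelation coefficient is 1 − p01 − p10. A sketch that fits them from a synthetic daily series (the persistence probabilities are illustrative, not the radar values):

```python
import numpy as np

def two_state_stats(x):
    """Fit a first-order two-state chain by counting: returns the
    transition probabilities p01 and p10, the long-run probability of
    state 1, pi1 = p01 / (p01 + p10), and the 1-day-lag autocorrelation
    coefficient r1 = 1 - p01 - p10."""
    x = np.asarray(x)
    p01 = np.sum((x[:-1] == 0) & (x[1:] == 1)) / np.sum(x[:-1] == 0)
    p10 = np.sum((x[:-1] == 1) & (x[1:] == 0)) / np.sum(x[:-1] == 1)
    return p01, p10, p01 / (p01 + p10), 1.0 - p01 - p10

# Synthetic daily AP-state series from known persistence probabilities.
rng = np.random.default_rng(5)
p01_true, p10_true = 0.2, 0.4
x = [0]
for _ in range(100_000):
    p_one = p01_true if x[-1] == 0 else 1.0 - p10_true
    x.append(int(rng.random() < p_one))
p01, p10, pi1, r1 = two_state_stats(x)
```

Two series can thus share the same long-run occurrence probability yet differ in r1, which is why the autocorrelation coefficient can discriminate AP from rainfall.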
Test to determine the Markov order of a time series.
Racca, E; Laio, F; Poggi, D; Ridolfi, L
2007-01-01
The Markov order of a time series is an important measure of the "memory" of a process, and its knowledge is fundamental for the correct simulation of the characteristics of the process. For this reason, several techniques have been proposed in the past for its estimation. However, most of these methods are rather complex, and often can be applied only in the case of Markov chains. Here we propose a simple and robust test to evaluate the Markov order of a time series. Only the first-order moment of the conditional probability density function characterizing the process is used to evaluate the memory of the process itself. This measure is called the "expected value Markov (EVM) order." We show that there is good agreement between the EVM order and the known Markov order of some synthetic time series.
Harnessing graphical structure in Markov chain Monte Carlo learning
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data-mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Häfner, Hans-Martin; Schmid, Ute; Moehrle, Matthias; Strölin, Anke; Breuninger, Helmut
2008-01-01
Vascular effects of local anesthetics are especially important in dermatological surgery. In particular, adequate perfusion must be ensured in order to offset surgical manipulations during interventions at the acra. However, the use of adrenaline additives appears fraught with problems when anesthesia affects the terminal vascular system, particularly during interventions at the fingers, toes, penis, outer ears, and tip of the nose. We studied skin blood flux at the fingerpads via laser Doppler flowmetry over the course of 24 hours in a prospective, double-blind, randomized, placebo-controlled study of 20 vascularly healthy test persons following Oberst's-method anesthetic blocks. In each case, 6 ml of ropivacaine (7.5 mg/ml) (A), lidocaine 1% without an additive (B), or lidocaine 1% with an adrenaline additive (1:200,000) (C) was used as a verum. Isotonic saline solution was injected as a placebo (D). Measurements were carried out with the aid of a computer simultaneously at D II and D IV on both hands. Administration of (A) led to increased blood flux (+155.2%); (B) initially to a decrease of 27%; (C) to a reduction of 55%, which was reversible after 40 minutes; and (D) to no change. (A) resulted in sustained vasodilatation which was still demonstrable after 24 h. (B) had a notably smaller vasodilative effect, although comparison with (D) clearly showed that (B) is indeed vasodilative. (C) resulted in only a passing decrease in perfusion; this was no longer measurable when checked after 6 and 24 h. This transient inadequacy of blood flux also appeared after administration of (D). These tests show that an adrenaline additive in local anesthesia does not decrease blood flow by more than 55%, for a period of 16 min. Following these results, an adrenaline additive can be safely used for anesthetic blocks at the acra in healthy persons.
Tan, Terence; Lim, Wan-Teck; Fong, Kam-Weng; Cheah, Shie-Lee; Soong, Yoke-Lim; Ang, Mei-Kim; Ng, Quan-Sing; Tan, Daniel; Ong, Whee-Sze; Tan, Sze-Huey; Yip, Connie; Quah, Daniel; Soo, Khee-Chee; Wee, Joseph
2015-04-01
Purpose: To compare survival, tumor control, toxicities, and quality of life of patients with locally advanced nasopharyngeal carcinoma (NPC) treated with induction chemotherapy and concurrent chemo-radiation (CCRT), against CCRT alone. Patients and Methods: Patients were stratified by N stage and randomized to induction GCP (3 cycles of gemcitabine 1000 mg/m², carboplatin area under the concentration-time curve 2.5, and paclitaxel 70 mg/m² given days 1 and 8 every 21 days) followed by CCRT (radiation therapy 69.96 Gy with weekly cisplatin 40 mg/m²), or CCRT alone. The accrual of 172 was planned to detect a 15% difference in 5-year overall survival (OS) with a 5% significance level and 80% power. Results: Between September 2004 and August 2012, 180 patients were accrued, and 172 (GCP 86, control 86) were analyzed by intention to treat. There was no significant difference in OS (3-year OS 94.3% [GCP] vs 92.3% [control]; hazard ratio 1.05; 1-sided P=.494), disease-free survival (hazard ratio 0.77, 95% confidence interval 0.44-1.35, P=.362), and distant metastases–free survival (hazard ratio 0.80, 95% confidence interval 0.38-1.67, P=.547) between the 2 arms. Treatment compliance in the induction phase was good, but the relative dose intensity for concurrent cisplatin was significantly lower in the GCP arm. Overall, the GCP arm had higher rates of grades 3 and 4 leukopenia (52% vs 37%) and neutropenia (24% vs 12%), but grade 3 and 4 acute radiation toxicities were not statistically different between the 2 arms. The global quality of life scores were comparable in both arms. Conclusion: Induction chemotherapy with GCP before concurrent chemo-irradiation did not improve survival in locally advanced NPC.
Afolabi, Oluwatola; Murphy, Amanda; Chung, Bryan; Lalonde, Donald H
2013-01-01
BACKGROUND: The acidity of lidocaine preparations is believed to contribute to the pain of local anesthetic injection. OBJECTIVE: To investigate the effect of buffering lidocaine on the pain of injection and duration of anesthetic effect. METHODS: A double-blind, randomized trial involving 44 healthy volunteers was conducted. The upper lip was injected with a solution of: lidocaine 1% (Xylocaine, AstraZeneca, Canada, Inc) with epinephrine; and lidocaine 1% with epinephrine and 8.4% sodium bicarbonate. Volunteers reported pain of injection and duration of anesthetic effect. RESULTS: Twenty-six participants found the unbuffered solution to be more painful. Fifteen participants found the buffered solution to be more painful; the difference was not statistically significant. Twenty-one volunteers reported duration of anesthetic effect. The buffered solution provided longer anesthetic effect than the unbuffered solution (P=0.004). CONCLUSION: Although buffering increased the duration of lidocaine’s anesthetic effect in this particular model, a decrease in the pain of the injection was not demonstrated, likely due to limitations of the study. PMID:24497759
Meechan, J. G.; Day, P. F.
2002-01-01
The authors report a clinical trial designed to compare the discomfort produced by plain and epinephrine-containing lidocaine solutions during local anesthesia in the maxilla. Twenty-four healthy volunteers were recruited; each received buccal and palatal infiltrations on each side of the maxilla in the premolar region. The solutions were 2% lidocaine and 2% lidocaine with 1:80,000 epinephrine. Allocation to side was randomized and operator and volunteer were blinded to the identity of the solutions. Volunteers recorded injection discomfort on a 100-mm visual analogue scale (VAS). Volunteers were included in the trial if a score of at least 30 mm was recorded for at least 1 of the matched pair of injections. Differences between treatments were measured using Student's paired t test. Twelve volunteers recorded a VAS score of at least 30 mm for 1 or both buccal injections, and 17 volunteers reached this score for palatal injections. Buccal injection pain was less when the plain solution was used (P = .04) and was not influenced by the order of the injection. Palatal injection discomfort did not differ between the solutions; however, the second palatal injection was more uncomfortable than the first palatal injection (P = .046). These results suggest that plain lidocaine produces less discomfort than lidocaine with epinephrine when administered into the maxillary premolar buccal sulcus in individuals who report moderate pain during this injection. Palatal injection discomfort does not differ between these solutions. PMID:15384291
Han, Yu; Sheng, Ke; Su, Meilan; Yang, Nan; Wan, Dong
2017-01-01
Background Previous studies reported that mild hypothermia therapy (MHT) could significantly improve the clinical outcomes for patients with hypertensive intracerebral hemorrhage (HICH). Therefore, this meta-analysis was conducted to systematically assess whether the addition of local MHT (LMHT) could significantly improve the efficacy of minimally invasive surgery (MIS) in treating HICH. Methods Randomized clinical trials on the combined application of MIS and LMHT (MIS+LMHT) vs MIS alone for treating HICH were searched up to September 2016 in databases. Response rate and mortality rate were the primary outcomes, and the neurologic function and Barthel index were the secondary outcomes. Side effects were also analyzed. Results In total, 28 studies comprising 2,325 patients were included to compare the efficacy of MIS+LMHT to MIS alone. The therapeutic effects of MIS+LMHT were significantly better than MIS alone. The pooled odds ratio of response rate and mortality rate was 2.68 (95% confidence interval [CI]=2.22–3.24) and 0.43 (95% CI=0.32–0.57), respectively. In addition, the MIS+LMHT led to a significantly better improvement in the neurologic function and activities of daily living. The incidence of pneumonia was similar between the two treatment methods. Conclusion These results indicated that compared to MIS alone, the MIS+LMHT could be more effective for the acute treatment of patients with HICH. This treatment modality should be further explored and optimized. PMID:28096671
Bachmann, Talis; Luiga, Iiris; Põder, Endel
2004-12-01
The forward masking of faces by spatially quantized masking images was studied. Masks were used in order to exert different types of degrading effects on the early representations in facial information processing. Three types of source images for masks were used: Same-face images (with regard to targets), different-face images, and random Gaussian noise that was spectrally similar to facial images. They were all spatially quantized over the same range of quantization values. Same-face masks had virtually no masking effect at any of the quantization values. Different-face masks had strong masking effects only with fine-scale quantization, but led to the same efficiency of recognition as in the same-face mask condition with the coarsest quantization. Moreover, compared with the noise-mask condition, coarsely quantized different-face masks led to a relatively facilitated level of recognition efficiency. The masking effect of the noise mask did not vary significantly with the coarseness of quantization. The results supported neither a local feature processing account, nor a generalized spatial-frequency processing account, but were consistent with the microgenetic configuration-processing theory of face recognition. Also, the suitability of a spatial quantization technique for image configuration processing research has been demonstrated.
Marques, José; Pié-Sánchez, Jordi; Valmaseda-Castellón, Eduard; Gay-Escoda, Cosme
2014-01-01
Objectives: The aim of this study is to compare the analgesic and anti-inflammatory effects of the local postoperative administration of a single 12-mg dose of betamethasone after the surgical removal of impacted lower third molars. Study Design: A split-mouth, triple-blind, randomized, placebo-controlled clinical trial of 25 patients requiring the surgical removal of symmetrical lower third molars was performed. In the experimental side, a 12-mg dose of betamethasone was administered submucosally after the surgical procedure, while in the control side a placebo (sterile saline solution) was injected in the same area. To assess postoperative pain, visual analogue scales and the consumption of rescue analgesic were used. The facial swelling and trismus were evaluated by measuring facial reference distances and maximum mouth opening. Results: There were no significant differences between the two study groups regarding postoperative pain, facial swelling and trismus. Conclusions: The injection of a single dose of betamethasone does not seem to reduce pain, facial swelling and trismus after impacted lower third molar removal when compared to placebo. Key words: Third molar extraction, corticosteroids, betamethasone. PMID:24121915
Differential evolution Markov chain with snooker updater and fewer chains
Vrugt, Jasper A; Ter Braak, Cajo J F
2008-01-01
Differential Evolution Markov Chain (DE-MC) is an adaptive MCMC algorithm, in which multiple chains are run in parallel. Standard DE-MC requires at least N=2d chains to be run in parallel, where d is the dimensionality of the posterior. This paper extends DE-MC with a snooker updater and shows by simulation and real examples that DE-MC can work for d up to 50-100 with fewer parallel chains (e.g. N=3) by exploiting information from their past, generating jumps from differences of pairs of past states. This approach extends the practical applicability of DE-MC and is shown to be about 5-26 times more efficient than the optimal Normal random walk Metropolis sampler for the 97.5% point of a variable from a 25-50 dimensional Student t₃ distribution. In a nonlinear mixed effects model example the approach outperformed a block-updater geared to the specific features of the model.
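A minimal sketch may make the DE-MC jump mechanism concrete: each chain proposes a move along the difference of two other randomly chosen chains. This is only the basic parallel-direction update (the snooker move is omitted), and the toy target and all names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def de_mc_generation(chains, log_post, eps=1e-6):
    """One DE-MC generation: each chain proposes a jump along the
    difference of two other randomly chosen chains, with a small
    random perturbation, accepted by a Metropolis test."""
    n, d = chains.shape
    gamma = 2.38 / np.sqrt(2 * d)          # standard DE-MC scaling factor
    new = chains.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
                         + eps * rng.standard_normal(d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            new[i] = prop
    return new

# Toy target: a standard normal in 2 dimensions.
log_post = lambda x: -0.5 * float(x @ x)
chains = rng.standard_normal((6, 2))
for _ in range(300):
    chains = de_mc_generation(chains, log_post)
```

After enough generations the population of chains is approximately distributed according to the target; the paper's contribution is that the snooker variant lets N be much smaller than 2d.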
Volatility: A hidden Markov process in financial time series
NASA Astrophysics Data System (ADS)
Eisler, Zoltán; Perelló, Josep; Masoliver, Jaume
2007-11-01
Volatility characterizes the amplitude of price return fluctuations. It is a central quantity in finance, closely related to the risk of holding a certain asset. Despite its popularity on trading floors, volatility is unobservable and only the price is known. Diffusion theory has many common points with the research on volatility, the key of the analogy being that volatility is a time-dependent diffusion coefficient of the random walk for the price return. We present a formal procedure to extract volatility from price data by assuming that it is described by a hidden Markov process which together with the price forms a two-dimensional diffusion process. We derive a maximum-likelihood estimate of the volatility path valid for a wide class of two-dimensional diffusion processes. The choice of the exponential Ornstein-Uhlenbeck (expOU) stochastic volatility model performs remarkably well in inferring the hidden state of volatility. The formalism is applied to the Dow Jones index. The main results are that (i) the distribution of estimated volatility is lognormal, which is consistent with the expOU model, (ii) the estimated volatility is related to trading volume by a power law of the form σ ∝ V^0.55, and (iii) future returns are proportional to the current volatility, which suggests some degree of predictability for the size of future returns.
MARKOV CHAIN MONTE CARLO POSTERIOR SAMPLING WITH THE HAMILTONIAN METHOD
K. HANSON
2001-02-01
The Markov Chain Monte Carlo technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy φ, where φ is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of φ and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of φ, is proposed to measure the convergence of the MCMC sequence.
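The alternation the abstract describes (draw a momentum vector, follow an approximately constant-H trajectory, accept or reject) can be sketched with a standard leapfrog integrator. The step size, trajectory length, and toy Gaussian target below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_sample(x, phi, grad_phi, step=0.1, n_leap=20):
    """One Hamiltonian MCMC update: draw momenta, integrate Hamiltonian
    dynamics with leapfrog steps, then apply a Metropolis test on the
    change in H = phi(x) + p.p/2."""
    p = rng.standard_normal(x.size)
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_phi(x_new)          # initial half kick
    for _ in range(n_leap):
        x_new += step * p_new
        p_new -= step * grad_phi(x_new)
    p_new += 0.5 * step * grad_phi(x_new)          # undo the extra half kick
    h_old = phi(x) + 0.5 * p @ p
    h_new = phi(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.random()) < h_old - h_new else x

phi = lambda x: 0.5 * x @ x        # phi = -log of a standard normal target
grad_phi = lambda x: x
x = np.zeros(5)
draws = []
for _ in range(500):
    x = hmc_sample(x, phi, grad_phi)
    draws.append(x.copy())
samples = np.array(draws)
```

Because the trajectory conserves H approximately, acceptance stays high even for distant proposals, which is the source of the dimension-robust efficiency the paper reports.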
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. In order to interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem aims at estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is closely tied to the computational cost of the forward model. Although fast algorithms have been recently developed for computing first arrival traveltimes of seismic waves, a complete exploration of the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC, and is therefore particularly suited for Bayesian inversion. The exchanges between the chains allow a precise sampling of the posterior distribution.
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
NASA Technical Reports Server (NTRS)
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
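The key property of the first-order Gauss-Markov process the abstract invokes is that, unlike a random walk, its variance saturates instead of growing without bound. A minimal simulation sketch (the time constant, noise level, and function names below are illustrative assumptions, not mission parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_fogm(tau, sigma, dt, n):
    """Simulate a first-order Gauss-Markov process x' = -x/tau + noise.
    Its stationary variance is sigma**2, so it cannot overflow the way
    a random-walk clock model's covariance can."""
    phi = np.exp(-dt / tau)                    # state transition factor
    q = sigma**2 * (1.0 - phi**2)              # discrete process-noise variance
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()       # start in the stationary state
    for k in range(1, n):
        x[k] = phi * x[k - 1] + np.sqrt(q) * rng.standard_normal()
    return x

x = simulate_fogm(tau=100.0, sigma=1.0, dt=1.0, n=20000)
```

Over horizons short compared with tau the process behaves like a random walk (phi ≈ 1), which is the sense in which such a model approximates the existing clock models while remaining stable.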
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte-Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.
Stathopoulos, Vassilios; Girolami, Mark A
2013-02-13
Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding; thus its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient, and the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
Transition-Independent Decentralized Markov Decision Processes
NASA Technical Reports Server (NTRS)
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)
2003-01-01
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
Markov state models of biomolecular conformational dynamics
Chodera, John D.; Noé, Frank
2014-01-01
It has recently become practical to construct Markov state models (MSMs) that reproduce the long-time statistical conformational dynamics of biomolecules using data from molecular dynamics simulations. MSMs can predict both stationary and kinetic quantities on long timescales (e.g. milliseconds) using a set of atomistic molecular dynamics simulations that are individually much shorter, thus addressing the well-known sampling problem in molecular dynamics simulation. In addition to providing predictive quantitative models, MSMs greatly facilitate both the extraction of insight into biomolecular mechanism (such as folding and functional dynamics) and quantitative comparison with single-molecule and ensemble kinetics experiments. A variety of methodological advances and software packages now bring the construction of these models closer to routine practice. Here, we review recent progress in this field, considering theoretical and methodological advances, new software tools, and recent applications of these approaches in several domains of biochemistry and biophysics, commenting on remaining challenges. PMID:24836551
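The core construction step behind an MSM, as this review describes it, is counting transitions between discretized conformational states at a chosen lag time and row-normalizing. A minimal non-reversible sketch (the toy trajectory and function names are illustrative assumptions; production tools add reversibility constraints and uncertainty estimates):

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag=1):
    """Build a Markov state model transition matrix: count transitions
    at the given lag time across discrete trajectories, then
    row-normalize the count matrix."""
    C = np.zeros((n_states, n_states))
    for traj in dtrajs:
        for t in range(len(traj) - lag):
            C[traj[t], traj[t + lag]] += 1
    T = C / C.sum(axis=1, keepdims=True)
    return T

# Toy two-state "conformational" trajectory (0 and 1 label the states).
dtrajs = [[0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]]
T = estimate_msm(dtrajs, n_states=2)
```

Long-timescale predictions then come from powers of T, which is how many short simulations substitute for one long one.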
Probabilistic Resilience in Hidden Markov Models
NASA Astrophysics Data System (ADS)
Panerati, Jacopo; Beltrame, Giovanni; Schwind, Nicolas; Zeltner, Stefan; Inoue, Katsumi
2016-05-01
Originally defined in the context of ecological systems and environmental sciences, resilience has grown to be a property of major interest for the design and analysis of many other complex systems: resilient networks and robotic systems offer the desirable capability of absorbing disruption and transforming in response to external shocks, while still providing the services they were designed for. Starting from an existing formalization of resilience for constraint-based systems, we develop a probabilistic framework based on hidden Markov models. In doing so, we introduce two new important features: stochastic evolution and partial observability. Using our framework, we formalize a methodology for the evaluation of probabilities associated with generic properties, describe an efficient algorithm for the computation of its essential inference step, and show that its complexity is comparable to other state-of-the-art inference algorithms.
Markov state models and molecular alchemy
NASA Astrophysics Data System (ADS)
Schütte, Christof; Nielsen, Adam; Weber, Marcus
2015-01-01
In recent years, Markov state models (MSMs) have attracted a considerable amount of attention with regard to modelling conformation changes and associated function of biomolecular systems. They have been used successfully, e.g. for peptides including time-resolved spectroscopic experiments, protein function and protein folding, DNA and RNA, and ligand-receptor interaction in drug design and more complicated multivalent scenarios. In this article, a novel reweighting scheme is introduced that allows one to construct an MSM for a given molecular system out of an MSM for a similar system. This permits studying how molecular properties on long timescales differ between similar molecular systems without performing full molecular dynamics simulations for each system under consideration. The performance of the reweighting scheme is illustrated for simple test cases, including one where the main wells of the respective energy landscapes are located differently and an alchemical transformation of butane to pentane where the dimension of the state space is changed.
Multivariate Markov chain modeling for stock markets
NASA Astrophysics Data System (ADS)
Maskawa, Jun-ichi
2003-06-01
We study a multivariate Markov chain model as a stochastic model of the price changes of portfolios in the framework of the mean field approximation. The time series of price changes are coded into sequences of up and down spins according to their signs. We start with the discussion for small portfolios consisting of two stock issues. The generalization of our model to portfolios of arbitrary size is constructed by a recurrence relation. The resultant form of the joint probability of the stationary state coincides with the Gibbs measure assigned to each configuration of a spin glass model. Through the analysis of actual portfolios, it has been shown that the synchronization of the direction of the price changes is well described by the model.
Growth and Dissolution of Macromolecular Markov Chains
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2016-07-01
The kinetics and thermodynamics of free living copolymerization are studied for processes with rates depending on k monomeric units of the macromolecular chain behind the unit that is attached or detached. In this case, the sequence of monomeric units in the growing copolymer is a kth-order Markov chain. In the regime of steady growth, the statistical properties of the sequence are determined analytically in terms of the attachment and detachment rates. In this way, the mean growth velocity as well as the thermodynamic entropy production and the sequence disorder can be calculated systematically. These different properties are also investigated in the regime of depolymerization where the macromolecular chain is dissolved by the surrounding solution. In this regime, the entropy production is shown to satisfy Landauer's principle.
Anatomy Ontology Matching Using Markov Logic Networks
Li, Chunhua; Zhao, Pengpeng; Wu, Jian; Cui, Zhiming
2016-01-01
The anatomy of model species is described in ontologies, which are used to standardize the annotations of experimental data, such as gene expression patterns. To compare such data between species, we need to establish relationships between ontologies describing different species. Ontology matching is a way of finding semantic correspondences between entities of different ontologies. Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. We combine several different matching strategies through first-order logic formulas according to the structure of anatomy ontologies. Experiments on the adult mouse anatomy and the human anatomy have demonstrated the effectiveness of the proposed approach in terms of the quality of the resulting alignment. PMID:27382498
Estimation and uncertainty of reversible Markov models
NASA Astrophysics Data System (ADS)
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-01
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software — http://pyemma.org — as of version 2.0.
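A reversible transition matrix satisfies detailed balance, pi_i T_ij = pi_j T_ji. One standard way to obtain the maximum-likelihood reversible estimate from a count matrix is a fixed-point iteration on the symmetric flux matrix; the sketch below is that generic estimator under a toy count matrix, not the PyEMMA implementation, which is more sophisticated.

```python
import numpy as np

def reversible_estimate(C, n_iter=1000):
    """Fixed-point iteration for a maximum-likelihood reversible
    transition matrix given a transition-count matrix C. The iterate
    X stays symmetric, so detailed balance holds by construction."""
    C = np.asarray(C, dtype=float)
    c_i = C.sum(axis=1)
    X = (C + C.T) / 2.0                      # symmetric initial guess
    for _ in range(n_iter):
        r = c_i / X.sum(axis=1)
        X = (C + C.T) / (r + r[:, None])     # x_ij = (c_ij + c_ji)/(r_i + r_j)
    pi = X.sum(axis=1) / X.sum()             # stationary distribution
    T = X / X.sum(axis=1, keepdims=True)
    return T, pi

# Toy count matrix for a metastable three-state process.
C = np.array([[90., 10.,  0.],
              [ 9., 80., 11.],
              [ 0., 12., 88.]])
T, pi = reversible_estimate(C)
```

Because X is symmetric, pi is exactly stationary for T and detailed balance is exact, which is the property the paper's analysis machinery relies on.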
Application of Gray Markov SCGM(1,1)c Model to Prediction of Accidents Deaths in Coal Mining.
Lan, Jian-Yi; Zhou, Ying
2014-01-01
The prediction of mine accidents is the basis of mine safety assessment and decision making. Gray prediction is suitable for systems with few data, short time series, and little fluctuation, while Markov chain theory is suitable for forecasting stochastically fluctuating dynamic processes. Analyzing the human error causes of coal mine accidents and combining the advantages of both Gray prediction and Markov theory, an amended Gray Markov SCGM(1,1)c model is proposed. The gray SCGM(1,1)c model is applied to imitate the development tendency of mine safety accidents, and the amended model is adopted to improve prediction accuracy, while Markov prediction is used to predict the fluctuation along the tendency. Finally, the new model is applied to forecast coal mine accident deaths in China from 1990 to 2010, and deaths for 2011-2014 were predicted. The results show that the new model not only captures the trend of the mine human error accident death toll but also overcomes the random fluctuation of the data affecting precision. It possesses strong engineering applicability.
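The base model that the SCGM(1,1)c variant amends is the classical grey GM(1,1) forecaster: accumulate the series, fit a first-order difference equation by least squares, and forecast from the resulting exponential. The sketch below is that classical model with a hypothetical death-toll series, not the authors' amended Markov-corrected version.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical grey GM(1,1) forecast: fit x0[k] = -a*z[k] + b on the
    accumulated series, then forecast from the fitted exponential."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat

series = [100.0, 92.0, 85.0, 79.0, 73.5]   # hypothetical yearly death tolls
pred = gm11_forecast(series, steps=2)      # fitted values plus 2 forecasts
```

The Markov step in the paper then classifies the residuals of such a fit into states and uses a transition matrix to correct the forecast for their fluctuation.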
Markov Task Network: A Framework for Service Composition under Uncertainty in Cyber-Physical Systems
Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter
2016-01-01
In novel collaborative systems, cooperative entities collaborate services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the need for the dynamic formation of collaborative functionality given high-level system goals has become practical. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. This models the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. In dealing with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both computational and inferential loads in task decomposition. From the results of our experiments, high-precision service composition under uncertainty can be achieved using this approach. PMID:27657084
Xu, Long; Zhao, Hua; Xu, Caixia; Zhang, Siqi; Zhang, Jingwen
2014-08-14
Multi-mode random lasing action and weak localization of light were observed and studied in normally transparent but disordered Nd³⁺-doped (Pb,La)(Zr,Ti)O₃ ceramics. A noticeable localized zone and a multi-photon process were observed under strong pumping power. A tentative phenomenological physical picture was proposed, taking diffusive processes, photo-induced scattering, and optical energy storage as the dominant factors in elucidating the observed weak localization of light. Both the decreased transmittance (increased reflectivity) of light and the observed long-lasting fading-off phenomenon supported the proposed physical picture.
Alexander, Abraham; Crook, Juanita; Jones, Stuart; Malone, Shawn; Bowen, Julie; Truong, Pauline; Pai, Howard; Ludgate, Charles
2010-01-15
Purpose: To ascertain whether biochemical response to neoadjuvant androgen-deprivation therapy (ADT) before radiotherapy (RT), rather than duration, is the critical determinant of benefit in the multimodal treatment of localized prostate cancer, by comparing outcomes of subjects from the Canadian multicenter 3- vs 8-month trial with a pre-RT, post-hormone PSA (PRPH-PSA) <=0.1 ng/ml vs those >0.1 ng/ml. Methods and Materials: From 1995 to 2001, 378 men with localized prostate cancer were randomized to 3 or 8 months of neoadjuvant ADT before RT. On univariate analysis, survival indices were compared between those with a PRPH-PSA <=0.1 ng/ml vs >0.1 ng/ml, for all patients and subgroups, including treatment arm, risk group, and Gleason score. Multivariate analysis identified independent predictors of outcome. Results: Biochemical disease-free survival (bDFS) was significantly higher for those with a PRPH-PSA <=0.1 ng/ml compared with PRPH-PSA >0.1 ng/ml (55.3% vs 49.4%, p = 0.014). No difference in survival indices was observed between treatment arms. There was no difference in bDFS between patients in the 3- and 8-month arms with a PRPH-PSA <=0.1 ng/ml, nor between those with a PRPH-PSA >0.1 ng/ml. bDFS was significantly higher for high-risk patients with PRPH-PSA <=0.1 ng/ml compared with PRPH-PSA >0.1 ng/ml (57.0% vs 29.4%, p = 0.017). Multivariate analysis identified PRPH-PSA (p = 0.041), Gleason score (p = 0.001), initial PSA (p = 0.025), and T-stage (p = 0.003), not ADT duration, as independent predictors of outcome. Conclusion: Biochemical response to neoadjuvant ADT before RT, not duration, appears to be the critical determinant of benefit in the setting of combined therapy. Individually tailored ADT duration based on PRPH-PSA would maximize therapeutic gain, while minimizing the duration of ADT and its related toxicities.
Performability analysis using semi-Markov reward processes
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Marie, Raymond A.; Sericola, Bruno; Trivedi, Kishor S.
1990-01-01
Beaudry (1978) proposed a simple method of computing the distribution of performability in a Markov reward process. Two extensions of Beaudry's approach are presented. The method is generalized to a semi-Markov reward process by removing the restriction requiring the association of zero reward to absorbing states only. The algorithm proceeds by replacing zero-reward nonabsorbing states by a probabilistic switch; it is therefore related to the elimination of vanishing states from the reachability graph of a generalized stochastic Petri net and to the elimination of fast transient states in a decomposition approach to stiff Markov chains. The use of the approach is illustrated with three applications.
NASA Astrophysics Data System (ADS)
Feng, Jie; Ding, Ruiqiang; Li, Jianping; Liu, Deqiang
2016-09-01
The breeding method has been widely used to generate ensemble perturbations in ensemble forecasting due to its simple concept and low computational cost. This method produces the fastest growing perturbation modes to catch the growing components in analysis errors. However, the bred vectors (BVs) are evolved on the same dynamical flow, which may increase the dependence of perturbations. In contrast, the nonlinear local Lyapunov vector (NLLV) scheme generates flow-dependent perturbations as in the breeding method, but regularly conducts the Gram-Schmidt reorthonormalization processes on the perturbations. The resulting NLLVs span the fast-growing perturbation subspace efficiently, and thus may grasp more components in analysis errors than the BVs. In this paper, the NLLVs are employed to generate initial ensemble perturbations in a barotropic quasi-geostrophic model. The performances of the ensemble forecasts of the NLLV method are systematically compared to those of the random perturbation (RP) technique, and the BV method, as well as its improved version—the ensemble transform Kalman filter (ETKF) method. The results demonstrate that the RP technique has the worst performance in ensemble forecasts, which indicates the importance of a flow-dependent initialization scheme. The ensemble perturbation subspaces of the NLLV and ETKF methods are preliminarily shown to catch similar components of analysis errors, which exceed that of the BVs. However, the NLLV scheme demonstrates slightly higher ensemble forecast skill than the ETKF scheme. In addition, the NLLV scheme involves a significantly simpler algorithm and less computation time than the ETKF method, and both demonstrate better ensemble forecast skill than the BV scheme.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Bayesian Hidden Markov Modeling of Array CGH Data.
Guha, Subharup; Li, Yi; Neuberg, Donna
2008-06-01
Genomic alterations have been linked to the development and progression of cancer. The technique of comparative genomic hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about the number of copies in DNA. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about the genomic alterations from array CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in the number of copies based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Because the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme, and breast cancer are analyzed, and comparisons are made with some widely used algorithms to illustrate the reliability and success of the technique.
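The idea of segmenting intensity ratios into hidden copy-number states can be illustrated with a plain Viterbi decoder over a three-state HMM. This is a simplified point-estimate stand-in for the paper's Bayesian Metropolis-within-Gibbs inference; the emission means, transition probabilities, and toy observations are illustrative assumptions.

```python
import numpy as np

def viterbi(obs, log_A, log_pi, log_lik):
    """Most probable hidden state path by dynamic programming, given a
    log transition matrix, log initial distribution, and a function
    returning per-state emission log-likelihoods for one observation."""
    n, k = len(obs), log_pi.size
    delta = np.empty((n, k))
    psi = np.zeros((n, k), dtype=int)
    delta[0] = log_pi + log_lik(obs[0])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_lik(obs[t])
    path = np.empty(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

# States 0/1/2 = copy-number loss/neutral/gain; Gaussian log-ratio
# emissions with assumed means and a "sticky" transition matrix.
means, sd = np.array([-0.5, 0.0, 0.5]), 0.15
log_lik = lambda y: -0.5 * ((y - means) / sd) ** 2
log_A = np.log(np.full((3, 3), 0.05) + np.eye(3) * 0.85)
log_pi = np.log(np.ones(3) / 3.0)
obs = np.array([0.0, 0.05, -0.02, 0.48, 0.55, 0.47, 0.01, -0.52, -0.45])
path = viterbi(obs, log_A, log_pi, log_lik)
```

The sticky diagonal encodes the same dependence assumption the paper's HMM exploits: adjacent clones tend to share copy-number state, so isolated noisy ratios do not flip the call.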
NASA Astrophysics Data System (ADS)
Malyshev, V. A.
1998-04-01
Contents:
§ 1. Definitions: 1.1. Grammars; 1.2. Random grammars and L-systems; 1.3. Semigroup representations.
§ 2. Infinite string dynamics: 2.1. Cluster expansion; 2.2. Cluster dynamics; 2.3. Local observer.
§ 3. Large time behaviour, small perturbations: 3.1. Invariant measures; 3.2. Classification.
§ 4. Large time behaviour, context-free case: 4.1. Invariant measures for grammars; 4.2. L-systems; 4.3. Fractal correlation functions; 4.4. Measures on languages.
Bibliography
NASA Astrophysics Data System (ADS)
Numazawa, Satoshi; Smith, Roger
2011-10-01
Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
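The event-selection core of a lattice KMC model like the one described can be sketched as a rejection-free (Gillespie-type) step: an event is chosen with probability proportional to its Arrhenius rate, and time advances by an exponential waiting time. The attempt frequency, temperature, and barrier values below are illustrative, not the paper's:

```python
import math, random

KB_T = 0.025  # eV, roughly room temperature (illustrative)
NU0 = 1e13    # attempt frequency in 1/s (illustrative)

def rate(barrier_ev):
    # thermally activated (Arrhenius) rate for a given barrier
    return NU0 * math.exp(-barrier_ev / KB_T)

def kmc_step(barriers, rng=random):
    # rejection-free selection: pick an event with probability ~ its rate
    rates = [rate(b) for b in barriers]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return chosen, dt

random.seed(0)
# two low barriers (fast thermal fluctuations) and one high barrier (rare event)
counts = [0, 0, 0]
for _ in range(1000):
    i, _dt = kmc_step([0.1, 0.1, 0.5])
    counts[i] += 1
```

The hierarchical scheme in the abstract accelerates exactly this loop by accepting the low-barrier fluctuations freely instead of sampling each one.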
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
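The locality principle the article appeals to can be illustrated with two traversals of the same matrix. Both compute the same sum, but the row-major order touches elements in the order they are laid out (row by row), which is the cache-friendly pattern; the column-major order strides across rows with poor spatial locality:

```python
N = 256
# a matrix stored as a list of rows, holding the values 0 .. N*N-1
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:          # consecutive elements of a row are adjacent
        for x in row:
            total += x
    return total

def sum_col_major(m):
    total = 0
    for j in range(N):     # strides across rows: poor spatial locality
        for i in range(N):
            total += m[i][j]
    return total

r = sum_row_major(matrix)
c = sum_col_major(matrix)
```

In C or Fortran the timing gap between the two orders is dramatic for large arrays; the point here is only that the result is identical while the memory access pattern is not.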
A novel image encryption algorithm based on chaos maps with Markov properties
NASA Astrophysics Data System (ADS)
Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang
2015-02-01
In order to construct a high-complexity, secure, and low-cost image encryption algorithm, a class of chaos with Markov properties was researched and such an algorithm is proposed. This kind of chaos has higher complexity than the Logistic and Tent maps while keeping uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space; it has better performance than the original lattice. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. Experiments show that the key stream can pass the SP800-22 test. The novel image encryption can resist CPA, CCA, and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation of adjacent pixels can also be eliminated. Compared with an algorithm based on the Logistic map, it has higher complexity and better uniformity, and is nearer to a true random number. It is also efficient to implement, which shows its value for common use.
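The keystream mechanics can be illustrated generically. Note that the sketch below substitutes the Logistic map, which the paper uses only as a baseline, for the Markov-property chaos actually proposed; it shows the XOR-keystream idea and key sensitivity, not the paper's map or its coupled map lattice:

```python
def keystream(x0, n, burn_in=100):
    # iterate the Logistic map x -> 4x(1-x), discarding the transient,
    # then quantize each state to one byte of keystream
    x = x0
    for _ in range(burn_in):
        x = 4.0 * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return out

def xor_cipher(data, x0):
    # XOR with the keystream; the same call both encrypts and decrypts
    ks = keystream(x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"pixel values of an image row"
cipher = xor_cipher(plain, 0.3141592)
```

Because the map is chaotic, a key differing in the seventh decimal place yields an entirely different keystream after the burn-in, which is the sensitivity property the abstract claims.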
Benchmark solutions for transport in d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Hugot, François-Xavier; Malvagi, Fausto; Mazzolo, Alain; Zoia, Andrea
2017-03-01
Linear particle transport in stochastic media is key to such relevant applications as neutron diffusion in randomly mixed immiscible materials, light propagation through engineered optical materials, and inertial confinement fusion, only to name a few. We extend the pioneering work by Adams, Larsen and Pomraning [1] (recently revisited by Brantley [2]) by considering a series of benchmark configurations for mono-energetic and isotropic transport through Markov binary mixtures in dimension d. The stochastic media are generated by resorting to Poisson random tessellations in 1d slab, 2d extruded, and full 3d geometry. For each realization, particle transport is performed by Monte Carlo simulation. The distributions of the transmission and reflection coefficients on the free surfaces of the geometry are subsequently estimated, and the average values over the ensemble of realizations are computed. Reference solutions for the benchmark have never been provided before for two- and three-dimensional Poisson tessellations, and the results presented in this paper might thus be useful in order to validate fast but approximated models for particle transport in Markov stochastic media, such as the celebrated Chord Length Sampling algorithm.
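A 1d slab realization of such a Markov binary mixture can be sketched by alternating two materials with exponentially distributed chord lengths, which is also the statistical picture behind Chord Length Sampling. The mean chord lengths below are illustrative; the volume fraction of each material follows from the ratio of mean chords:

```python
import random

def sample_slab(length, mean_chord=(1.0, 3.0), rng=random):
    # build one realization: alternating (material, segment length) pairs
    segments = []
    mat = rng.randint(0, 1)     # material at the left boundary
    x = 0.0
    while x < length:
        ell = rng.expovariate(1.0 / mean_chord[mat])  # exponential chord
        ell = min(ell, length - x)                    # clip at the boundary
        segments.append((mat, ell))
        x += ell
        mat = 1 - mat           # binary mixture: switch material
    return segments

random.seed(1)
segs = sample_slab(10000.0)
frac0 = sum(l for m, l in segs if m == 0) / 10000.0
# theory: volume fraction of material 0 is 1.0 / (1.0 + 3.0) = 0.25
```

A transport benchmark would then trace particles through many such realizations and average the transmission and reflection estimates over the ensemble.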
Reversible jump Markov chain Monte Carlo for Bayesian deconvolution of point sources
NASA Astrophysics Data System (ADS)
Stawinski, Guillaume; Doucet, Arnaud; Duvaut, Patrick
1998-09-01
In this article, we address the problem of Bayesian deconvolution of point sources in nuclear imaging under the assumption of Poissonian statistics. The observed image is the result of the convolution by a known point spread function of an unknown number of point sources with unknown parameters. To detect the number of sources and estimate their parameters we follow a Bayesian approach. However, instead of using a classical low-level prior model based on Markov random fields, we propose a high-level model which describes the picture as a list of its constituent objects, rather than as a list of pixels on which the data are recorded. More precisely, each source is assumed to have a circular Gaussian shape and we set a prior distribution on the number of sources, on their locations and on the amplitude and width deviation of the Gaussian shape. This high-level model has far fewer parameters than a Markov random field model as only a small number of sources are usually present. The Bayesian model being defined, all inference is based on the resulting posterior distribution. This distribution does not admit any closed-form analytical expression. We present here a Reversible Jump MCMC algorithm for its estimation. This algorithm is tested on both synthetic and real data.
Fraunhofer diffraction by a random screen.
Malinka, Aleksey V
2011-08-01
The stochastic approach is applied to the problem of Fraunhofer diffraction by a random screen. The diffraction pattern is expressed through the random chord distribution. Two cases are considered: the sparse ensemble, where the interference between different obstacles can be neglected, and the densely packed ensemble, where this interference is to be taken into account. The solution is found for the general case and the analytical formulas are obtained for the Switzer model of a random screen, i.e., for the case of Markov statistics.
Non-Markov Ito processes with 1-state memory
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2010-08-01
A Markov process, by definition, cannot depend on any previous state other than the last observed state. An Ito process implies the Fokker-Planck and Kolmogorov backward time partial differential eqns. for transition densities, which in turn imply the Chapman-Kolmogorov eqn., but without requiring the Markov condition. We present a class of Ito processes superficially resembling Markov processes, but with 1-state memory. In finance, such processes would obey the efficient market hypothesis up through the level of pair correlations. These stochastic processes have been mislabeled in recent literature as 'nonlinear Markov processes'. Inspired by Doob and Feller, who pointed out that the Chapman-Kolmogorov eqn. is not restricted to Markov processes, we exhibit a Gaussian Ito transition density with 1-state memory in the drift coefficient that satisfies both of Kolmogorov's partial differential eqns. and also the Chapman-Kolmogorov eqn. In addition, we show that three of the examples from McKean's seminal 1966 paper are also non-Markov Ito processes. Last, we show that the transition density of the generalized Black-Scholes type partial differential eqn. describes a martingale, and satisfies the Chapman-Kolmogorov eqn. This leads to the shortest-known proof that the Green function of the Black-Scholes eqn. with variable diffusion coefficient provides the so-called martingale measure of option pricing.
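For reference, the Chapman-Kolmogorov equation the abstract invokes reads, for a transition density $p_2$,

```latex
p_2(x_3, t_3 \mid x_1, t_1)
  = \int p_2(x_3, t_3 \mid x_2, t_2)\, p_2(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2,
  \qquad t_1 < t_2 < t_3 .
```

As Doob and Feller observed, this identity constrains but does not imply the Markov property: a transition density generated by an Ito process whose drift depends on one earlier state $(x_0, t_0)$, schematically $\mathrm{d}x = \mu(x, t; x_0, t_0)\,\mathrm{d}t + \sigma(x, t)\,\mathrm{d}B(t)$, can satisfy both Kolmogorov equations and the identity above while retaining 1-state memory. (The schematic drift is a generic illustration consistent with the abstract, not a formula from the paper.)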
Gandhi, Ajeet Kumar; Sharma, Daya Nand; Rath, Goura Kisor; Julka, Pramod Kumar; Subramani, Vellaiyan; Sharma, Seema; Manigandan, Durai; Laviraj, M.A.; Kumar, Sunesh; Thulkar, Sanjay
2013-11-01
Purpose: To evaluate the toxicity and clinical outcome in patients with locally advanced cervical cancer (LACC) treated with whole pelvic conventional radiation therapy (WP-CRT) versus intensity modulated radiation therapy (WP-IMRT). Methods and Materials: Between January 2010 and January 2012, 44 patients with International Federation of Gynecology and Obstetrics (FIGO 2009) stage IIB-IIIB squamous cell carcinoma of the cervix were randomized to receive 50.4 Gy in 28 fractions delivered via either WP-CRT or WP-IMRT with concurrent weekly cisplatin 40 mg/m². Acute toxicity was graded according to the Common Terminology Criteria for Adverse Events, version 3.0, and late toxicity was graded according to the Radiation Therapy Oncology Group system. The primary and secondary endpoints were acute gastrointestinal toxicity and disease-free survival, respectively. Results: Of 44 patients, 22 patients received WP-CRT and 22 received WP-IMRT. In the WP-CRT arm, 13 patients had stage IIB disease and 9 had stage IIIB disease; in the IMRT arm, 12 patients had stage IIB disease and 10 had stage IIIB disease. The median follow-up time in the WP-CRT arm was 21.7 months (range, 10.7-37.4 months), and in the WP-IMRT arm it was 21.6 months (range, 7.7-34.4 months). At 27 months, disease-free survival was 79.4% in the WP-CRT group versus 60% in the WP-IMRT group (P=.651), and overall survival was 76% in the WP-CRT group versus 85.7% in the WP-IMRT group (P=.645). Patients in the WP-IMRT arm experienced significantly fewer grade ≥2 acute gastrointestinal toxicities (31.8% vs 63.6%, P=.034) and grade ≥3 gastrointestinal toxicities (4.5% vs 27.3%, P=.047) than did patients receiving WP-CRT and had less chronic gastrointestinal toxicity (13.6% vs 50%, P=.011). Conclusion: WP-IMRT is associated with significantly less toxicity compared with WP-CRT and has a comparable clinical outcome. Further studies with larger sample sizes and longer follow-up times are warranted to justify
Ghadjar, Pirus; Simcock, Mathew; Studer, Gabriela; Allal, Abdelkarim S.; Ozsahin, Mahmut; Bernier, Jacques; Toepfer, Michael; Zimmermann, Frank; Betz, Michael; Glanzmann, Christoph; Aebersold, Daniel M.
2012-02-01
Purpose: To compare the long-term outcome of treatment with concomitant cisplatin and hyperfractionated radiotherapy versus treatment with hyperfractionated radiotherapy alone in patients with locally advanced head and neck cancer. Methods and Materials: From July 1994 to July 2000, a total of 224 patients with squamous cell carcinoma of the head and neck were randomized to receive either hyperfractionated radiotherapy alone (median total dose, 74.4 Gy; 1.2 Gy twice daily; 5 days per week) or the same radiotherapy combined with two cycles of cisplatin (20 mg/m² for 5 consecutive days during weeks 1 and 5). The primary endpoint was the time to any treatment failure; secondary endpoints were locoregional failure, metastatic failure, overall survival, and late toxicity assessed according to Radiation Therapy Oncology Group criteria. Results: Median follow-up was 9.5 years (range, 0.1-15.4 years). Median time to any treatment failure was not significantly different between treatment arms (hazard ratio [HR], 1.2 [95% confidence interval (CI), 0.9-1.7; p = 0.17]). Rates of locoregional failure-free survival (HR, 1.5 [95% CI, 1.1-2.1; p = 0.02]), distant metastasis-free survival (HR, 1.6 [95% CI, 1.1-2.5; p = 0.02]), and cancer-specific survival (HR, 1.6 [95% CI, 1.0-2.5; p = 0.03]) were significantly improved in the combined-treatment arm, with no difference in major late toxicity between treatment arms. However, overall survival was not significantly different (HR, 1.3 [95% CI, 0.9-1.8; p = 0.11]). Conclusions: After long-term follow-up, combined treatment with cisplatin and hyperfractionated radiotherapy maintained improved rates of locoregional control, distant metastasis-free survival, and cancer-specific survival compared to that of hyperfractionated radiotherapy alone, with no difference in major late toxicity.
Finitely approximable random sets and their evolution via differential equations
NASA Astrophysics Data System (ADS)
Ananyev, B. I.
2016-12-01
In this paper, random closed sets (RCS) in Euclidean space are considered along with their distributions and approximation. Distributions of RCS may be used for the calculation of expectation and other characteristics. Reachable sets, as functions of the initial data, and ways of approximately describing their evolution are investigated for stochastic differential equations (SDE) with initial state in some RCS. The Markov property of random reachable sets is proved in the space of closed sets. For approximate calculation, the initial RCS is replaced by a finite set on the integer multidimensional grid and a multistage Markov chain is substituted for the SDE. The Markov chain is constructed by methods of SDE numerical integration. Some examples are also given.
Markov state modeling of sliding friction.
Pellegrini, F; Landes, François P; Laio, A; Prestipino, S; Tosatti, E
2016-11-01
Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
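The basic MSM construction step, prior to any analysis of slow modes, can be sketched as counting lag-time transitions between discretized microstates and row-normalizing into a Markov transition matrix. The two-state stick-slip-like trajectory below is a toy illustration, not the paper's Frenkel-Kontorova data:

```python
def transition_matrix(states, n_states, lag=1):
    # count transitions at the chosen lag time, then row-normalize
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[lag:]):
        counts[a][b] += 1.0
    T = []
    for row in counts:
        z = sum(row)
        T.append([c / z for c in row] if z else row)
    return T

# toy trajectory: long "stuck" dwells (state 0) with short "slip"
# excursions (state 1), repeated
traj = ([0] * 20 + [1] * 5) * 40
T = transition_matrix(traj, 2)
```

In an actual MSM analysis the eigenvalues of T at increasing lag times reveal the relaxation timescales, which is how rare events such as slips separate from fast fluctuations.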
Stochastic motif extraction using hidden Markov model
Fujiwara, Yukiko; Asogawa, Minoru; Konagaya, Akihiko
1994-12-31
In this paper, we study the application of an HMM (hidden Markov model) to the problem of representing protein sequences by a stochastic motif. A stochastic protein motif represents the small segments of protein sequences that have a certain function or structure. The stochastic motif, represented by an HMM, has conditional probabilities to deal with the stochastic nature of the motif. This HMM directly reflects the characteristics of the motif, such as a protein periodical structure or grouping. In order to obtain the optimal HMM, we developed the "iterative duplication method" for HMM topology learning. It starts from a small fully-connected network and iterates the network generation and parameter optimization until it achieves sufficient discrimination accuracy. Using this method, we obtained an HMM for a leucine zipper motif. Compared with a symbolic pattern representation, whose prediction accuracy was 14.8 percent, the HMM achieved 79.3 percent. Additionally, the method can obtain an HMM for various types of zinc finger motifs, and it might separate the mixed data. We demonstrated that this approach is applicable to the validation of protein databases; a constructed HMM has indicated that one protein sequence annotated as "leucine-zipper like sequence" in the database is quite different from other leucine-zipper sequences in terms of likelihood, and we found this discrimination is plausible.
Markov source model for printed music decoding
NASA Astrophysics Data System (ADS)
Kopec, Gary E.; Chou, Philip A.; Maltz, David A.
1995-03-01
This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.
Manpower planning using Markov Chain model
NASA Astrophysics Data System (ADS)
Saad, Syafawati Ab; Adnan, Farah Adibah; Ibrahim, Haslinda; Rahim, Rahela
2014-07-01
Manpower planning is a modeling approach for understanding the flow of manpower in response to policy changes. Numerous attempts have been made by researchers to develop models that track the movements of lecturers at various universities. Because a university employs a large number of lecturers, tracking their movement is difficult, and no quantitative method has previously been used for this purpose. This research aims to determine an appropriate manpower model for understanding the flow of lecturers at a university in Malaysia by determining the probability that lecturers remain in the same rank and their mean time in that rank. In addition, this research estimates the number of lecturers in each rank (lecturer, senior lecturer, and associate professor). Of the several methods applied to manpower planning in previous studies, the Markov chain model is the appropriate method for this research. The results indicate that the manpower planning model is validated by comparison with actual data: a smaller margin of error gives a better result, meaning that the projection is closer to the actual data. These results suggest how the university can plan the hiring of lecturers and its budget in the future.
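The Markov chain calculation such a study describes can be sketched with a yearly transition matrix over the three ranks. The probabilities and headcounts below are invented for illustration, not the university's data: the mean time spent in a rank follows from the geometric sojourn time, and a one-step projection gives next year's headcounts:

```python
RANKS = ["lecturer", "senior lecturer", "associate professor"]
# rows: from-rank, columns: to-rank (illustrative yearly probabilities)
P = [
    [0.80, 0.18, 0.02],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],   # top rank treated as absorbing in this toy model
]

def mean_years_in_rank(p_stay):
    # sojourn time in a Markov chain state is geometric: mean 1/(1-p)
    return float("inf") if p_stay == 1.0 else 1.0 / (1.0 - p_stay)

def project(headcount, P):
    # one-step projection of headcounts: next = headcount . P
    n = len(P)
    return [sum(headcount[i] * P[i][j] for i in range(n)) for j in range(n)]

staff = [100.0, 40.0, 10.0]
next_year = project(staff, P)
```

With these illustrative numbers a lecturer stays in the entry rank an average of 1/(1-0.80) = 5 years, and iterating `project` traces the rank distribution over a planning horizon.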
Hidden Markov models in automatic speech recognition
NASA Astrophysics Data System (ADS)
Wrzoskowicz, Adam
1993-11-01
This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMMs designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
Clustering metagenomic sequences with interpolated Markov models
2010-01-01
Background Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output from metagenomic sequencing is a large set of reads of unknown origin, clustering reads together that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of environments interesting to many metagenomics projects. Results We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and the supervised learning method Phymm, called PHYSCIMM, which performs better when evolutionarily close training genomes are available. Conclusions SCIMM and PHYSCIMM are highly accurate methods to cluster metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm. PMID:21044341
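The model-based scoring that underlies IMM clustering can be sketched with a plain fixed-order Markov model (without SCIMM's interpolation across orders): each cluster keeps a model of its reads, and a read is assigned to the model that gives it the highest log-likelihood. The sequences, model order, and pseudocount below are illustrative:

```python
import math
from collections import defaultdict

def train(seqs, order=1, alpha=1.0):
    # count next-nucleotide frequencies for each length-`order` context
    counts = defaultdict(lambda: defaultdict(float))
    for s in seqs:
        for i in range(order, len(s)):
            counts[s[i - order:i]][s[i]] += 1.0
    return counts, order, alpha

def loglik(model, seq):
    # log-likelihood of a read under the model, with pseudocounts over ACGT
    counts, order, alpha = model
    ll = 0.0
    for i in range(order, len(seq)):
        ctx = counts[seq[i - order:i]]
        z = sum(ctx.values()) + 4 * alpha
        ll += math.log((ctx[seq[i]] + alpha) / z)
    return ll

# two toy "clusters" with very different composition
at_rich = train(["ATATTAATATAATTAT" * 4])
gc_rich = train(["GCGGCCGCGGCGCCGG" * 4])
read = "ATATATTAATAT"
```

SCIMM's actual interpolated models blend several orders of context weighted by how well each is supported by the counts; the assignment rule (highest-scoring model wins, iterated with re-training) is the same.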
Relativized hierarchical decomposition of Markov decision processes.
Ravindran, B
2013-01-01
Reinforcement Learning (RL) is a popular paradigm for sequential decision making under uncertainty. A typical RL algorithm operates with only limited knowledge of the environment and with limited feedback on the quality of the decisions. To operate effectively in complex environments, learning agents require the ability to form useful abstractions, that is, the ability to selectively ignore irrelevant details. It is difficult to derive a single representation that is useful for a large problem setting. In this chapter, we describe a hierarchical RL framework that incorporates an algebraic framework for modeling task-specific abstraction. The basic notion that we will explore is that of a homomorphism of a Markov Decision Process (MDP). We mention various extensions of the basic MDP homomorphism framework in order to accommodate different commonly understood notions of abstraction, namely, aspects of selective attention. Parts of the work described in this chapter have been reported earlier in several papers (Narayanmurthy and Ravindran, 2007, 2008; Ravindran and Barto, 2002, 2003a,b; Ravindran et al., 2007).
Accelerating Information Retrieval from Profile Hidden Markov Model Databases
Ashhab, Yaqoub; Tamimi, Hashem
2016-01-01
Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases. PMID:27875548
Zhang, Qiang; Snow Jones, Alison; Rijmen, Frank; Ip, Edward H.
2016-01-01
Many studies in the social and behavioral sciences involve multivariate discrete measurements, which are often characterized by the presence of an underlying individual trait, the existence of clusters such as domains of measurements, and the availability of multiple waves of cohort data. Motivated by an application in child development, we propose a class of extended multivariate discrete hidden Markov models for analyzing domain-based measurements of cognition and behavior. A random effects model is used to capture the long-term trait. Additionally, we develop a model selection criterion based on the Bayes factor for the extended hidden Markov model. The National Longitudinal Survey of Youth (NLSY) is used to illustrate the methods. Supplementary technical details and computer codes are available online. PMID:28066134
NASA Astrophysics Data System (ADS)
Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.
2016-06-01
We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that the second method is more accurate in reproducing the observed behavior of the populations in a direct simulation of the Markov chain.
An adaptive random search for short term generation scheduling with network constraints
Marmolejo, J. A.; Velasco, Jonás; Selley, Héctor J.
2017-01-01
This paper presents an adaptive random search approach to a short term generation scheduling problem with network constraints, which determines the startup and shutdown schedules of thermal units over a given planning horizon. In this model, we consider the transmission network through capacity limits and line losses. The mathematical model is stated in the form of a Mixed Integer Nonlinear Problem with binary variables. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy. The random search is based on the Markov Chain Monte Carlo method. The key feature of the proposed method is that the noise level of the random search is adaptively controlled in order to balance exploration and exploitation of the search space. To improve the solutions, we couple a local search into the random search process. Several test systems are presented to evaluate the performance of the proposed heuristic. We use a commercial optimizer to compare the quality of the solutions provided by the proposed method. The proposed algorithm showed a significant reduction in computational effort with respect to the full-scale outer approximation commercial solver. Numerical results show the potential and robustness of our approach. PMID:28234954
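The adaptive noise-control idea can be sketched on a toy problem. This is not the paper's unit-commitment MINLP: a continuous Rastrigin objective stands in for the scheduling cost, and the acceptance rule, population size, and adaptation thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Stand-in cost surface (the paper's objective is a unit-commitment
    # MINLP with network constraints; this sketch uses a toy function).
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)   # Rastrigin

def adaptive_random_search(dim=4, pop=20, steps=300):
    X = rng.uniform(-5, 5, size=(pop, dim))
    f = np.array([objective(x) for x in X])
    sigma = 1.0                                   # noise level of the search
    for _ in range(steps):
        accepted = 0
        for i in range(pop):
            cand = X[i] + sigma * rng.normal(size=dim)   # MCMC-style move
            fc = objective(cand)
            if fc < f[i]:                                # greedy local accept
                X[i], f[i] = cand, fc
                accepted += 1
        # Adapt the noise: widen when accepting often (keep exploring),
        # shrink when rejecting often (exploit the current region).
        sigma *= 1.1 if accepted > 0.3 * pop else 0.9
        sigma = min(max(sigma, 1e-3), 5.0)
    return f.min()

best = adaptive_random_search()
print(best)
```

The shrinking `sigma` plays the role of the local search near convergence; a dedicated local-search pass, as the paper proposes, would refine the best members further.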
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generation is a necessary feature appear to have reduced their signal variability by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize irreversible transitions and incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
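The core claim is easy to check numerically: for an irreversible linear chain with equal expected dwell in each state, the completion time is a sum of n i.i.d. exponentials, so its coefficient of variation is 1/sqrt(n) and reliability grows with chain length. A small simulation (rates scaled so the mean time stays fixed):

```python
import numpy as np

rng = np.random.default_rng(2)

def cv_completion_time(n_states, rate, trials=20000):
    # Irreversible linear chain: the total signal time is a sum of
    # n i.i.d. exponential dwell times, so CV = std/mean = 1/sqrt(n).
    t = rng.exponential(1.0 / rate, size=(trials, n_states)).sum(axis=1)
    return t.std() / t.mean()

for n in (1, 4, 16):
    print(n, cv_completion_time(n, rate=n))   # rate scaled so mean stays 1
```

The printed CVs fall roughly as 1, 1/2, 1/4, illustrating why longer irreversible cascades are more reliable, at the energetic cost the abstract describes.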
Encoding dynamics for multiscale community detection: Markov time sweeping for the map equation
NASA Astrophysics Data System (ADS)
Schaub, Michael T.; Lambiotte, Renaud; Barahona, Mauricio
2012-08-01
The detection of community structure in networks is intimately related to finding a concise description of the network in terms of its modules. This notion has been recently exploited by the map equation formalism [Rosvall and Bergstrom, Proc. Natl. Acad. Sci. USA 105, 1118 (2008); doi:10.1073/pnas.0706851105] through an information-theoretic description of the process of coding inter- and intracommunity transitions of a random walker in the network at stationarity. However, a thorough study of the relationship between the full Markov dynamics and the coding mechanism is still lacking. We show here that the original map coding scheme, which is both block-averaged and one-step, neglects the internal structure of the communities and introduces an upper scale, the “field-of-view” limit, in the communities it can detect. As a consequence, map is well tuned to detect clique-like communities but can lead to undesirable overpartitioning when communities are far from clique-like. We show that a signature of this behavior is a large compression gap: The map description length is far from its ideal limit. To address this issue, we propose a simple dynamic approach that introduces time explicitly into the map coding through the analysis of the weighted adjacency matrix of the time-dependent multistep transition matrix of the Markov process. The resulting Markov time sweeping induces a dynamical zooming across scales that can reveal (potentially multiscale) community structure above the field-of-view limit, with the relevant partitions indicated by a small compression gap.
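The multistep ingredient is simply replacing the one-step transition matrix P by P^t and sweeping t. A minimal sketch on a toy graph (two small cliques joined by a bridge; an assumed example, not the paper's benchmarks) shows how the mass a node keeps inside its community changes with Markov time:

```python
import numpy as np

# Two 4-cliques joined by a single edge.  Sweeping the Markov time t,
# i.e. using the multistep matrix P^t, changes the scale the coding
# "sees": short times resolve local structure, long times approach
# the stationary distribution.
A = np.zeros((8, 8))
for group in (range(0, 4), range(4, 8)):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0                    # the bridge
P = A / A.sum(axis=1, keepdims=True)       # one-step random-walk matrix

for t in (1, 4, 64):
    Pt = np.linalg.matrix_power(P, t)      # multistep transition matrix
    within = Pt[3, :4].sum()               # mass node 3 keeps in its community
    print(t, round(within, 3))             # decays from 0.75 toward 0.5
```

Feeding P^t (rather than P) into the map coding is what induces the dynamical zooming across scales described above.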
Random Breakage of a Rod into Unit Lengths
ERIC Educational Resources Information Center
Gani, Joe; Swift, Randall
2011-01-01
In this article we consider the random breakage of a rod into "L" unit elements and present a Markov chain based method that tracks intermediate breakage configurations. The probability of the time to final breakage for L = 3, 4, 5 is obtained and the method is shown to extend, in principle, beyond L = 5.
Chao, Jianqian; Zong, Mengmeng; Xu, Hui; Yu, Qing; Jiang, Lili; Li, Yunyun; Song, Long; Liu, Pei
2014-01-01
The aim of this study was to assess the long-term effects of community-based health management on elderly diabetic patients using a Markov model. A Markov decision model was used to simulate the natural history of diabetes. Data were obtained from our randomized trials of elderly with type 2 diabetes and from the published literature. One hundred elderly patients with type 2 diabetes were randomly allocated to either the management or the control group in a one-to-one ratio. The management group participated in a health management program for 18 months in addition to receiving usual care. The control group only received usual care. Measurements were performed on both groups at baseline and after 18 months. The Markov model predicted that for every 1000 diabetic patients receiving health management, approximately 123 diabetic patients would avoid complications, and approximately 37 would avoid death over the next 13 years. The results suggest that the health management program had a positive long-term effect on the health of elderly diabetic patients. The Markov model appears to be useful in health care planning and decision-making aimed at reducing the financial and social burden of diabetes.
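A Markov cohort projection of this kind is a short calculation. The sketch below uses invented yearly transition probabilities (states: no complication, complication, dead), not the trial's fitted values, purely to show the mechanics of comparing managed versus usual care over a 13-year horizon:

```python
import numpy as np

# States: 0 = no complication, 1 = complication, 2 = dead.
# Yearly transition probabilities are illustrative assumptions, not the
# study's estimates; "managed" lowers complication and death risks.
P_usual = np.array([[0.88, 0.08, 0.04],
                    [0.00, 0.90, 0.10],
                    [0.00, 0.00, 1.00]])
P_managed = np.array([[0.91, 0.06, 0.03],
                      [0.00, 0.92, 0.08],
                      [0.00, 0.00, 1.00]])

def project(P, years=13, cohort=1000):
    # Propagate the cohort row vector through the yearly transition matrix.
    state = np.array([float(cohort), 0.0, 0.0])
    for _ in range(years):
        state = state @ P
    return state

usual, managed = project(P_usual), project(P_managed)
print("deaths avoided:", round(usual[2] - managed[2], 1))
```

Differences in the terminal state vector are exactly the "complications avoided" and "deaths avoided" quantities the model reports.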
NASA Astrophysics Data System (ADS)
Shivakiran Bhaktha, B. N.; Bachelard, Nicolas; Noblin, Xavier; Sebbah, Patrick
2012-10-01
Random lasing is reported in a dye-circulated structured polymeric microfluidic channel. The role of disorder, which results from limited accuracy of photolithographic process, is demonstrated by the variation of the emission spectrum with local-pump position and by the extreme sensitivity to a local perturbation of the structure. Thresholds comparable to those of conventional microfluidic lasers are achieved, without the hurdle of state-of-the-art cavity fabrication. Potential applications of optofluidic random lasers for on-chip sensors are discussed. Introduction of random lasers in the field of optofluidics is a promising alternative to on-chip laser integration with light and fluidic functionalities.
Reliability calculation using randomization for Markovian fault-tolerant computing systems
NASA Technical Reports Server (NTRS)
Miller, D. R.
1982-01-01
The randomization technique for computing transient probabilities of Markov processes is presented. The technique is applied to a Markov process model of a simplified fault tolerant computer system for illustrative purposes. It is applicable to much larger and more complex models. Transient state probabilities are computed, from which reliabilities are derived. An accelerated version of the randomization algorithm is developed which exploits "stiffness" of the models to gain increased efficiency. A great advantage of the randomization approach is that it easily allows probabilities and reliabilities to be computed to any predetermined accuracy.
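The randomization (uniformization) technique itself is compact: with a uniformization rate Λ at least as large as the fastest exit rate, the transient distribution is a Poisson-weighted sum over powers of a discrete-time chain. A sketch on an assumed two-state repairable component (not the paper's computer-system model):

```python
import numpy as np
from math import exp

def transient_probs(Q, p0, t, tol=1e-10):
    # Randomization/uniformization: pick Lambda >= max_i |Q_ii|; then
    # p(t) = sum_k Poisson(k; Lambda*t) * p0 @ P^k with P = I + Q/Lambda,
    # a proper stochastic matrix.  Truncating when the Poisson tail < tol
    # bounds the error, which is how "any predetermined accuracy" is met.
    Lam = max(-Q[i, i] for i in range(len(Q)))
    P = np.eye(len(Q)) + Q / Lam
    term = exp(-Lam * t)                  # Poisson weight for k = 0
    v = np.array(p0, dtype=float)
    acc = term * v
    k, weight_left = 0, 1.0 - term
    while weight_left > tol:
        k += 1
        v = v @ P                         # advance the embedded DTMC
        term *= Lam * t / k               # next Poisson weight
        acc += term * v
        weight_left -= term
    return acc

# Toy two-state repairable component: failure rate 0.1, repair rate 1.0
# (illustrative numbers).
Q = np.array([[-0.1, 0.1], [1.0, -1.0]])
p = transient_probs(Q, [1.0, 0.0], t=5.0)
print(p)                                  # p[0] is the "up" probability
```

Stiff models (widely separated rates) force Λt to be large and the Poisson sum long, which is what the accelerated variant in the paper targets.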
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
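The value-improvement idea for a discounted infinite-horizon problem fits in a few lines. The tiny MDP below only echoes the resource-management flavor of the example; its states, actions, and numbers are invented, not the mallard model:

```python
import numpy as np

# Tiny resource-management MDP: states = population levels, actions =
# harvest intensities (numbers are illustrative assumptions).
# P[a][s, s'] = transition probability, R[a][s] = expected reward.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),      # action 0: light harvest
     np.array([[0.6, 0.4], [0.5, 0.5]])]      # action 1: heavy harvest
R = [np.array([1.0, 2.0]), np.array([3.0, 1.5])]
gamma = 0.95                                  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    # Bellman backups until the value function stops changing.
    V = np.zeros(len(R[0]))
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol * (1 - gamma) / (2 * gamma):
            return V_new, Q.argmax(axis=0)    # values and optimal policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(policy, np.round(V, 2))
```

Policy improvement alternates a policy-evaluation solve with a greedy step; both converge to the same optimal stationary policy for discounted problems.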
Hidden Markov Models for Fault Detection in Dynamic Systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
Continuous monitoring of complex dynamic systems is an increasingly important issue in diverse areas such as nuclear plant safety, production line reliability, and medical health monitoring systems. Recent advances in both sensor technology and computational capabilities have made on-line permanent monitoring much more feasible than it was in the past. In this paper it is shown that a pattern recognition system combined with a finite-state hidden Markov model provides a particularly useful method for modelling temporal context in continuous monitoring. The parameters of the Markov model are derived from gross failure statistics such as the mean time between failures. The model is validated on a real-world fault diagnosis problem and it is shown that Markov modelling in this context offers significant practical benefits.
Bayesian restoration of ion channel records using hidden Markov models.
Rosales, R; Stark, J A; Fitzgerald, W J; Hladky, S B
2001-03-01
Hidden Markov models have been used to restore recorded signals of single ion channels buried in background noise. Parameter estimation and signal restoration are usually carried out through likelihood maximization by using variants of the Baum-Welch forward-backward procedures. This paper presents an alternative approach for dealing with this inferential task. The inferences are made by using a combination of the framework provided by Bayesian statistics and numerical methods based on Markov chain Monte Carlo stochastic simulation. The reliability of this approach is tested by using synthetic signals of known characteristics. The expectations of the model parameters estimated here are close to those calculated using the Baum-Welch algorithm, but the present methods also yield estimates of their errors. Comparisons of the results of the Bayesian Markov Chain Monte Carlo approach with those obtained by filtering and thresholding demonstrate clearly the superiority of the new methods.
A HIERARCHICAL HIDDEN MARKOV DETERIORATION MODEL FOR PAVEMENT STRUCTURE
NASA Astrophysics Data System (ADS)
Kobayashi, Kiyoshi; Kaito, Kiyoyuki; Eguchi, Toshiyuki; Ohi, Akira; Okizuka, Ryosuke
The deterioration process of pavement is a complex process including the deterioration of the road surface and the decrease in load bearing capacity of the entire pavement. The decrease in load bearing capacity influences the deterioration rate of the road surface. The soundness of the road surface can be observed by a road surface condition investigation. On the other hand, the decrease in load bearing capacity can be partially observed through FWD testing, etc. In this study, such a deterioration process of the road surface is described as a mixed Markov process that depends on the load bearing capacity of the pavement. Then, the complex deterioration process, which is composed of the deterioration of the road surface and the decrease in load bearing capacity of the pavement, is expressed as a hierarchical hidden Markov deterioration model. Through a case study of application to an expressway, a hierarchical hidden Markov deterioration model is estimated, and its applicability and effectiveness are empirically discussed.
Influence of credit scoring on the dynamics of Markov chain
NASA Astrophysics Data System (ADS)
Galina, Timofeeva
2015-11-01
Markov processes are widely used to model the dynamics of a credit portfolio and to forecast the portfolio risk and profitability. In the Markov chain model the loan portfolio is divided into several groups of different quality, determined by the presence of indebtedness and its terms. The dynamics of the portfolio shares are described by a multistage controlled system. The article outlines a mathematical formalization of controls which reflect the actions of the bank's management in order to improve the loan portfolio quality. The most important control is the organization of the approval procedure for loan applications. Credit scoring is studied as a control acting on the dynamic system. Different formalizations of "good" and "bad" consumers are proposed in connection with the Markov chain model.
Dynamical Systems Based Non Equilibrium Statistical Mechanics for Markov Chains
NASA Astrophysics Data System (ADS)
Prevost, Mireille
We introduce an abstract framework concerning non-equilibrium statistical mechanics in the specific context of Markov chains. This framework encompasses both the Evans-Searles and the Gallavotti-Cohen fluctuation theorems. To support and expand on these concepts, several results are proven, among which a central limit theorem and a large deviation principle. The interest for Markov chains is twofold. First, they model a great variety of physical systems. Secondly, their simplicity allows for an easy introduction to an otherwise complicated field encompassing the statistical mechanics of Anosov and Axiom A diffeomorphisms. We give two examples relating the present framework to physical cases modelled by Markov chains. One of these concerns chemical reactions and links key concepts from the framework to their well known physical counterpart.
Markov chain solution of photon multiple scattering through turbid slabs.
Lin, Ying; Northrop, William F; Li, Xuesong
2016-11-14
This work introduces a Markov chain solution to model photon multiple scattering through turbid slabs via an anisotropic scattering process, i.e., Mie scattering. Results show that the proposed Markov chain model agrees with commonly used Monte Carlo simulations for various media, such as media with non-uniform phase functions and absorbing media. The proposed Markov chain solution method successfully converts the complex multiple scattering problem with practical phase functions into a matrix form and solves transmitted/reflected photon angular distributions by matrix multiplications. Such characteristics would potentially allow practical inversions by matrix manipulation or stochastic algorithms where widely applied stochastic methods such as Monte Carlo simulations usually fail, and thus enable practical diagnostic reconstructions in areas such as medical diagnosis, spray analysis, and atmospheric science.
Markov sequential pattern recognition : dependency and the unknown class.
Malone, Kevin Thomas; Haschke, Greg Benjamin; Koch, Mark William
2004-10-01
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
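For reference, the classical Wald SPRT that the Markov variant modifies looks like this. The sketch assumes i.i.d. Bernoulli observations and invented hypothesis parameters; the paper's contribution (discounting the log-likelihood by the effective amount of independent information, and the GOF handling of the unknown class) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def sprt(samples, p0=0.3, p1=0.7, alpha=0.01, beta=0.01):
    # Wald's SPRT for Bernoulli observations: accumulate the
    # log-likelihood ratio until it crosses a decision threshold.
    # (With dependent observations, the Markov SPRT shrinks each
    # increment by the effective independent information instead.)
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", len(samples)

data = rng.random(200) < 0.7              # true parameter is p1
decision, n_used = sprt(data)
print(decision, n_used)
```

The appeal of the sequential form is visible in `n_used`: a decision typically arrives after far fewer than 200 observations.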
NASA Astrophysics Data System (ADS)
Faggionato, Alessandra; di Pietro, Daniele
2011-04-01
We slightly extend the fluctuation theorem obtained in (Lebowitz and Spohn in J. Stat. Phys. 95:333-365, 1999) for sums of generators, considering continuous-time Markov chains on a finite state space whose underlying graph has multiple edges and no loop. This extended frame is suited when analyzing chemical systems. As simple corollary we derive by a different method the fluctuation theorem of D. Andrieux and P. Gaspard for the fluxes along the chords associated to a fundamental set of oriented cycles (Andrieux and Gaspard in J. Stat. Phys. 127:107-131, 2007). We associate to each random trajectory an oriented cycle on the graph and we decompose it in terms of a basis of oriented cycles. We prove a fluctuation theorem for the coefficients in this decomposition. The resulting fluctuation theorem involves the cycle affinities, which in many real systems correspond to the macroscopic forces. In addition, the above decomposition is useful when analyzing the large deviations of additive functionals of the Markov chain. As example of application, in a very general context we derive a fluctuation relation for the mechanical and chemical currents of a molecular motor moving along a periodic filament.
Sampling graphs with a prescribed joint degree distribution using Markov Chains.
Pinar, Ali; Stanton, Isabelle
2010-10-01
One of the most influential results in network analysis is that many natural networks exhibit a power-law or log-normal degree distribution. This has inspired numerous generative models that match this property. However, more recent work has shown that while these generative models do have the right degree distribution, they are not good models for real life networks due to their differences on other important metrics like conductance. We believe this is, in part, because many of these real-world networks have very different joint degree distributions, i.e. the probability that a randomly selected edge will be between nodes of degree k and l. Assortativity is a sufficient statistic of the joint degree distribution, and it has been previously noted that social networks tend to be assortative, while biological and technological networks tend to be disassortative. We suggest that the joint degree distribution of graphs is an interesting avenue of study for further research into network structure. We provide a simple greedy algorithm for constructing simple graphs from a given joint degree distribution, and a Monte Carlo Markov Chain method for sampling them. We also show that the state space of simple graphs with a fixed degree distribution is connected via endpoint switches. We empirically evaluate the mixing time of this Markov Chain by using experiments based on the autocorrelation of each edge.
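An endpoint switch that preserves the joint degree distribution can be sketched directly. This is a hedged illustration of the move, not the authors' implementation: swapping the endpoints of two edges whose swapped endpoints have equal degree leaves every degree-degree edge pair unchanged; details such as proposal symmetry and connectivity handling are omitted.

```python
import random

random.seed(4)

def joint_degree_mcmc_step(edges, adj, deg):
    # Endpoint switch: for edges (u,v) and (x,y) with deg(v) == deg(y),
    # rewire to (u,y) and (x,v).  Because v and y have equal degree, the
    # multiset of (degree, degree) edge pairs is preserved exactly.
    i, j = random.randrange(len(edges)), random.randrange(len(edges))
    (u, v), (x, y) = edges[i], edges[j]
    if i == j or deg[v] != deg[y]:
        return False
    if u in (x, y) or v in (x, y):        # avoid self-loops / no-ops
        return False
    if y in adj[u] or v in adj[x]:        # avoid multi-edges
        return False
    adj[u].remove(v); adj[v].remove(u)
    adj[x].remove(y); adj[y].remove(x)
    adj[u].add(y); adj[y].add(u)
    adj[x].add(v); adj[v].add(x)
    edges[i], edges[j] = (u, y), (x, v)
    return True

# A small graph: a 6-cycle plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
adj = {k: set() for k in range(6)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
deg = {k: len(adj[k]) for k in adj}

moves = sum(joint_degree_mcmc_step(edges, adj, deg) for _ in range(500))
print("accepted moves:", moves)
```

Running many such steps performs the Markov chain sampling over simple graphs with the given joint degree distribution; the paper's connectivity result says this state space is connected under these switches.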
Syed, Sheyum; Müllner, Fiona E; Selvin, Paul R; Sigworth, Fred J
2010-12-01
Unbiased interpretation of noisy single molecular motor recordings remains a challenging task. To address this issue, we have developed robust algorithms based on hidden Markov models (HMMs) of motor proteins. The basic algorithm, called variable-stepsize HMM (VS-HMM), was introduced in the previous article. It improves on currently available Markov-model based techniques by allowing for arbitrary distributions of step sizes, and shows excellent convergence properties for the characterization of staircase motor timecourses in the presence of large measurement noise. In this article, we extend the VS-HMM framework for better performance with experimental data. The extended algorithm, variable-stepsize integrating-detector HMM (VSI-HMM) better models the data-acquisition process, and accounts for random baseline drifts. Further, as an extension, maximum a posteriori estimation is provided. When used as a blind step detector, the VSI-HMM outperforms conventional step detectors. The fidelity of the VSI-HMM is tested with simulations and is applied to in vitro myosin V data where a small 10 nm population of steps is identified. It is also applied to an in vivo recording of melanosome motion, where strong evidence is found for repeated, bidirectional steps smaller than 8 nm in size, implying that multiple motors simultaneously carry the cargo.
Monaco, James P.; Tomaszewski, John E.; Feldman, Michael D.; Hagemann, Ian; Moradi, Mehdi; Mousavi, Parvin; Boag, Alexander; Davidson, Chris; Abolmaesumi, Purang; Madabhushi, Anant
2010-01-01
In this paper we present a high-throughput system for detecting regions of carcinoma of the prostate (CaP) in HSs from radical prostatectomies (RPs) using probabilistic pairwise Markov models (PPMMs), a novel type of Markov random field (MRF). At diagnostic resolution a digitized HS can contain 80K×70K pixels — far too many for current automated Gleason grading algorithms to process. However, grading can be separated into two distinct steps: 1) detecting cancerous regions and 2) then grading these regions. The detection step does not require diagnostic resolution and can be performed much more quickly. Thus, we introduce a CaP detection system capable of analyzing an entire digitized whole-mount HS (2×1.75 cm2) in under three minutes (on a desktop computer) while achieving a CaP detection sensitivity and specificity of 0.87 and 0.90, respectively. We obtain this high-throughput by tailoring the system to analyze the HSs at low resolution (8 µm per pixel). This motivates the following algorithm: Step 1) glands are segmented, Step 2) the segmented glands are classified as malignant or benign, and Step 3) the malignant glands are consolidated into continuous regions. The classification of individual glands leverages two features: gland size and the tendency for proximate glands to share the same class. The latter feature describes a spatial dependency which we model using a Markov prior. Typically, Markov priors are expressed as the product of potential functions. Unfortunately, potential functions are mathematical abstractions, and constructing priors through their selection becomes an ad hoc procedure, resulting in simplistic models such as the Potts. Addressing this problem, we introduce PPMMs which formulate priors in terms of probability density functions, allowing the creation of more sophisticated models. To demonstrate the efficacy of our CaP detection system and assess the advantages of using a PPMM prior instead of the Potts, we alternately incorporate
Control with a random access protocol and packet dropouts
NASA Astrophysics Data System (ADS)
Wang, Liyuan; Guo, Ge
2016-08-01
This paper investigates networked control systems whose actuators communicate with the controller via a limited number of unreliable channels. The access to the channels is decided by a so-called group random access protocol, which is modelled as a binary Markov sequence. Data packet dropouts in the channels are modelled as independent Bernoulli processes. For such systems, a systematic characterisation for controller synthesis is established and stated in terms of the transition probabilities of the Markov protocol and the packet dropout probabilities. The results are illustrated via a numerical example.
Time operator of Markov chains and mixing times. Applications to financial data
NASA Astrophysics Data System (ADS)
Gialampoukidis, I.; Gustafson, K.; Antoniou, I.
2014-12-01
We extend the notion of Time Operator from Kolmogorov Dynamical Systems and Bernoulli processes to Markov processes. The general methodology is presented and illustrated in the simple case of binary processes. We present a method to compute the eigenfunctions of the Time Operator. Internal Ages are related to other characteristic times of Markov chains, namely the Kemeny time, the convergence rate and Goodman's intrinsic time. We clarify the concept of mixing time by providing analytic formulas for two-state Markov chains. Explicit formulas for mixing times are presented for any two-state regular Markov chain. The mixing time of a Markov chain is also determined by the Time Operator of the Markov chain, within its Age computation. We illustrate these results in terms of two realistic examples: a Markov chain from US GNP data and a Markov chain from Dow Jones closing prices. We propose moreover a representation for the Kemeny constant, in terms of internal Ages.
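For a two-state regular chain the mixing-time formula is explicit: with switching probabilities a and b, the second eigenvalue is 1 - a - b, so the distance to stationarity decays like |1 - a - b|^t. A quick numerical check (a and b are arbitrary illustrative values):

```python
import numpy as np

# Two-state regular Markov chain with switching probabilities a and b.
a, b = 0.2, 0.5
P = np.array([[1 - a, a], [b, 1 - b]])

pi = np.array([b, a]) / (a + b)           # stationary distribution
lam2 = 1 - a - b                          # second eigenvalue: sets the
                                          # convergence (mixing) rate
eps = 1e-3
t_mix = int(np.ceil(np.log(1 / eps) / -np.log(abs(lam2))))
dist = np.linalg.matrix_power(P, t_mix)[0] - pi
print(t_mix, np.abs(dist).max())          # deviation is below eps
```

The same convergence rate is what the Time Operator's Age computation recovers for these chains.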
Markov bases and toric ideals for some contingency tables
NASA Astrophysics Data System (ADS)
Mohammed, N. F.; Rakhimov, I. S.; Shitan, M.
2016-06-01
The main objective of this work is to study Markov bases and toric ideals for v × v × p/v contingency tables with fixed two-dimensional marginals, when p is a multiple of v and greater than or equal to 2v. Moreover, the connected bipartite graph is also constructed by using elements of the Markov basis. This work extends results obtained by Hadi and Salman in 2014.
Markov chain Monte Carlo linkage analysis of complex quantitative phenotypes.
Hinrichs, A; Reich, T
2001-01-01
We report a Markov chain Monte Carlo analysis of the five simulated quantitative traits in Genetic Analysis Workshop 12 using the Loki software. Our objectives were to determine the efficacy of the Markov chain Monte Carlo method and to test a new scoring technique. Our initial blind analysis, on replicate 42 (the "best replicate"), successfully detected four out of the five disease loci and found no false positives. A power analysis shows that the software could usually detect 4 of the 10 trait/gene combinations at an empirical point-wise p-value of 1.5 × 10^(-4).
a Markov-Process Inspired CA Model of Highway Traffic
NASA Astrophysics Data System (ADS)
Wang, Fa; Li, Li; Hu, Jian-Ming; Ji, Yan; Ma, Rui; Jiang, Rui
To provide a more accurate description of driving behaviors, especially car-following, a Markov-Gap cellular automaton model is proposed in this paper. It views the variation of the gap between two consecutive vehicles as a Markov process whose stationary distribution corresponds to the observed gap distribution. This new model provides a microscopic simulation explanation for the governing interaction forces (potentials) between the queuing vehicles, which are not directly measurable in traffic flow applications. The agreement between empirical observations and simulation results suggests the soundness of this new approach.
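One generic way to realize "a Markov process whose stationary distribution corresponds to the observed gap distribution" is a Metropolis construction over gap states. This is our illustration, not the paper's calibration: the target distribution and the ±1-cell proposal are invented for the sketch.

```python
import numpy as np

# Observed gap distribution over gaps of 0..9 cells (illustrative numbers,
# not calibrated traffic data).
target = np.array([0.02, 0.08, 0.15, 0.20, 0.20, 0.15, 0.10, 0.06, 0.03, 0.01])

# Metropolis birth-death chain on the gap: propose gap +/- 1 and accept
# with min(1, target_new / target_old); detailed balance then makes
# `target` the stationary gap distribution.
n = len(target)
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            P[i, j] = 0.5 * min(1.0, target[j] / target[i])
    P[i, i] = 1.0 - P[i].sum()

pi = np.linalg.matrix_power(P, 1000)[0]   # long-run gap distribution
print(np.round(pi, 3))
```

The transition probabilities of such a chain are the quantity the model interprets as (unobservable) interaction potentials between queuing vehicles.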
A semi-Markov model with memory for price changes
NASA Astrophysics Data System (ADS)
D'Amico, Guglielmo; Petroni, Filippo
2011-12-01
We study the high-frequency price dynamics of traded stocks by means of a model of returns using a semi-Markov approach. More precisely we assume that the intraday returns are described by a discrete time homogeneous semi-Markov model which depends also on a memory index. The index is introduced to take into account periods of high and low volatility in the market. First of all we derive the equations governing the process and then theoretical results are compared with empirical findings from real data. In particular we analyzed high-frequency data from the Italian stock market from 1 January 2007 until the end of December 2010.
Application of Hidden Markov Models in Biomolecular Simulations.
Shukla, Saurabh; Shamsi, Zahra; Moffett, Alexander S; Selvam, Balaji; Shukla, Diwakar
2017-01-01
Hidden Markov models (HMMs) provide a framework to analyze large trajectories of biomolecular simulation datasets. HMMs decompose the conformational space of a biological molecule into a finite number of states that interconvert among each other with certain rates. HMMs simplify long-timescale trajectories for human comprehension, and allow comparison of simulations with experimental data. In this chapter, we provide an overview of building HMMs for analyzing biomolecular simulation datasets. We demonstrate the procedure for building a hidden Markov model for a Met-enkephalin peptide simulation dataset and compare the timescales of the process.
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T.; Pande, Vijay S.
2015-01-01
Continuous-time Markov processes over finite state-spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is dramatically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations. PMID:26203016
Quantum hidden Markov models based on transition operation matrices
NASA Astrophysics Data System (ADS)
Cholewa, Michał; Gawron, Piotr; Głomb, Przemysław; Kurzyk, Dariusz
2017-04-01
In this work, we extend the idea of quantum Markov chains (Gudder in J Math Phys 49(7):072105 [3]) in order to propose quantum hidden Markov models (QHMMs). For that, we use the notions of transition operation matrices and vector states, which are an extension of classical stochastic matrices and probability distributions. Our main result is the Mealy QHMM formulation and proofs of the algorithms needed for application of this model: the Forward algorithm for the general case and the Viterbi algorithm for a restricted class of QHMMs. We show the relations of the proposed model to other quantum HMM propositions and present an example of application.
A new derivation of the randomness parameter
NASA Astrophysics Data System (ADS)
Wang, Hongyun
2007-10-01
For a stochastic stepper that can only step forward, there are two randomnesses: (1) the randomness in the cycle time and (2) the randomness in the number of steps (cycles) over long time. The equivalence between these two randomnesses was previously established using the approach of Laplace transform [M. J. Schnitzer and S. M. Block, "Statistical kinetics of processive enzymes," Cold Spring Harbor Symp. Quant. Biol. 60, 793 (1995)]. In this study, we first discuss the problems of this approach when the cycle time distribution has a discrete component, and then present a new derivation based on the framework of semi-Markov processes with age structure. We also show that the equivalence between the two randomnesses depends on the existence of the first moment of the waiting time for completing the first cycle, which is strongly affected by the initial age distribution. Therefore, any derivation that concludes the equivalence categorically regardless of the initial age distribution is mathematically questionable.
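For a well-behaved (continuous) cycle-time distribution, the equivalence between the two randomnesses can be checked numerically: both the cycle-time estimate Var(τ)/E[τ]² and the long-time count estimate Var(N_T)/E[N_T] should approach 1/shape for gamma-distributed cycle times. A small simulation sketch (all parameters arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
shape, scale = 4.0, 0.25                  # gamma cycle times, mean 1
tau = rng.gamma(shape, scale, size=(2000, 300))

# Randomness from the cycle-time distribution: Var(tau) / E[tau]^2
r_cycle = tau.var() / tau.mean() ** 2

# Randomness from step counts over a long time window T
T = 200.0
N = (np.cumsum(tau, axis=1) < T).sum(axis=1)
r_count = N.var() / N.mean()

print(r_cycle, r_count)                   # both ~ 1/shape = 0.25
```

A distribution with a discrete component, or a pathological initial age distribution, is exactly where this agreement can break down, which is the paper's point.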
Hatton, Matthew; Nankivell, Matthew; Lyn, Ethan; Falk, Stephen; Pugh, Cheryl; Navani, Neal; Stephens, Richard; Parmar, Mahesh
2011-11-01
Purpose: Recent clinical trials and meta-analyses have shown that both CHART (continuous hyperfractionated accelerated radiation therapy) and induction chemotherapy offer a survival advantage over conventional radical radiotherapy for patients with inoperable non-small-cell lung cancer (NSCLC). This multicenter randomized controlled trial (INCH) was set up to assess the value of giving induction chemotherapy before CHART. Methods and Materials: Patients with histologically confirmed, inoperable, Stage I-III NSCLC were randomized to induction chemotherapy (ICT) (three cycles of cisplatin-based chemotherapy followed by CHART) or CHART alone. Results: Forty-six patients were randomized (23 in each treatment arm) from 9 UK centers. As a result of poor accrual, the trial was closed in December 2007. Twenty-eight patients were male, 28 had squamous cell histology, 34 were Stage IIIA or IIIB, and all baseline characteristics were well balanced between the two treatment arms. Seventeen (74%) of the 23 ICT patients completed the three cycles of chemotherapy. All 42 (22 CHART + 20 ICT) patients who received CHART completed the prescribed treatment. Median survival was 17 months in the CHART arm and 25 months in the ICT arm (hazard ratio of 0.60 [95% CI 0.31-1.16], p = 0.127). Grade 3 or 4 adverse events (mainly fatigue, dysphagia, breathlessness, and anorexia) were reported for 13 (57%) CHART and 13 (65%) ICT patients. Conclusions: This small randomized trial indicates that ICT followed by CHART is feasible and well tolerated. Despite closing early because of poor accrual, and so failing to show clear evidence of a survival benefit for the additional chemotherapy, the results suggest that CHART, and ICT before CHART, remain important options for the treatment of inoperable NSCLC and deserve further study.
NASA Astrophysics Data System (ADS)
Zhu, Yanzheng; Zhang, Lixian; Sreeram, Victor; Shammakh, Wafa; Ahmad, Bashir
2016-10-01
In this paper, the resilient model approximation problem for a class of discrete-time Markov jump time-delay systems with input sector-bounded nonlinearities is investigated. A linearised reduced-order model is determined, with mode changes subject to domination by a hierarchical Markov chain containing two different nonhomogeneous Markov chains. Hence, the reduced-order model obtained not only reflects the dependence of the original systems but also models the external influence related to the mode changes of the original system. Sufficient conditions formulated in terms of bilinear matrix inequalities for the existence of such models are established, such that the resulting error system is stochastically stable and has a guaranteed l2-l∞ error performance. A linear matrix inequality optimisation coupled with a line search is exploited to solve for the corresponding reduced-order systems. The potential and effectiveness of the developed theoretical results are demonstrated via a numerical example.
Wilt, Timothy J
2012-12-01
Prostate cancer is the most common noncutaneous malignancy and the second leading cause of cancer death in men. In the United States, 90% of men with prostate cancer are older than 60 years, diagnosed by early detection with the prostate-specific antigen (PSA) blood test, and have disease believed confined to the prostate gland (clinically localized). Common treatments for clinically localized prostate cancer include watchful waiting (WW), surgery to remove the prostate gland (radical prostatectomy), external-beam radiation therapy and interstitial radiation therapy (brachytherapy), and androgen deprivation. Little is known about the relative effectiveness and harms of treatments because of the paucity of randomized controlled trials. The Department of Veterans Affairs/National Cancer Institute/Agency for Healthcare Research and Quality Cooperative Studies Program Study #407: Prostate Cancer Intervention Versus Observation Trial (PIVOT), initiated in 1994, is a multicenter randomized controlled trial comparing radical prostatectomy with WW in men with clinically localized prostate cancer. We describe the study rationale, design, recruitment methods, and baseline characteristics of PIVOT enrollees. We provide comparisons with eligible men declining enrollment and men participating in another recently reported randomized trial of radical prostatectomy vs WW conducted in Scandinavia. We screened 13 022 men with prostate cancer at 52 US medical centers for potential enrollment. From these, 5023 met initial age, comorbidity, and disease eligibility criteria, and a total of 731 men agreed to participate and were randomized. The mean age of enrollees was 67 years. Nearly one-third were African American. Approximately 85% reported that they were fully active. The median PSA was 7.8 ng/mL (mean 10.2 ng/mL). In three-fourths of men, the primary reason for biopsy leading to a diagnosis of prostate cancer was a PSA elevation or rise. Using previously developed tumor risk
Cohen, Steven J. E-mail: S_Cohen@fccc.edu; Dobelbower, Ralph; Lipsitz, Stuart; Catalano, Paul J.; Sischy, Benjamin; Smith, Thomas J.; Haller, Daniel G.
2005-08-01
Purpose: The median survival time of patients with locally advanced adenocarcinoma of the pancreas is 8-10 months. Radiation therapy has been used to improve local control and palliate symptoms. This randomized study was undertaken to determine whether the addition of 5-fluorouracil (5-FU) and mitomycin-C (MMC) to radiation therapy improves outcome in this patient population. Patients and Methods: One hundred fourteen patients were randomized to receive 59.4 Gy external beam radiotherapy in 1.8 Gy fractions alone or in combination with 5-FU (1,000 mg/m²/day for 4 days by continuous infusion on Days 2-5 and 28-31) and MMC (10 mg/m² on Day 2). Results: One hundred four patients were evaluable for efficacy. Hematologic and nonhematologic toxicities were more common in the combination arm. The response rates were 6% in the radiation therapy arm and 9% in the combination arm. There were no differences in median disease-free survival time (DFS) or overall survival time (OS) between the combination and radiation therapy alone arms: 5.1 vs. 5.0 months, respectively, for DFS (p = 0.19) and 8.4 vs. 7.1 months, respectively, for OS (p = 0.16). Conclusion: The addition of 5-FU and MMC to radiotherapy increased toxicity without improving DFS or OS in patients with locally advanced pancreatic cancer. Alternative drugs for radiosensitization may improve outcome.
Ideal-observer computation in medical imaging with use of Markov-chain Monte Carlo techniques.
Kupinski, Matthew A; Hoppin, John W; Clarkson, Eric; Barrett, Harrison H
2003-03-01
The ideal observer sets an upper limit on the performance of an observer on a detection or classification task. The performance of the ideal observer can be used to optimize hardware components of imaging systems and also to determine another observer's relative performance in comparison with the best possible observer. The ideal observer employs complete knowledge of the statistics of the imaging system, including the noise and object variability. Thus computing the ideal observer for images (large-dimensional vectors) is burdensome without severely restricting the randomness in the imaging system, e.g., assuming a flat object. We present a method for computing the ideal-observer test statistic and performance by using Markov-chain Monte Carlo techniques when we have a well-characterized imaging system, knowledge of the noise statistics, and a stochastic object model. We demonstrate the method by comparing three different parallel-hole collimator imaging systems in simulation.
Extended hidden Markov model for optimized segmentation of breast thermography images
NASA Astrophysics Data System (ADS)
Mahmoudzadeh, E.; Montazeri, M. A.; Zekri, M.; Sadri, S.
2015-09-01
Breast cancer is the most commonly diagnosed form of cancer in women. Thermography has been shown to provide an efficient screening modality for detecting breast cancer, as it is able to detect small tumors and hence can lead to earlier diagnosis. This paper presents a novel extended hidden Markov model (EHMM) for optimized segmentation of breast thermograms, enabling more effective image interpretation and easier analysis of infrared (IR) thermal patterns. A competitive advantage of the EHMM method is its handling of random sampling of the breast IR images with re-estimation of the model parameters. The performance of the algorithm is illustrated by applying the EHMM segmentation method to the images of the IUT_OPTIC database and comparing it with previously reported methods. Simulation results indicate the remarkable capabilities of the proposed approach. It is worth noting that the presented algorithm is able to map semi-hot regions into distinct areas and extract the relevant regions of breast thermal images, while the execution time is reduced.
Conditioned Limit Theorems for Some Null Recurrent Markov Processes
1976-08-01
[Abstract garbled in the source record; only reference fragments survive, including: Diffusion Processes and Their Sample Paths, Springer-Verlag, second printing (1973); and Jacobsen, M., Splitting times for Markov processes.]
Robot reliability using fuzzy fault trees and Markov models
NASA Astrophysics Data System (ADS)
Leuschen, Martin; Walker, Ian D.; Cavallaro, Joseph R.
1996-10-01
Robot reliability has become an increasingly important issue in the last few years, in part due to the increased application of robots in hazardous and unstructured environments. However, much of this work leads to complex and nonintuitive analysis, which results in many techniques being impractical due to computational complexity or lack of appropriately complex models for the manipulator. In this paper, we consider the application of notions and techniques from fuzzy logic, fault trees, and Markov modeling to robot fault tolerance. Fuzzy logic lends itself to quantitative reliability calculations in robotics. The crisp failure rates which are usually used are not actually known, while fuzzy logic, due to its ability to work with the actual approximate (fuzzy) failure rates available during the design process, avoids making too many unwarranted assumptions. Fault trees are a standard reliability tool that can easily assimilate fuzzy logic. Markov modeling allows evaluation of multiple failure modes simultaneously, and is thus an appropriate method of modeling failures in redundant robotic systems. However, no method of applying fuzzy logic to Markov models was known to the authors. This opens up the possibility of new techniques for reliability using Markov modeling and fuzzy logic techniques, which are developed in this paper.
Modelling modal gating of ion channels with hierarchical Markov models
Fackrell, Mark; Crampin, Edmund J.; Taylor, Peter
2016-01-01
Many ion channels spontaneously switch between different levels of activity. Although this behaviour, known as modal gating, has been observed for a long time, it is currently not well understood. Despite the fact that appropriately representing activity changes is essential for accurately capturing time course data from ion channels, systematic approaches for modelling modal gating are currently not available. In this paper, we develop a modular approach for building such a model in an iterative process. First, stochastic switching between modes and stochastic opening and closing within modes are represented in separate aggregated Markov models. Second, the continuous-time hierarchical Markov model, a new modelling framework proposed here, then enables us to combine these components so that in the integrated model both mode switching and the kinetics within modes are appropriately represented. A mathematical analysis reveals that the behaviour of the hierarchical Markov model naturally depends on the properties of its components. We also demonstrate how a hierarchical Markov model can be parametrized using experimental data and show that it provides a better representation than a previous model of the same dataset. Because evidence is increasing that modal gating reflects underlying molecular properties of the channel protein, it is likely that biophysical processes are better captured by our new approach than in earlier models. PMID:27616917
Operations and support cost modeling using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit
1989-01-01
Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
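The OS-phase computation this suggests can be sketched with an absorbing Markov chain: the fundamental matrix gives expected visit counts to each transient state, and per-visit costs turn those into expected life cycle costs. The states, probabilities, and costs below are purely hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical OS cycle for a reusable vehicle: transient states
# 0 = turnaround, 1 = minor repair, 2 = major repair; the remaining
# probability in each row corresponds to absorption (retirement).
Q = np.array([[0.0, 0.6, 0.1],
              [0.7, 0.0, 0.1],
              [0.5, 0.2, 0.0]])
c = np.array([1.0, 4.0, 20.0])   # illustrative cost per visit to each state

# Fundamental matrix: expected number of visits to each transient state
N = np.linalg.inv(np.eye(3) - Q)
expected_cost = N @ c            # expected total OS cost by starting state
print(expected_cost)
```

Associating a cost with each visited state and letting the chain's probabilities weight them is exactly the "actual instead of ideal processes" idea in the abstract.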
Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.
Miller, G; Martz, H F; Little, T T; Guilmette, R
2002-01-01
A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time.
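A minimal sketch of the Metropolis step for a one-parameter version of this kind of inverse problem: a single intake amount scales an assumed retention function, and bioassay measurements carry lognormal error. All functions and numbers here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setting: intake amount x scales a known retention
# function r(t); bioassay measurements have lognormal noise.
t = np.array([1.0, 5.0, 10.0, 30.0])
r = np.exp(-0.1 * t)                       # assumed retention function
true_x = 3.0
data = true_x * r * rng.lognormal(0.0, 0.2, size=t.size)

def log_post(x):
    """Log posterior: flat prior on x > 0, lognormal measurement error."""
    if x <= 0:
        return -np.inf
    resid = np.log(data) - np.log(x * r)
    return -0.5 * np.sum(resid ** 2) / 0.2 ** 2

# Metropolis random walk over the intake amount
samples = []
x, lp = 1.0, log_post(1.0)
for _ in range(20000):
    prop = x + rng.normal(0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        x, lp = prop, lp_prop
    samples.append(x)
posterior = np.array(samples[5000:])           # discard burn-in
```

The paper's full problem integrates over multiple intakes, intake times, and biokinetic types, which is why it reports large computing times; this one-dimensional sketch only shows the mechanics of the sampler.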
Exact goodness-of-fit tests for Markov chains.
Besag, J; Mondal, D
2013-06-01
Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps.
Indexed semi-Markov process for wind speed modeling.
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
The increasing interest in renewable energy leads scientific research to find better ways to recover most of the available energy. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (the Betz law), attained at a specific pitch angle and when the ratio between the output and input wind speeds is equal to 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they operate at an efficiency lower than 59.3%. New-generation wind turbines, instead, have a system to vary the pitch angle by rotating the blades. This system enables the wind turbines to recover the maximum energy at different wind speeds, working at the Betz limit at different speed ratios. A powerful control system for the pitch angle allows the wind turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to help optimize turbine design and to assist the control system in predicting the wind speed so as to position the blades quickly and correctly. The possibility of generating synthetic wind speed data is a powerful instrument for verifying the structures of wind turbines or estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order Markov chain and a second-order Markov chain. A similar work, but only for the first-order Markov chain, is conducted in [2], presenting the transition probability matrix and comparing the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model the speed and direction of wind is presented in [3], by using two models, first
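The first-order Markov chain baseline referenced in [1,2] can be sketched in a few lines: bin the observed speeds into states, count transitions to estimate the matrix, then sample a synthetic sequence from it. The four-bin toy record below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_transition_matrix(states, n_bins):
    """Estimate a first-order Markov transition matrix from a
    sequence of binned wind-speed states."""
    P = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    # Unvisited rows fall back to a uniform distribution
    return np.divide(P, rows, out=np.full_like(P, 1.0 / n_bins), where=rows > 0)

def synthesize(P, start, length, rng):
    """Generate a synthetic state sequence from transition matrix P."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return np.array(out)

# Toy "observed" record: speeds binned into 4 classes
observed = rng.integers(0, 4, size=500)
P = fit_transition_matrix(observed, 4)
synthetic = synthesize(P, observed[0], 500, rng)
```

The indexed semi-Markov extension in this paper adds holding-time distributions and a memory index on top of this transition-matrix skeleton.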
Building Higher-Order Markov Chain Models with EXCEL
ERIC Educational Resources Information Center
Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.
2004-01-01
Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
Weighted Markov Chains and Graphic State Nodes for Information Retrieval.
ERIC Educational Resources Information Center
Benoit, G.
2002-01-01
Discusses users' search behavior and decision making in data mining and information retrieval. Describes iterative information seeking as a Markov process during which users advance through states of nodes; and explains how the information system records the decision as weights, allowing the incorporation of users' decisions into the Markov…
Exploring Mass Perception with Markov Chain Monte Carlo
ERIC Educational Resources Information Center
Cohen, Andrew L.; Ross, Michael G.
2009-01-01
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…
Using Markov Chain Analyses in Counselor Education Research
ERIC Educational Resources Information Center
Duys, David K.; Headrick, Todd C.
2004-01-01
This study examined the efficacy of an infrequently used statistical analysis in counselor education research. A Markov chain analysis was used to examine hypothesized differences between students' use of counseling skills in an introductory course. Thirty graduate students participated in the study. Independent raters identified the microskills…
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-07
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.
Nonlinear Markov Semigroups and Interacting Lévy Type Processes
NASA Astrophysics Data System (ADS)
Kolokoltsov, Vassili N.
2007-02-01
Semigroups of positivity preserving linear operators on measures of a measurable space X describe the evolutions of probability distributions of Markov processes on X. Their dual semigroups of positivity preserving linear operators on the space of measurable bounded functions B(X) on X describe the evolutions of averages over the trajectories of these Markov processes. In this paper we introduce and study a general class of semigroups of non-linear positivity preserving transformations on measures, that is, non-linear Markov or Feller semigroups. An explicit structure of the generators of such semigroups is given in the case when X is the Euclidean space R^d (or, more generally, a manifold), showing how these semigroups arise from the general kinetic equations of statistical mechanics and evolutionary biology that describe the dynamic law of large numbers for Markov models of interacting particles. Well-posedness results for these equations are given together with applications to interacting particles: the dynamic law of large numbers and the central limit theorem, the latter being new already for the standard coagulation-fragmentation models.
Students' Progress throughout Examination Process as a Markov Chain
ERIC Educational Resources Information Center
Hlavatý, Robert; Dömeová, Ludmila
2014-01-01
The paper is focused on students of Mathematical Methods in Economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…
Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng
2012-01-01
This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models could improve the angular rate accuracy and have similar performance in static conditions. In dynamic conditions, the test results showed that the first-order Markov process model could reduce dynamic errors 20% more than the random walk model.
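A hedged sketch of the idea (all parameters invented, not the paper's): the rate is simulated as a first-order Gauss-Markov process, a virtual six-gyro array observes it with independent noise, and a scalar Kalman filter runs on the averaged measurement, whose noise variance shrinks with the array size:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters: rate follows a first-order Gauss-Markov
# process; six gyros observe it with independent white noise.
dt, tau, q, r = 0.01, 1.0, 0.5, 0.1
phi = np.exp(-dt / tau)                   # state transition factor
n_gyro, n_steps = 6, 2000

# Simulate the true rate and the gyro array measurements
rate = np.zeros(n_steps)
for k in range(1, n_steps):
    rate[k] = phi * rate[k - 1] + rng.normal(0, np.sqrt(q * dt))
meas = rate[:, None] + rng.normal(0, np.sqrt(r), size=(n_steps, n_gyro))

# Scalar Kalman filter on the averaged measurement
x, P = 0.0, 1.0
R_avg = r / n_gyro                        # averaging reduces noise variance
est = np.empty(n_steps)
for k in range(n_steps):
    x, P = phi * x, phi ** 2 * P + q * dt     # predict
    K = P / (P + R_avg)                       # Kalman gain
    x = x + K * (meas[k].mean() - x)          # update with array average
    P = (1 - K) * P
    est[k] = x

err_raw = np.mean((meas[:, 0] - rate) ** 2)   # single-gyro error
err_kf = np.mean((est - rate) ** 2)           # filtered array error
```

The Gauss-Markov state model is what distinguishes this from a random-walk filter: the decay factor phi pulls the prediction toward zero at a rate set by the correlation time tau.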
Detecting structure of haplotypes and local ancestry
Technology Transfer Automated Retrieval System (TEKTRAN)
We present a two-layer hidden Markov model to detect the structure of haplotypes for unrelated individuals. This allows us to model two scales of linkage disequilibrium (one within a group of haplotypes and one between groups), thereby taking advantage of rich haplotype information to infer local an...
A Markov chain analysis of fish movements to determine entrainment zones
Johnson, Gary E.; Hedgepeth, J.; Skalski, John R.; Giorgi, Albert E.
2004-06-01
The extent of the biological zone of influence (BZI) of a water withdrawal port, such as a cooling water intake or a smolt bypass, directly reflects its local effect on fish. This study produced a new technique to determine the BZI, defined as the region immediately upstream of a portal where the probability of fish movement toward the portal is greater than 90%. We developed and applied the technique at The Dalles Dam on the Columbia River, where the ice/trash sluiceway functions as a surface flow smolt bypass. To map the BZI, we applied a Markov-Chain analysis to smolt movement data collected with an active fish tracking sonar system. Probabilities of fish movement from cell to cell in the sample volume, calculated from tracked fish data, formed a Markov transition matrix. Multiplying this matrix by itself many times with absorption at the boundaries produced estimates of probability of passage out each side of the sample volume from the cells within. The BZI of a sluiceway entrance at The Dalles Dam was approximately 5 m across and extended 6-8 m out from the face of the dam in the surface layer 2-3 m deep. BZI mapping is applicable to many bioengineering efforts to protect fish populations.
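The matrix-power computation described here can be illustrated with a one-dimensional toy version: interior cells with movement biased toward the portal, absorbing boundaries on both sides, and repeated self-multiplication of the transition matrix to read off absorption probabilities. All numbers are invented for illustration:

```python
import numpy as np

# Toy 1-D version of the BZI computation: cells 1..5 lie between two
# absorbing boundaries, 0 = portal side, 6 = far side. Movement biased
# toward the portal mimics fish drawn into the intake flow.
n = 7
P = np.zeros((n, n))
P[0, 0] = P[6, 6] = 1.0                 # absorbing boundaries
for i in range(1, 6):
    P[i, i - 1], P[i, i + 1] = 0.7, 0.3  # biased toward the portal

# Multiply the transition matrix by itself until absorption dominates
Pk = np.linalg.matrix_power(P, 200)
p_portal = Pk[1:6, 0]                   # absorption probability, portal side

# Cells where p_portal > 0.9 would fall inside the BZI by the
# definition used in the study
bzi = np.nonzero(p_portal > 0.9)[0] + 1
```

In the study the same idea runs on a three-dimensional sample volume with transition probabilities estimated from tracked-fish sonar data rather than set by hand.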
Xie, Jun; Kim, Nak-Kyeong
2005-09-01
Statistical methods have been developed for finding local patterns, also called motifs, in multiple protein sequences. The aligned segments may imply functional or structural core regions. However, the existing methods often have difficulties in aligning multiple proteins when sequence residue identities are low (e.g., less than 25%). In this article, we develop a Bayesian model and Markov chain Monte Carlo (MCMC) methods for identifying subtle motifs in protein sequences. Specifically, a motif is defined not only in terms of specific sites characterized by amino acid frequency vectors, but also as a combination of secondary characteristics such as hydrophobicity, polarity, etc. Markov chain Monte Carlo methods are proposed to search for a motif pattern with high posterior probability under the new model. A special MCMC algorithm is developed, involving transitions between state spaces of different dimensions. The proposed methods were supported by a simulated study. It was then tested by two real datasets, including a group of helix-turn-helix proteins, and one set from the CATH Protein Structure Classification Database. Statistical comparisons showed that the new approach worked better than a typical Gibbs sampling approach which is based only on an amino acid model.
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.
2010-07-01
Large trades in a financial market are usually split into smaller parts and traded incrementally over extended periods of time. We address these large trades as hidden orders. In order to identify and characterize hidden orders, we fit hidden Markov models to the time series of the sign of the tick-by-tick inventory variation of market members of the Spanish Stock Exchange. Our methodology probabilistically detects trading sequences, which are characterized by a significant majority of buy or sell transactions. We interpret these patches of sequential buying or selling transactions as proxies of the traded hidden orders. We find that the time, volume and number of transaction size distributions of these patches are fat tailed. Long patches are characterized by a large fraction of market orders and a low participation rate, while short patches have a large fraction of limit orders and a high participation rate. We observe the existence of a buy-sell asymmetry in the number, average length, average fraction of market orders and average participation rate of the detected patches. The detected asymmetry is clearly dependent on the local market trend. We also compare the hidden Markov model patches with those obtained with the segmentation method used in Vaglica et al (2008 Phys. Rev. E 77 036110), and we conclude that the former ones can be interpreted as a partition of the latter ones.
Stochastically gated local and occupation times of a Brownian particle
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-01-01
We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.
Knegtering, B; Brombacher, A C
2000-01-01
This paper presents a method that will drastically reduce the calculation effort required to obtain quantitative safety and reliability assessments to determine safety integrity levels for applications in the process industry. The method described combines all benefits of Markov modeling with the practical benefits of reliability block diagrams.
An overview of Markov chain methods for the study of stage-sequential developmental processes.
Kaplan, David
2008-03-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model. A special case of the mixture latent Markov model, the so-called mover-stayer model, is used in this study. Unconditional and conditional models are estimated for the manifest Markov model and the latent Markov model, where the conditional models include a measure of poverty status. Issues of model specification, estimation, and testing using the Mplus software environment are briefly discussed, and the Mplus input syntax is provided. The author applies these 4 methods to a single example of stage-sequential development in reading competency in the early school years, using data from the Early Childhood Longitudinal Study--Kindergarten Cohort.
Randomizing Genome-Scale Metabolic Networks
Samal, Areejit; Martin, Olivier C.
2011-01-01
Networks coming from protein-protein interactions, transcriptional regulation, signaling, or metabolism may appear to have “unusual” properties. To quantify this, it is appropriate to randomize the network and test the hypothesis that the network is not statistically different from expected in a motivated ensemble. However, when dealing with metabolic networks, the randomization of the network using edge exchange generates fictitious reactions that are biochemically meaningless. Here we provide several natural ensembles of randomized metabolic networks. A first constraint is to use valid biochemical reactions. Further constraints correspond to imposing appropriate functional constraints. We explain how to perform these randomizations with the help of Markov Chain Monte Carlo (MCMC) and show that they allow one to approach the properties of biological metabolic networks. The implication of the present work is that the observed global structural properties of real metabolic networks are likely to be the consequence of simple biochemical and functional constraints. PMID:21779409
Bayesian clinical trial design using Markov models with applications to autoimmune disease.
Eggleston, Barry S; Ibrahim, Joseph G; Catellier, Diane
2017-02-08
Immune Thrombocytopenia is an autoimmune disease associated with bleeding that is treated by increasing the platelet count to a level where the chance of uncontrollable bleeding is low. Failure occurs when platelet counts are not raised sufficiently (initial failure), or when high platelet counts are not maintained after initial success (relapse). In this paper, we propose a Bayesian clinical trial design that uses a Markov multistate model along with a power prior for the parameters, which incorporates historical control data to estimate transition rates between two randomized groups as defined by the model. A detailed simulation is carried out to examine the operating characteristics of a trial to test whether a new treatment reduces the relapse rate by 40% relative to standard care when data from 60 historical controls treated with standard care are available. We also use simulated data to demonstrate the effects of discordance between historical and randomized controls on the estimated hazard ratios. Finally, we use a simulated trial to demonstrate briefly what type of results the model can give and how those results can be used to address hypotheses regarding treatment effects. Using simulated data, we show that the model yields good operating characteristics when the historical and randomized controls are from the same population, and demonstrate how discordance between the control groups affects the operating characteristics.
Markov Chain Monte Carlo from Lagrangian Dynamics
Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark
2014-01-01
Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper. PMID:26240515
Wang, Minchuan; Bond, Nicholas J.; Letcher, Andrew J.; Richardson, Jonathan P.; Lilley, Kathryn S.; Irvine, Robin F.; Clarke, Jonathan H.
2010-01-01
PtdIns5P 4-kinases IIα and IIβ are cytosolic and nuclear respectively when transfected into cells, including DT40 cells [Richardson, Wang, Clarke, Patel and Irvine (2007) Cell. Signalling 19, 1309–1314]. In the present study we have genomically tagged both type II PtdIns5P 4-kinase isoforms in DT40 cells. Immunoprecipitation of either isoform from tagged cells, followed by MS, revealed that they are associated directly with each other, probably by heterodimerization. We quantified the cellular levels of the type II PtdIns5P 4-kinase mRNAs by real-time quantitative PCR and the absolute amount of each isoform in immunoprecipitates by MS using selective reaction monitoring with 14N,13C-labelled internal standard peptides. The results suggest that the dimerization is complete and random, governed solely by the relative concentrations of the two isoforms. Whereas PtdIns5P 4-kinase IIβ is >95% nuclear, as expected, the distribution of PtdIns5P 4-kinase IIα is 60% cytoplasmic (all bound to membranes) and 40% nuclear. In vitro, PtdIns5P 4-kinase IIα was 2000-fold more active as a PtdIns5P 4-kinase than the IIβ isoform. Overall the results suggest a function of PtdIns5P 4-kinase IIβ may be to target the more active IIα isoform into the nucleus. PMID:20569199
NASA Astrophysics Data System (ADS)
Huerta-Lopez, C. I.; Upegui Botero, F. M.; Pulliam, J.; Willemann, R. J.; Pasyanos, M.; Schmitz, M.; Rojas Mercedes, N.; Louie, J. N.; Moschetti, M. P.; Martinez-Cruzado, J. A.; Suárez, L.; Huerfano Moreno, V.; Polanco, E.
2013-12-01
Site characterization in civil engineering requires knowledge of at least two dynamic properties of soil systems: (i) the dominant vibration frequency, and (ii) damping. As part of an effort to develop understanding of the principles of earthquake hazard analysis, particularly site characterization techniques using non-invasive/non-destructive seismic methods, a workshop (Pan-American Advanced Studies Institute: New Frontiers in Geophysical Research: Bringing New Tools and Techniques to Bear on Earthquake Hazard Analysis and Mitigation) was conducted during July 15-25, 2013 in Santo Domingo, Dominican Republic by the alliance of the Pan-American Advanced Studies Institute (PASI) and the Incorporated Research Institutions for Seismology (IRIS), jointly supported by the Department of Energy (DOE) and the National Science Foundation (NSF). Preliminary results of the site characterization in terms of fundamental vibration frequency and damping are presented here from data collected during the workshop. Three different methods were used for these estimations and later compared in order to assess the stability of the estimates as well as the advantages and disadvantages of the methodologies. The methods used were: (i) the Random Decrement Method (RDM), to estimate the fundamental vibration frequency and damping simultaneously; (ii) Empirical Mode Decomposition (EMD), to estimate the vibration modes; and (iii) the Horizontal-to-Vertical Spectral Ratio (HVSR), to estimate the fundamental vibration frequency. In all cases, both ambient and induced vibrations were used.
2011-01-01
Background To evaluate whether weekly schedules of docetaxel-based chemotherapy were superior to 3-weekly ones in terms of quality of life in locally advanced or metastatic breast cancer. Methods Patients with locally advanced or metastatic breast cancer, aged ≤ 70 years, performance status 0-2, chemotherapy-naive for metastatic disease, were eligible. They were randomized to a weekly or 3-weekly combination of docetaxel and epirubicin, if they had not been treated with adjuvant anthracyclines, or docetaxel and capecitabine, if treated with adjuvant anthracyclines. The primary end-point was global quality of life change at 6 weeks, measured by the EORTC QLQ-C30. With two-sided alpha 0.05 and 80% power for a 35% effect size, 130 patients per arm were needed. Results From February 2004 to March 2008, 139 patients were randomized, 70 to the weekly and 69 to the 3-weekly arm; 129 and 89 patients completed baseline and 6-week questionnaires, respectively. Global quality of life was better in the 3-weekly arm (p = 0.03); patients treated with weekly schedules presented significant worsening in role functioning and financial scores (p = 0.02 and p < 0.001). Neutropenia and stomatitis were worse in the 3-weekly arm, where two toxic deaths were observed. Overall response rate was 39.1% and 33.3% in the 3-weekly and weekly arms; the hazard ratio of progression was 1.29 (95% CI: 0.84-1.97) and the hazard ratio of death was 1.38 (95% CI: 0.82-2.30) in the weekly arm. Conclusions In this trial, the weekly schedules of docetaxel-based chemotherapy appear to be inferior to the 3-weekly ones in terms of quality of life in patients with locally advanced or metastatic breast cancer. Trial registration ClinicalTrials.gov NCT00540800. PMID:21324184
Xu, Xinglu; Ye, Xin; Liu, Gang; Zhang, Tingping
2015-09-01
Concurrent chemoradiotherapy is the standard treatment for patients with locally advanced lung cancer. The most common dose-limiting adverse effect of thoracic radiotherapy (RT) is radiation pneumonitis (RP). A randomized comparison study was designed to investigate targeted percutaneous microwave ablation of the pulmonary lesion combined with mediastinal RT with or without chemotherapy (ablation group) in comparison with RT (target volume including pulmonary tumor and mediastinal nodes) with or without chemotherapy (RT group) for the treatment of locally advanced non-small cell lung cancers (NSCLCs). From 2009 to 2012, patients with stage IIIA or IIIB NSCLCs who refused to undergo surgery or were not suitable for surgery were enrolled. Patients were randomly assigned to the RT group (n = 47) or the ablation group (n = 51). Primary outcomes were the incidence of RP and curative effectiveness (complete response, partial response, and stable disease); the secondary outcome was the 2-year overall survival (OS). Fifteen patients (31.9%) in the RT group and two (3.9%) in the ablation group experienced RP (P < 0.001). The ratio of effective cases was 85.1 versus 80.4% for mediastinal lymph nodes (P = 0.843) and 83.0 versus 100% for pulmonary tumors (P = 0.503), respectively, for the RT and ablation groups. Kaplan-Meier analysis demonstrated that the 2-year OS rate of NSCLC patients in the ablation group was higher than that in the RT group, but the difference was not statistically significant (log-rank test, P = 0.297). Percutaneous microwave ablation followed by RT for inoperable stage III NSCLCs may result in a lower rate of RP and better local control than radical RT treatments.
Castro Dos Santos, Nídia Cristina; Andere, Naira Maria Rebelatto Bechara; Araujo, Cássia Fernandes; de Marco, Andrea Carvalho; Dos Santos, Lúcio Murilo; Jardini, Maria Aparecida Neves; Santamaria, Mauro Pedrine
2016-11-01
Diabetes has become a global epidemic. Its complications can have a significant impact on quality of life, longevity, and public health costs. The presence of diabetes might impair the prognosis of periodontal treatments due to its negative influence on wound healing. Antimicrobial photodynamic therapy (aPDT) is a local approach that can promote bacterial decontamination in periodontal pockets. The aim of this study was to investigate the local effect of adjunct aPDT to ultrasonic periodontal debridement (UPD) and compare it to UPD only for the treatment of chronic periodontitis in type 2 diabetic patients. Twenty type 2 diabetic patients with moderate to severe generalized chronic periodontitis were selected. Two periodontal pockets with probing depth (PD) and clinical attachment level (CAL) ≥5 mm received UPD only (UPD group) or UPD plus adjunct aPDT (UPD + aPDT group). Periodontal clinical measures were collected and compared at baseline and at 30, 90, and 180 days. After 180 days of follow-up, there were statistically significant reductions in PD from 5.75 ± 0.91 to 3.47 ± 0.97 mm in the UPD group and from 6.15 ± 1.27 to 3.71 ± 1.63 mm in the UPD + aPDT group. However, intergroup analysis did not reveal statistically significant differences in any of the evaluated clinical parameters (p > 0.05). The adjunct application of aPDT to UPD did not present additional benefits for the treatment of chronic periodontitis in type 2 diabetic patients. The ClinicalTrials.gov identifier of the present study is NCT02627534.
Hohenauer, Erich; Cescon, Corrado; Deliens, Tom; Clarys, Peter; Clijsen, Ron
2017-04-01
The central and peripheral mechanisms by which heat strain limits physical performance are not fully elucidated. Nevertheless, pre-cooling is often used in an attempt to improve subsequent performance. This study compared the effects of pre-cooling vs. a thermoneutral application on central and peripheral fatigue during 60% of isometric maximum voluntary contraction (MVC) of the right quadriceps femoris muscle. Furthermore, the effects of a pre-cooling vs. a thermoneutral application on isometric MVC of the right quadriceps femoris muscle and subjective ratings of perceived exertion (RPE) were investigated. In this randomized controlled trial, 18 healthy adults voluntarily participated. The participants received either a cold (experimental) application (+8°C) or a thermoneutral (control) application (+32°C) for 20 min on their right thigh (one cuff). After the application, central (fractal dimension, FD) and peripheral (muscle fiber conduction velocity, CV) fatigue was estimated using sEMG parameters during 60% of isometric MVC. Surface EMG signals were detected from the vastus medialis and lateralis using bidimensional arrays. Immediately after the submaximal contraction, isometric MVC and RPE were assessed. Participants receiving the cold application were able to maintain a 60% isometric MVC significantly longer than the thermoneutral group (mean time: 78 vs. 46 s; p=0.04). The thermoneutral application had no significant impact on central fatigue (p>0.05), whereas the cold application did (p=0.03). However, signs of peripheral fatigue were significantly higher in the cold group compared to the thermoneutral group (p=0.008). Pre-cooling had no effect on isometric MVC of the right quadriceps muscle or on ratings of perceived exertion. Pre-cooling attenuated central fatigue and led to significantly longer submaximal contraction times compared to the thermoneutral application. These findings support the use of pre-cooling procedures
Lee, Anne W.M. (E-mail: awmlee@ha.org.hk); Tung, Stewart Y.; Chan, Anthony T.C.; Chappell, Rick; Fu, Y.-T.; Lu, Tai-Xiang; Tan, Terence; Chua, Daniel T.T.; O'Sullivan, Brian; Xu, Shirley L.; Pang, Ellie S.Y.; Sze, W.-M.; Leung, T.-W.; Kwan, W.-H.; Chan, Paddy; Liu, X.-F.; Tan, E.-H.; Sham, Jonathan; Siu, Lillian; Lau, W.-H.
2006-09-01
Purpose: To compare the benefit achieved by concurrent chemoradiotherapy (CRT) and/or accelerated fractionation (AF) vs. radiotherapy (RT) alone with conventional fractionation (CF) for patients with T3-4N0-1M0 nasopharyngeal carcinoma (NPC). Methods and Materials: All patients were irradiated with the same RT technique to ≥66 Gy at 2 Gy per fraction, conventional five fractions/week in the CF and CF+C (chemotherapy) arms, and accelerated six fractions/week in the AF and AF+C arms. The CF+C and AF+C patients were given the Intergroup 0099 regimen (concurrent cisplatin plus adjuvant cisplatin and 5-fluorouracil). Results: Between 1999 and April 2004, 189 patients were randomly assigned; the trial was terminated early because of slow accrual. The median follow-up was 2.9 years. When compared with the CF arm, significant improvement in failure-free survival (FFS) was achieved by the AF+C arm (94% vs. 70% at 3 years, p = 0.008), but both the AF arm and the CF+C arm were insignificant (p ≥ 0.38). Multivariate analyses showed that CRT was a significant factor: hazard ratio (HR) = 0.52 (0.28-0.97); AF per se was insignificant: HR = 0.68 (0.37-1.25); the interaction of CRT by AF was strongly significant (p = 0.006). Both CRT arms had significant increases in acute toxicities (p < 0.005), and the AF+C arm also incurred a borderline increase in late toxicities (34% vs. 14% at 3 years, p = 0.05). Conclusions: Preliminary results suggest that concurrent chemoradiotherapy with accelerated fractionation could significantly improve tumor control when compared with conventional RT alone; further confirmation of the therapeutic ratio is warranted.
Markov reliability models for digital flight control systems
NASA Technical Reports Server (NTRS)
Mcgough, John; Reibman, Andrew; Trivedi, Kishor
1989-01-01
The reliability of digital flight control systems can often be accurately predicted using Markov chain models. The cost of numerical solution depends on a model's size and stiffness. Acyclic Markov models, a useful special case, are particularly amenable to efficient numerical solution. Even in the general case, instantaneous coverage approximation allows the reduction of some cyclic models to more readily solvable acyclic models. After considering the solution of single-phase models, the discussion is extended to phased-mission models. Phased-mission reliability models are classified based on the state restoration behavior that occurs between mission phases. As an economical approach for the solution of such models, the mean failure rate solution method is introduced. A numerical example is used to show the influence of fault-model parameters and interphase behavior on system unreliability.
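As a minimal sketch of the kind of Markov reliability computation described above, the death-state probability at mission time T can be read off from the matrix exponential of the chain's generator. The 3-state model and its rates below are illustrative assumptions, not the paper's flight-control model:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state continuous-time Markov reliability model:
# 0 = fully operational (two active channels), 1 = one channel failed
# (degraded), 2 = system failure (absorbing "death" state).
lam, mu = 1e-4, 1e-3   # assumed channel-failure and uncovered-fault rates, per hour

Q = np.array([
    [-2 * lam,      2 * lam,        0.0],   # either of two channels can fail
    [     0.0, -(lam + mu),  lam + mu],      # degraded: second failure or uncovered fault
    [     0.0,          0.0,       0.0],     # absorbing failure state
])

p0 = np.array([1.0, 0.0, 0.0])   # start fully operational
T = 10.0                          # mission time, hours
pT = p0 @ expm(Q * T)             # Kolmogorov forward solution p(T) = p0 exp(QT)
print(f"unreliability at T={T} h: {pT[2]:.3e}")
```

The rows of Q sum to zero, so probability is conserved; the unreliability is the mass accumulated in the absorbing state by time T.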
State reduction for semi-Markov reliability models
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1990-01-01
Trimming, a method of reducing the number of states in a semi-Markov reliability model, is described, and an error bound is derived. The error bound uses only three parameters from the semi-Markov model: (1) the maximum sum of rates for failure transitions leaving any state, (2) the maximum average holding time for a recovery-mode state, and (3) the operating time for the system. The error bound can be computed before any model generation takes place, which means the modeler can decide immediately whether the model can be trimmed. The trimming has a precise and simple description and thus can be easily included in a program that generates reliability models. The simplest version of the error bound for trimming is presented. More accurate versions can be obtained by requesting more information about the system being modeled.
Statistical significance test for transition matrices of atmospheric Markov chains
NASA Technical Reports Server (NTRS)
Vautard, Robert; Mo, Kingtse C.; Ghil, Michael
1990-01-01
Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
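A Monte Carlo significance test of this flavor can be sketched as follows: simulate sequences under the null hypothesis that successive regimes are independent draws from the marginal regime frequencies, and compare each observed transition count against its simulated null distribution. The toy data and the specific test statistic here are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_counts(seq, k):
    """Count observed one-step transitions in a regime sequence."""
    C = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C

# Toy observed regime sequence over 3 "flow regimes"
seq = rng.integers(0, 3, size=500)
k = 3
C_obs = transition_counts(seq, k)
marg = np.bincount(seq, minlength=k) / len(seq)   # marginal regime frequencies

# Monte Carlo null distribution: independent draws from the marginals
n_sim = 2000
null = np.zeros((n_sim, k, k))
for i in range(n_sim):
    sim = rng.choice(k, size=len(seq), p=marg)
    null[i] = transition_counts(sim, k)

# Two-sided empirical p-value for each transition-matrix element
p_hi = (null >= C_obs).mean(axis=0)
p_lo = (null <= C_obs).mean(axis=0)
p = np.minimum(2 * np.minimum(p_hi, p_lo), 1.0)
print(np.round(p, 3))
```

Elements with small p indicate transitions occurring significantly more (or less) often than independence would predict, which is the sense in which the test flags "most likely" and "most unlikely" regime transitions.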
Recursive utility in a Markov environment with stochastic growth
Hansen, Lars Peter; Scheinkman, José A.
2012-01-01
Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
Sentiment classification technology based on Markov logic networks
NASA Astrophysics Data System (ADS)
He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe
2016-07-01
With diverse online media emerging, there is a growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which feature a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of the sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the individual learning model. (2) Multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.
EMMA: A Software Package for Markov Model Building and Analysis.
Senne, Martin; Trendelkamp-Schroer, Benjamin; Mey, Antonia S J S; Schütte, Christof; Noé, Frank
2012-07-10
The study of folding and conformational changes of macromolecules by molecular dynamics simulations often requires the generation of large amounts of simulation data that are difficult to analyze. Markov (state) models (MSMs) address this challenge by providing a systematic way to decompose the state space of the molecular system into substates and to estimate a transition matrix containing the transition probabilities between these substates. This transition matrix can be analyzed to reveal the metastable, i.e., long-living, states of the system, its slowest relaxation time scales, and transition pathways and rates, e.g., from unfolded to folded, or from dissociated to bound states. Markov models can also be used to calculate spectroscopic data and thus serve as a way to reconcile experimental and simulation data. To reduce the technical burden of constructing, validating, and analyzing such MSMs, we provide the software framework EMMA that is freely available at https://simtk.org/home/emma .
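The core estimation step that such MSM software performs can be sketched in a few lines (a simplified illustration, not EMMA's actual API): count transitions at a lag time tau among discrete substates, row-normalize to obtain the transition matrix, and convert its non-unit eigenvalues into implied relaxation timescales:

```python
import numpy as np

def estimate_msm(dtraj, n_states, tau):
    """Row-stochastic transition matrix from a discrete trajectory at lag tau."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-tau], dtraj[tau:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(T, tau):
    """t_i = -tau / ln(lambda_i), skipping the stationary eigenvalue 1."""
    evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -tau / np.log(evals[1:])

# Toy two-well discrete trajectory: long dwells in state 0 and state 1
rng = np.random.default_rng(1)
dtraj, s = [], 0
for _ in range(20000):
    if rng.random() < 0.01:   # rare jumps -> one slow relaxation process
        s = 1 - s
    dtraj.append(s)

T = estimate_msm(np.array(dtraj), n_states=2, tau=1)
ts = implied_timescales(T, tau=1)[0]
print("transition matrix:\n", T.round(3))
print("slowest implied timescale (steps):", round(float(ts), 1))
```

For this toy chain the second eigenvalue is about 1 - 2(0.01) = 0.98, so the slowest implied timescale should come out near -1/ln(0.98), i.e. roughly 50 steps.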
Liouville equation and Markov chains: epistemological and ontological probabilities
NASA Astrophysics Data System (ADS)
Costantini, D.; Garibaldi, U.
2006-06-01
The greatest difficulty of a probabilistic approach to the foundations of Statistical Mechanics lies in the fact that for a system ruled by classical or quantum mechanics a basic description exists whose evolution is deterministic. For such a system any kind of irreversibility is impossible in principle. The probability used in this approach is epistemological. On the contrary, for irreducible aperiodic Markov chains the invariant measure is reached with probability one whatever the initial conditions. Almost surely the uniform distributions, on which the equilibrium treatment of quantum and classical perfect gases is based, are reached as time goes by. The transition probability for binary collisions, deduced from the Ehrenfest-Brillouin model, gives rise to an irreducible aperiodic Markov chain and thus to an equilibrium distribution. This means that we are describing the temporal probabilistic evolution of the system. The probability involved in this evolution is ontological.
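The convergence to an invariant measure described above can be illustrated with the plain Ehrenfest urn (a deliberate simplification, not the Ehrenfest-Brillouin collision model): N balls sit in two urns, and at each step a uniformly chosen ball switches urns. A lazy step is added below to make the chain aperiodic; the invariant law is then Binomial(N, 1/2):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
N, steps = 10, 200_000
k = N // 2                      # balls currently in urn A
visits = np.zeros(N + 1)

for _ in range(steps):
    if rng.random() < 0.5:      # lazy step: stay put (ensures aperiodicity)
        pass
    elif rng.random() < k / N:  # chosen ball was in urn A -> moves to B
        k -= 1
    else:                       # chosen ball was in urn B -> moves to A
        k += 1
    visits[k] += 1

empirical = visits / steps
binomial = np.array([comb(N, j) / 2**N for j in range(N + 1)])
print("max deviation from Binomial(N, 1/2):", np.abs(empirical - binomial).max())
```

After enough steps the occupation frequencies approach the binomial invariant distribution regardless of the starting configuration, which is exactly the "invariant measure reached with probability one" behavior the abstract invokes.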
A Markov Chain Model for Changes in Users’ Assessment of Search Results
Zhitomirsky-Geffet, Maayan; Bar-Ilan, Judit; Levene, Mark
2016-01-01
Previous research shows that users tend to change their assessment of search results over time. This is the first study that investigates the factors and reasons for these changes, and describes a stochastic model of user behaviour that may explain them. In particular, we hypothesise that most of the changes are local, i.e. between results with similar or close relevance to the query, and thus belong to the same "coarse" relevance category. According to the theory of coarse beliefs and categorical thinking, humans tend to divide the range of values under consideration into coarse categories, and are thus able to distinguish only between cross-category values but not within them. To test this hypothesis we conducted five experiments with about 120 subjects divided into 3 groups. Each student in every group was asked to rank and assign relevance scores to the same set of search results over two or three rounds, with a period of three to nine weeks between each round. The subjects of the last three-round experiment were then exposed to the differences in their judgements and were asked to explain them. We make use of a Markov chain model to measure change in users’ judgments between the different rounds. The Markov chain demonstrates that the changes converge, and that a majority of the changes are local to a neighbouring relevance category. We found that most of the subjects were satisfied with their changes, and did not perceive them as mistakes but rather as a legitimate phenomenon, since they believe that time has influenced their relevance assessment. Both our quantitative analysis and user comments support the hypothesis of the existence of coarse relevance categories resulting from categorical thinking in the context of user evaluation of search results. PMID:27171426
On a Markov-Modulated Shock and Wear Process
2009-04-01
Markov-modulated shocks and wear. The lifetime distribution function conditionally satisfies an integro-differential equation, which leads to its Laplace-Stieltjes transform (LST). The transient results are derived from the (transform) solution of an integro-differential equation describing the joint distribution of the cumulative degradation process.
The semi-Markov unreliability range evaluator program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1984-01-01
The SURE program is a design/validation tool for ultrareliable computer system architectures. The system uses simple algebraic formulas to compute accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. The mathematical formulas used in the program were derived from a mathematical theorem proven by Allan White under contract to NASA Langley Research Center. This mathematical theorem is discussed along with the user interface to the SURE program.
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
An abstract language for specifying Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1986-01-01
Markov models can be used to compute the reliability of virtually any fault tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high level language is described in a nonformal manner and illustrated by example.
Efficient Markov Network Structure Discovery Using Independence Tests
Bromberg, Facundo; Margaritis, Dimitris; Honavar, Vasant
2011-01-01
We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be NP-hard for Markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. The independence-based approach does not require the computation of the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl’s well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding having to perform statistical tests to estimate them. To accomplish this efficiently, GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show GSIMN can yield significant savings with respect to GSMN*, while generating a Markov network with comparable or in some cases improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, which produces all possible conditional independences resulting from repeatedly applying Pearl’s theorems to the known conditional independence tests. The results of this comparison show that GSIMN, by the sole use of the Triangle theorem, is nearly optimal in terms of the set of independence tests that it infers. PMID:22822297
Visual Recognition of American Sign Language Using Hidden Markov Models.
1995-02-01
3.3 Previous Use of Hidden Markov Models in Gesture Recognition; 3.4 Use of HMMs for Recognizing Sign Language; 4 Tracking and Modeling... Instead, computer systems may be employed to annotate certain features of sequences. A human gesture recognition system adds another dimension to... focus for many gesture recognition systems. Tracking the natural hand in real time using camera imagery is difficult, but successful systems have...
Value-Function Approximations for Partially Observable Markov Decision Processes
2000-08-01
The model of choice for problems similar to patient management is the partially observable Markov decision process (POMDP) (Drake, 1962; Astrom, 1965)... ient to work with belief states that assign probabilities to every possible process state (Astrom, 1965). In this case the Bellman equation reduces... approximate the value function for a POMDP is to assume that states of the process are fully observable (Astrom, 1965; Lovejoy, 1993). In that case the
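The belief states mentioned in the fragment above are maintained by a standard Bayesian update: after taking an action and receiving an observation, the new belief is the predicted state distribution reweighted by the observation likelihood and renormalized. A minimal sketch, assuming tabular transition and observation matrices (the variable names are illustrative):

```python
import numpy as np

def belief_update(b, T, O, obs):
    """Bayesian belief-state update for a discrete POMDP.
    b:   current belief over states, shape (S,)
    T:   transition matrix for the chosen action, T[s, s2] = P(s2 | s, a)
    O:   observation matrix, O[s2, o] = P(o | s2)
    obs: index of the observation actually received"""
    b_pred = b @ T              # predict: marginalize over the current state
    b_new = b_pred * O[:, obs]  # correct: weight by observation likelihood
    return b_new / b_new.sum()  # renormalize to a probability vector
```

Value-function approximation methods of the kind the abstract surveys operate on exactly these belief vectors rather than on the hidden state itself.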
Markov Chain evaluation of acute postoperative pain transition states
Tighe, Patrick J.; Bzdega, Matthew; Fillingim, Roger B.; Rashidi, Parisa; Aytug, Haldun
2016-01-01
Prior investigations on acute postoperative pain dynamicity have focused on daily pain assessments, and so were unable to examine intra-day variations in acute pain intensity. We analyzed 476,108 postoperative acute pain intensity ratings clinically documented on postoperative days 1 to 7 from 8,346 surgical patients using Markov Chain modeling to describe how patients are likely to transition from one pain state to another in a probabilistic fashion. The Markov Chain was found to be irreducible and positive recurrent, with no absorbing states. Transition probabilities ranged from 0.0031 for the transition from state 10 to state 1, to 0.69 for the transition from state zero to state zero. The greatest density of transitions was noted in the diagonal region of the transition matrix, suggesting that patients were generally most likely to transition to the same pain state as their current state. There were also slightly increased probability densities in transitioning to a state of asleep or zero from the current state. Examination of the number of steps required to traverse from a particular first pain score to a target state suggested that overall, fewer steps were required to reach a state of zero (range 6.1–8.8 steps) or asleep (range 9.1–11 steps) than were required to reach a mild pain intensity state. Our results suggest that Markov Chains are a feasible method for describing probabilistic postoperative pain trajectories, pointing toward the possibility of using Markov decision processes to model sequential interactions between pain intensity ratings and postoperative analgesic interventions. PMID:26588689
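The transition matrix at the heart of such an analysis is estimated by counting consecutive rating pairs and row-normalizing. A minimal sketch, assuming pain intensities coded 0..10 (the study's additional 'asleep' state is omitted here for brevity; the function name is illustrative):

```python
import numpy as np

def transition_matrix(ratings, n_states=11):
    """Estimate a first-order Markov transition matrix from a sequence
    of pain-intensity ratings coded 0..10. Entry [i, j] is the observed
    probability of moving from state i to state j at the next rating."""
    counts = np.zeros((n_states, n_states))
    for cur, nxt in zip(ratings[:-1], ratings[1:]):
        counts[cur, nxt] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # row-normalize; leave never-visited states as all-zero rows
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)
```

The "density in the diagonal region" reported in the abstract corresponds to large entries `P[i, i]` of such a matrix.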
Probabilistic Independence Networks for Hidden Markov Probability Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Heckerman, David; Jordan, Michael I.
1996-01-01
In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs.
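The forward pass of the F-B algorithm mentioned above computes the likelihood of an observation sequence by propagating a vector of joint probabilities through the chain. A minimal sketch for a discrete HMM (variable names are illustrative; the backward pass and PIN generalization are not shown):

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward pass of the forward-backward algorithm for a discrete HMM.
    obs: sequence of observation indices
    pi:  initial state distribution, shape (S,)
    A:   transition matrix, A[i, j] = P(state j | state i)
    B:   emission matrix, B[i, k] = P(obs k | state i)
    Returns P(obs sequence) under the model."""
    alpha = pi * B[:, obs[0]]          # alpha[i] = P(obs[0], state i)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate one step, then emit
    return alpha.sum()                 # marginalize out the final state
```

In the PIN view of the paper, this recursion is an instance of general message passing on the HMM's independence graph.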
Symbolic Heuristic Search for Factored Markov Decision Processes
NASA Technical Reports Server (NTRS)
Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.
2003-01-01
We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.
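The "forward search from a start state" idea can be illustrated without the symbolic model-checking machinery: restrict dynamic programming to the states actually reachable from the start, so unreachable states are never evaluated. A simplified sketch under that assumption (no state abstraction or admissible heuristic pruning; all names are illustrative, not the authors' algorithm):

```python
def reachable_value_iteration(start, actions, transition, reward,
                              gamma=0.95, tol=1e-6):
    """Value iteration restricted to states reachable from `start`.
    actions(s) -> list of actions (empty for terminal states)
    transition(s, a) -> list of (prob, next_state)
    reward(s, a) -> float"""
    # forward reachability: collect only states the start can reach
    reachable, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for a in actions(s):
            for _, s2 in transition(s, a):
                if s2 not in reachable:
                    reachable.add(s2)
                    frontier.append(s2)
    # Bellman backups over the reachable set only
    V = {s: 0.0 for s in reachable}
    while True:
        delta = 0.0
        for s in reachable:
            acts = actions(s)
            if not acts:
                continue  # terminal state keeps value 0
            best = max(reward(s, a)
                       + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                       for a in acts)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

The paper's contribution is to push this further: an admissible heuristic prunes even reachable states, and symbolic (decision-diagram) representations handle sets of states at once.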
The sharp constant in Markov's inequality for the Laguerre weight
Sklyarov, Vyacheslav P
2009-06-30
We prove that the polynomial of degree n that deviates least from zero in the uniformly weighted metric with Laguerre weight is the extremal polynomial in Markov's inequality for the norm of the kth derivative. Moreover, the corresponding sharp constant does not exceed 8^k n! k! / ((n-k)! (2k)!). For the derivative of a fixed order this bound is asymptotically sharp as n → ∞. Bibliography: 20 items.