Sample records for Markov blanket-based method

  1. The Markov blankets of life: autonomy, active inference and the free energy principle

    PubMed Central

    Palacios, Ensor; Friston, Karl; Kiverstein, Julian

    2018-01-01

    This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket, thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment. PMID:29343629
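
    In graph-theoretic terms, the Markov blanket of a node in a Bayesian network is the set of its parents, its children, and its children's other parents; conditioned on the blanket, the node is independent of every other node. A minimal sketch of this definition (Python with networkx; the node names are hypothetical, chosen to echo the active-inference partition into external, sensory, internal, and active states):

      import networkx as nx

      def markov_blanket(g: nx.DiGraph, node):
          """Parents, children, and co-parents (the children's other parents)."""
          parents = set(g.predecessors(node))
          children = set(g.successors(node))
          co_parents = {p for c in children for p in g.predecessors(c)}
          return (parents | children | co_parents) - {node}

      # Toy loop: external -> sensory -> internal -> active -> external
      g = nx.DiGraph([("eta", "s"), ("s", "mu"), ("mu", "a"), ("a", "eta")])
      print(markov_blanket(g, "mu"))  # {'s', 'a'}: the blanket shields mu from eta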

  2. Hierarchical Markov blankets and adaptive active inference. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Kirchhoff, Michael

    2018-03-01

    Ramstead MJD, Badcock PB, and Friston KJ (Answering Schrödinger's question: A free-energy formulation. Phys Life Rev 2018, this issue; https://doi.org/10.1016/j.plrev.2017.09.001) motivate a multiscale characterisation of living systems in terms of hierarchically structured Markov blankets - a view of living systems as comprised of Markov blankets of Markov blankets [1-4]. It is effectively a treatment of what life is and how it is realised, cast in terms of how the Markov blankets of living systems self-organise via active inference - a corollary of the free energy principle [5-7].

  3. A novel Markov Blanket-based repeated-fishing strategy for capturing phenotype-related biomarkers in big omics data.

    PubMed

    Li, Hongkai; Yuan, Zhongshang; Ji, Jiadong; Xu, Jing; Zhang, Tao; Zhang, Xiaoshuai; Xue, Fuzhong

    2016-03-09

    We propose a novel Markov Blanket-based repeated-fishing strategy (MBRFS) in an attempt to increase the power of an existing Markov Blanket method (DASSO-MB) while maintaining its advantages in omics data analysis. Both simulation and real data analyses were conducted to assess its performance in comparison with other methods, including the χ² test with Bonferroni and B-H adjustment, the least absolute shrinkage and selection operator (LASSO), and DASSO-MB. A series of simulation studies showed that the true discovery rate (TDR) of the proposed MBRFS was always close to zero under the null hypothesis (odds ratio = 1 for each SNP), with excellent stability, in all three scenarios: independent phenotype-related SNPs without linkage disequilibrium (LD) around them, correlated phenotype-related SNPs without LD around them, and phenotype-related SNPs with strong LD around them. As expected, under different odds ratios and minor allele frequencies (MAFs), MBRFS always performed best at capturing the true phenotype-related biomarkers, with a higher Matthews correlation coefficient (MCC), in all three scenarios. More importantly, because the proposed MBRFS uses a repeated-fishing strategy, it still captures phenotype-related SNPs with minor effects when those SNPs are non-significant under the χ² test after Bonferroni correction. Analyses of various real omics data, including GWAS, DNA methylation, gene expression, and metabolite data, indicated that the proposed MBRFS consistently detected reasonable biomarkers. The proposed MBRFS can capture the true phenotype-related biomarkers, with a reduced false negative rate, whether the phenotype-related biomarkers are independent or correlated, as well as in the circumstance that phenotype-related biomarkers are associated with non-phenotype-related ones.
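
    The abstract does not spell out the repeated-fishing loop, but its logic (catch the significant features, set them aside, and re-test the remainder under a lighter multiple-testing burden) can be sketched as below. The marginal χ² test here is only a stand-in for the Markov blanket learner (DASSO-MB) used in the paper, and all names are illustrative:

      import numpy as np
      from sklearn.feature_selection import chi2

      def repeated_fishing(X, y, alpha=0.05, max_rounds=10):
          """X: nonnegative genotype counts (0/1/2); y: binary phenotype.
          Each round removes the catches, shrinking the Bonferroni denominator,
          so minor-effect SNPs can surface in later rounds."""
          remaining = list(range(X.shape[1]))
          caught = []
          for _ in range(max_rounds):
              if not remaining:
                  break
              _, pvals = chi2(X[:, remaining], y)
              hits = [remaining[i] for i in np.where(pvals < alpha / len(remaining))[0]]
              if not hits:
                  break
              caught.extend(hits)
              remaining = [j for j in remaining if j not in hits]
          return caught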

  4. Markov blanket-based approach for learning multi-dimensional Bayesian network classifiers: an application to predict the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39).

    PubMed

    Borchani, Hanen; Bielza, Concha; Martínez-Martín, Pablo; Larrañaga, Pedro

    2012-12-01

    Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models recently proposed to deal with multi-dimensional classification problems, where each instance in the data set has to be assigned to more than one class variable. In this paper, we propose a Markov blanket-based approach for learning MBCs from data. Basically, it consists of determining the Markov blanket around each class variable using the HITON algorithm, then specifying the directionality over the MBC subgraphs. Our approach is applied to the problem of predicting the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39) in order to estimate the health-related quality of life of Parkinson's patients. Fivefold cross-validation experiments were carried out on randomly generated synthetic data sets, the Yeast data set, and a real-world Parkinson's disease data set containing 488 patients. The experimental study, including comparison with additional Bayesian network-based approaches, back propagation for multi-label learning, multi-label k-nearest neighbor, multinomial logistic regression, ordinary least squares, and censored least absolute deviations, shows encouraging results in terms of predictive accuracy as well as the identification of dependence relationships among class and feature variables. Copyright © 2012 Elsevier Inc. All rights reserved.
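
    A minimal sketch of the overall shape of such a classifier: one model per class variable, trained on an (approximate) Markov blanket of that variable. Mutual-information ranking is a crude stand-in for the HITON blanket search, and all names are illustrative:

      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline

      def fit_mbc(X, Y, k=5):
          """Multi-dimensional classification: one per-class-variable model,
          each restricted to an approximate Markov blanket of its class."""
          return [make_pipeline(SelectKBest(mutual_info_classif, k=k), GaussianNB())
                  .fit(X, Y[:, j]) for j in range(Y.shape[1])]

      def predict_mbc(models, X):
          return np.column_stack([m.predict(X) for m in models])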

  5. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm, on average, outperformed all the comparison algorithms on all performance measures. PMID:25045325
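
    The model-averaging step at the heart of the ISMB algorithm has the standard Bayesian model averaging form; a compact LaTeX statement, with M denoting the instance-specific set of Markov blanket models located by the heuristic search:

      P(z_t \mid \mathbf{x}_t, D) \;\approx\; \sum_{G \in M} P(z_t \mid \mathbf{x}_t, G, D)\, P(G \mid D),
      \qquad
      P(G \mid D) \;\propto\; P(D \mid G)\, P(G),

    where z_t is the target variable of the instance at hand, x_t its features, and D the training data.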

  6. Lifting the Markov blankets of socio-cultural evolution. A comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2018-03-01

    Ramstead et al. [8] claim an encompassing ontology which can be used as a heuristic for studying life, mind, and society both empirically and in terms of computer simulations. The system levels self-organize into a hierarchy; "Markov blankets" close the various levels for one another. Homo sapiens sapiens is placed at the top of this hierarchy as "the world's most complex living systems." Humans are said to generate "(epi)genetically-specified expectations that have been shaped by selection to guide action-perception cycles toward adaptive or unsurprising states."

  7. Reverse Engineering of Modified Genes by Bayesian Network Analysis Defines Molecular Determinants Critical to the Development of Glioblastoma

    PubMed Central

    Kunkle, Brian W.; Yoo, Changwon; Roy, Deodutta

    2013-01-01

    In this study we have identified key genes that are critical in the development of astrocytic tumors. Meta-analysis of microarray studies comparing normal tissue to astrocytoma revealed a set of 646 genes differentially expressed in the majority of astrocytomas. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I–IV), and ‘key genes’ within each grade were identified. The genes found to be most influential in the development of the highest grade of astrocytoma, Glioblastoma multiforme, were: COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated, except MPP2 (down-regulated). These 10 genes were able to predict tumor status with 96–100% confidence when using logistic regression, cross validation, and support vector machine analysis. The Markov blanket genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone, and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes in particular - EGFR, COL4A1, and CDK4 - seemed to be potential ‘hubs of activity’. Modified expression of these 10 Markov blanket genes increases the lifetime risk of developing glioblastoma compared to the normal population, and the risk estimates increased dramatically with joint effects of 4 or more Markov blanket genes: joint interaction effects of 4, 5, 6, 7, 8, 9 or 10 Markov blanket genes produced increases in lifetime risk of 9, 13, 20.9, 26.7, 52.8, 53.2, 78.1 or 85.9%, respectively, compared to the normal population. In summary, it appears that modified expression of several ‘key genes’ may be required for the development of glioblastoma. Further studies are needed to validate these ‘key genes’ as useful tools for early detection and novel therapeutic options for these tumors. PMID:23737970

  8. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO)

    PubMed Central

    Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing

    2016-01-01

    The ability to identify hazardous traffic events is considered one of the most effective means of reducing the occurrence of crashes. Previous studies have examined only certain particular hazardous traffic events, mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the prediction accuracy was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm ranked best in terms of prediction accuracy. These conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles. PMID:27420073

  9. Hazardous Traffic Event Detection Using Markov Blanket and Sequential Minimal Optimization (MB-SMO).

    PubMed

    Yan, Lixin; Zhang, Yishi; He, Yi; Gao, Song; Zhu, Dunyao; Ran, Bin; Wu, Qing

    2016-07-13

    The ability to identify hazardous traffic events is considered one of the most effective means of reducing the occurrence of crashes. Previous studies have examined only certain particular hazardous traffic events, mainly based on dedicated video stream data and GPS data. The objective of this study is twofold: (1) the Markov blanket (MB) algorithm is employed to extract the main factors associated with hazardous traffic events; (2) a model is developed to identify hazardous traffic events using driving characteristics, vehicle trajectory, and vehicle position data. Twenty-two licensed drivers were recruited to carry out a natural driving experiment in Wuhan, China, and multi-sensor information data were collected for different types of traffic events. The results indicated that a vehicle's speed, the standard deviation of speed, the standard deviation of skin conductance, the standard deviation of brake pressure, turn signal, the acceleration of steering, the standard deviation of acceleration, and the acceleration in Z (G) have significant influences on hazardous traffic events. The sequential minimal optimization (SMO) algorithm was adopted to build the identification model, and the prediction accuracy was higher than 86%. Moreover, compared with other detection algorithms, the MB-SMO algorithm ranked best in terms of prediction accuracy. These conclusions can provide reference evidence for the development of dangerous-situation warning products and the design of intelligent vehicles.
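
    A sketch of the two-stage MB-SMO pipeline under stated assumptions: a mutual-information ranking stands in for the Markov blanket step, and scikit-learn's SVC (whose libsvm solver is an SMO-style optimizer) stands in for the SMO classifier:

      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Stage 1: keep 8 driving/vehicle factors (the abstract reports 8);
      # Stage 2: an SMO-trained SVM separates hazardous from normal events.
      mb_smo = make_pipeline(
          StandardScaler(),
          SelectKBest(mutual_info_classif, k=8),
          SVC(kernel="rbf"),
      )
      # Usage: mb_smo.fit(X_train, y_train); mb_smo.score(X_test, y_test)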

  10. Casting a Wider Net: Data Driven Discovery of Proxies for Target Diagnoses

    PubMed Central

    Ramljak, Dusan; Davey, Adam; Uversky, Alexey; Roychoudhury, Shoumik; Obradovic, Zoran

    2015-01-01

    Background: The Hospital Readmissions Reduction Program (HRRP), introduced in October 2012 as part of the Affordable Care Act (ACA), ties hospital reimbursement rates to adjusted 30-day readmissions and mortality performance for a small set of target diagnoses. There is growing concern and emerging evidence that using a small set of target diagnoses to establish reimbursement rates can lead to unstable results that are susceptible to manipulation (gaming) by hospitals. Methods: We propose a novel approach to identifying co-occurring diagnoses and procedures that can themselves serve as a proxy indicator of the target diagnosis. The proposed approach constructs a Markov blanket that allows a high level of performance, in terms of predictive accuracy and scalability, along with interpretability of the obtained results. In order to scale to a large number of co-occurring diagnoses (features) and hospital discharge records (samples), our approach begins with Google's PageRank algorithm and exploits the stability of the obtained results to rank the contribution of each diagnosis/procedure in terms of its presence in a Markov blanket for outcome prediction. Results: The presence of the target diagnoses acute myocardial infarction (AMI), congestive heart failure (CHF), pneumonia (PN), and sepsis in hospital discharge records for Medicare and Medicaid patients in California and New York state hospitals (2009–2011) was predicted using models trained on a subset of California state hospitals (2003–2008). Using repeated holdout evaluation on ~30,000,000 hospital discharge records, we analyzed the stability of the proposed approach. Model performance was measured using the Area Under the ROC Curve (AUC), together with the importance and contribution of single features to the final result. Results varied from AUC=0.68 (SE<1e-4) for PN on cross-validation datasets to AUC=0.94 (SE<1e-7) for sepsis on California hospitals (2009–2011), while feature stability consistently improved with more training data for each target diagnosis. Prediction accuracy for the considered target diagnoses approaches or exceeds accuracy estimates for discharge record data. Conclusions: This paper presents a novel approach to identifying a small subset of relevant diagnoses and procedures that approximate the Markov blanket for target diagnoses. The accuracy and interpretability of the results demonstrate the potential of our approach. PMID:26958243
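
    A toy version of the co-occurrence/PageRank pre-ranking described above (Python with networkx; the diagnosis codes are hypothetical):

      import networkx as nx
      from itertools import combinations

      def rank_proxies(records, target):
          """Rank codes co-occurring with `target` by weighted PageRank on the
          code co-occurrence graph, as a scalable pre-filter for blanket search."""
          g = nx.Graph()
          for codes in records:                      # one set of codes per discharge
              for a, b in combinations(sorted(codes), 2):
                  if g.has_edge(a, b):
                      g[a][b]["weight"] += 1
                  else:
                      g.add_edge(a, b, weight=1)
          pr = nx.pagerank(g, weight="weight")
          return sorted(g.neighbors(target), key=pr.get, reverse=True)

      records = [{"AMI", "CHF", "DM"}, {"AMI", "HTN"}, {"CHF", "HTN", "DM"}]
      print(rank_proxies(records, "AMI"))            # codes ranked by PageRank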

  11. Embodying Markov blankets. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Pezzulo, Giovanni; Levin, Michael

    2018-03-01

    The free-energy principle (FEP) has been initially proposed as a theory of brain structure and function [1], but its scope is rapidly extending to explain biological phenomena at multiple levels of complexity, from simple life forms and their morphology [2] to complex societal and cultural dynamics [3].

  12. Study on the temperature control mechanism of the tritium breeding blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Liu, Changle; Qiu, Yang; Zhang, Jie; Zhang, Jianzhong; Li, Lei; Yao, Damao; Li, Guoqiang; Gao, Xiang; Wu, Songtao; Wan, Yuanxi

    2017-12-01

    The Chinese fusion engineering testing reactor (CFETR) will demonstrate tritium self-sufficiency using a tritium breeding blanket for the tritium fuel cycle. The temperature control mechanism (TCM) involves the tritium production of the breeding blanket and has an impact on tritium self-sufficiency. In this letter, the CFETR tritium target is addressed according to its missions. TCM research on the neutronics and thermal hydraulics issues for the CFETR blanket is presented. The key concerns regarding the blanket design for tritium production under temperature field control are depicted. A systematic theory on the TCM is established based on a multiplier blanket model. In particular, a closed-loop method is developed for the mechanism with universal function solutions, which is employed in the CFETR blanket design activity for tritium production. A tritium accumulation phenomenon is found close to the coolant in the blanket interior, which has a very important impact on current blanket concepts using water coolant inside the blanket. In addition, an optimal tritium breeding ratio (TBR) method based on the TCM is proposed, combined with thermal hydraulics and finite element technology. Meanwhile, the energy gain factor is adopted to estimate neutron heat deposition, which is a key parameter relating to the blanket TBR calculations, considering the structural factors. This work will benefit breeding blanket engineering for the CFETR reactor in the future.

  13. An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes

    ERIC Educational Resources Information Center

    Kaplan, David

    2008-01-01

    This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model…

  14. Life and Understanding: The Origins of “Understanding” in Self-Organizing Nervous Systems

    PubMed Central

    Yufik, Yan M.; Friston, Karl

    2016-01-01

    This article is motivated by a formulation of biotic self-organization in Friston (2013), where the emergence of “life” in coupled material entities (e.g., macromolecules) was predicated on bounded subsets that maintain a degree of statistical independence from the rest of the network. Boundary elements in such systems constitute a Markov blanket, separating the internal states of a system from its surrounding states. In this article, we ask whether Markov blankets operate in the nervous system and underlie the development of intelligence, enabling a progression from the ability to sense the environment to the ability to understand it. Markov blankets have been previously hypothesized to form in neuronal networks as a result of phase transitions that cause network subsets to fold into bounded assemblies, or packets (Yufik and Sheridan, 1997; Yufik, 1998a). The ensuing neuronal packets hypothesis builds on the notion of neuronal assemblies (Hebb, 1949, 1980), treating such assemblies as flexible but stable biophysical structures capable of withstanding entropic erosion; in other words, structures that maintain their integrity under changing conditions. In this treatment, neuronal packets give rise to perception of “objects”; i.e., quasi-stable (stimulus bound) feature groupings that are conserved over multiple presentations (e.g., the experience of perceiving “apple” can be interrupted and resumed many times). Monitoring the variations in such groups enables the apprehension of behavior; i.e., attributing to objects the ability to undergo changes without loss of self-identity. Ultimately, “understanding” involves self-directed composition and manipulation of the ensuing “mental models” that are constituted by neuronal packets, whose dynamics capture relationships among objects: that is, dependencies in the behavior of objects under varying conditions. For example, movement is known to involve rotation of population vectors in the motor cortex (Georgopoulos et al., 1988, 1993). The neuronal packet hypothesis associates “understanding” with the ability to detect and generate coordinated rotation of population vectors—in neuronal packets—in associative cortex and other regions in the brain. The ability to coordinate vector representations in this way is assumed to have developed in conjunction with the ability to postpone overt motor expression of implicit movement, thus creating a mechanism for prediction and behavioral optimization via mental modeling that is unique to higher species. This article advances the notion that Markov blankets—necessary for the emergence of life—have been subsequently exploited by evolution and thus ground the ways that living organisms adapt to their environment, culminating in their ability to understand it. PMID:28018185

  15. In Silico Syndrome Prediction for Coronary Artery Disease in Traditional Chinese Medicine

    PubMed Central

    Lu, Peng; Chen, Jianxin; Zhao, Huihui; Gao, Yibo; Luo, Liangtao; Zuo, Xiaohan; Shi, Qi; Yang, Yiping; Yi, Jianqiang; Wang, Wei

    2012-01-01

    Coronary artery disease (CAD) is a leading cause of death in the world. Differentiation of syndrome (ZHENG) is the criterion for diagnosis and therapy in Traditional Chinese Medicine (TCM); in silico syndrome prediction can therefore improve the performance of treatment. In this paper, we present a Bayesian network framework for constructing a high-confidence syndrome predictor based on an optimum symptom subset collected by Support Vector Machine (SVM) feature selection. Syndromes of CAD can be divided into asthenia and sthenia syndromes. In accordance with the hierarchical characteristics of syndromes, we first label every case with one of three syndrome types (asthenia, sthenia, or both) to handle patients presenting with several syndromes. On the basis of these three syndrome classes, we apply SVM feature selection to obtain the optimum symptom subset and compare this subset with Markov blanket feature selection using ROC analysis. Using this subset, six predictors of CAD syndromes are constructed with the Bayesian network technique; naïve Bayes, C4.5, logistic regression, and radial basis function (RBF) networks are designed for comparison with the Bayesian network. In conclusion, the Bayesian network method based on the optimum symptom subset provides a practical way to predict the six syndromes of CAD in TCM. PMID:22567030

  16. Cultural Markov blankets? Mind the other minds gap!. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Veissière, Samuel

    2018-03-01

    Ramstead et al. have pulled off an impressive feat. By combining recent developments in evolutionary systems theory (EST), machine learning, and theoretical biology, they seek to apply the free-energy principle (FEP) to tackle one of the most intractable questions in the physics of life: why and how do living systems resist the second law of thermodynamics and maintain themselves in a state of bounded organization? The authors expand on a formal model of neuronal self-organization to articulate a meta-theory of perception, action, and biobehaviour that they extend from the human brain and mind to body and society. They call this model "variational neuroethology" [1]. The basic idea is simple and elegant: living systems self-organize optimally by resisting internal entropy; that is, by minimizing free-energy. The model draws on, and significantly expands, Bayesian predictive-processing (PP) theories of cognition, according to which the brain generates statistical predictions of the environment based on prior learning, and guides behaviour by working optimally to minimise prediction errors. In the neuroethology account, free energy is understood as "a function of probabilistic beliefs" encoded in an organism's internal states about external states of the world. The model thus rejoins 'enactivist' and 'affordances' accounts in phenomenology and ecological psychology, in which 'reality' for a living organism is understood as perspective-dependent, and constructed from an agent's prior dispositions ("probabilistic beliefs" in Bayesian terms). In ecological terms, an organism operates in a niche within what its dispositions in relation to features of the environment 'afford'. Ramstead et al. borrow the concept of Markov Blanket from mathematics to describe the processing of internal states and beliefs through which an organism perceives its environment. In machine learning, a Markov Blanket is a statistical structure consisting of a network of nested 'parent' and 'child' nodes for hierarchical information processing. Ramstead et al. take up this model to describe the perceptive 'veil' through which human sensory states are coupled to affordances of the broader environment. Building on the recently formulated cultural affordances paradigm, the authors extend their model to a meta-theory of the human niche, in which "cultural ensembles minimise free energy by enculturing their members so that they share common sets of precision-weighting priors". Ramstead et al. propose to enrich the cultural affordances account by bringing in the hierarchical mechanistic mind (HMM) model, which assumes the free-energy principle as a general mechanism underpinning cognitive function on evolutionary, developmental, and real-time scales. They concede, however, that ways of further integrating the HMM with cultural affordances remain an open question. As a cognitive anthropologist and co-author of the first Cultural Affordances article [2], I am happy to provide the outline of an answer. For humans, affordances are mediated through recursive loops between natural features of the environment and human conventions. A chair, for example, affords sitting for bipedal agents. This is 'natural' enough. But for humans, chairs afford sitting and not-sitting in myriad context and status-specific ways. A throne affords not-sitting for all but the monarch. In the absence of the monarch, it may afford transgressive sitting for the most daring. How do these conventional affordances come to hold with such precision?
    In the original model, we defined culture as collectively patterned and mutually reinforced behaviour mediated by largely implicit expectations about what one expects others to also expect - and to expect of one by extension. Environmental cues may act as triggers of affordances, but joint meta-expectations do all the mediating work. Meaning and affordances in the environment of the Homo sapiens niche are mostly (if not exclusively) picked up through the 'veil' of what one expects others to expect. The Markov Blanket in the human niche (the cultural Markov Blanket), thus, serves as a buffer to exploit statistical regularities in human psychology at least as much, if not more than in external states of the world. Human internal states about external states, in other words, are mediated by expectations about other humans' internal states. The nestedness of these inferences should be primarily conceptualized at the level of recursive mindreading - or inferences about other humans' internal states (about both internal and external states), dispositions, anticipations, and propositional attitudes. In order to function optimally and minimise cognitive energy in any given context, I have to know that you [the context-relevant other, actual or generalized] know that I know that you know that I know, etc. how to behave in that context. Navigating social life and cultural affordances requires the smooth acquisition, processing, and constant updating of infinitely recursive inferences about many specific, generalized, and hypothetical other minds. It might be useful to specify, thus, that the cultural Markov Blanket is one that mediates world-agent perception and action through the veil of Other Minds.

  17. Efficient Algorithms for Bayesian Network Parameter Learning from Incomplete Data

    DTIC Science & Technology

    2015-07-01

    Van den Broeck, Guy; Mohan, Karthika; Choi, Arthur; Adnan … [the remainder of this DTIC record is garbled report boilerplate; the recoverable Markov-blanket-related fragment is a cited reference: Yaramakala, S., & Margaritis, D. (2005). Speculative Markov blanket discovery for optimal feature selection. In Proceedings of ICDM.]

  18. Weighted Markov chains for forecasting and analysis in Incidence of infectious diseases in jiangsu Province, China☆

    PubMed Central

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-01-01

    This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, reflecting the many uncertain characteristics of the incidence course. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized self-correlation (autocorrelation) coefficients as weights, based on the special characteristic of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology. PMID:23554632

  19. Weighted Markov chains for forecasting and analysis in Incidence of infectious diseases in jiangsu Province, China.

    PubMed

    Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng

    2010-05-01

    This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, reflecting the many uncertain characteristics of the incidence course. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. This method takes the standardized self-correlation (autocorrelation) coefficients as weights, based on the special characteristic of infectious disease incidence being a dependent stochastic variable. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated using existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.
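
    A compact sketch of the weighted Markov chain forecast, assuming incidence has already been discretized into states 0..n-1 by the clustering step; normalized absolute lag-k autocorrelations serve as the weights:

      import numpy as np

      def weighted_markov_forecast(states, n_states, max_lag=3):
          """Blend lag-k Markov predictions, weighting lag k by |autocorr(k)|."""
          s = np.asarray(states)
          P = np.zeros((n_states, n_states))
          for a, b in zip(s[:-1], s[1:]):            # empirical 1-step transitions
              P[a, b] += 1
          P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
          r = np.array([abs(np.corrcoef(s[:-k], s[k:])[0, 1])
                        for k in range(1, max_lag + 1)])
          w = r / r.sum()                            # normalized lag weights
          probs = sum(w[k - 1] * np.linalg.matrix_power(P, k)[s[-k]]
                      for k in range(1, max_lag + 1))
          return int(np.argmax(probs)), probs        # most likely next state

      states = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1]
      print(weighted_markov_forecast(states, n_states=3))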

  20. A Bayesian network model for predicting aquatic toxicity mode ...

    EPA Pesticide Factsheets

    The mode of toxic action (MoA) has been recognized as a key determinant of chemical toxicity, but MoA classification in aquatic toxicology has been limited. We developed a Bayesian network model to classify the aquatic toxicity mode of action using a recently published dataset containing over one thousand chemicals with MoA assignments for aquatic animal toxicity. Two-dimensional theoretical chemical descriptors were generated for each chemical using the Toxicity Estimation Software Tool. The model was developed through augmented Markov blanket discovery from the data set, with the MoA broad classifications as a target node. From cross validation, the overall precision for the model was 80.2%, with an R² of 0.959. The best precision was for the AChEI MoA (93.5%), where 257 chemicals out of 275 were correctly classified. Model precision was poorest for the reactivity MoA (48.5%), where 48 out of 99 reactive chemicals were correctly classified. Narcosis represented the largest class within the MoA dataset and had a precision and reliability of 80.0%, reflecting the global precision across all of the MoAs. False negatives for narcosis most often fell into the electron transport inhibition, neurotoxicity, or reactivity MoAs; false negatives for all other MoAs were most often narcosis. A probabilistic sensitivity analysis was undertaken for each MoA to examine the sensitivity to individual and multiple descriptor findings. The results show that the Markov blanket of a structurally …

  21. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  22. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
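
    For a discrete-time chain with transition matrix P and an additive observable A_T = Σ_t f(x_t), the rate functions referred to above follow from the standard tilted-matrix construction; a brief LaTeX statement of that textbook result:

      \theta(s) \;=\; \lim_{T \to \infty} \frac{1}{T} \ln \big\langle e^{\,s A_T} \big\rangle
                \;=\; \ln \lambda_{\max}\!\big(\tilde{P}_s\big),
      \qquad
      \tilde{P}_s(x, y) \;=\; P(x, y)\, e^{\,s f(y)},
      \qquad
      I(a) \;=\; \sup_s \big[\, s\,a - \theta(s) \big],

    where λ_max is the largest eigenvalue of the tilted matrix and I(a) governs the fluctuations of A_T / T.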

  23. Invited commentary: Lost in estimation--searching for alternatives to Markov chains to fit complex Bayesian models.

    PubMed

    Molitor, John

    2012-03-01

    Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.

  24. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model, in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighting coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.

  25. Developing a statistically powerful measure for quartet tree inference using phylogenetic identities and Markov invariants.

    PubMed

    Sumner, Jeremy G; Taylor, Amelia; Holland, Barbara R; Jarvis, Peter D

    2017-12-01

    Recently there has been renewed interest in phylogenetic inference methods based on phylogenetic invariants, alongside the related Markov invariants. Broadly speaking, both these approaches give rise to polynomial functions of sequence site patterns that, in expectation value, either vanish for particular evolutionary trees (in the case of phylogenetic invariants) or have well understood transformation properties (in the case of Markov invariants). While both approaches have been valued for their intrinsic mathematical interest, it is not clear how they relate to each other, and to what extent they can be used as practical tools for inference of phylogenetic trees. In this paper, by focusing on the special case of binary sequence data and quartets of taxa, we are able to view these two different polynomial-based approaches within a common framework. To motivate the discussion, we present three desirable statistical properties that we argue any invariant-based phylogenetic method should satisfy: (1) sensible behaviour under reordering of input sequences; (2) stability as the taxa evolve independently according to a Markov process; and (3) explicit dependence on the assumption of a continuous-time process. Motivated by these statistical properties, we develop and explore several new phylogenetic inference methods. In particular, we develop a statistically bias-corrected version of the Markov invariants approach which satisfies all three properties. We also extend previous work by showing that the phylogenetic invariants can be implemented in such a way as to satisfy property (3). A simulation study shows that, in comparison to other methods, our new proposed approach based on bias-corrected Markov invariants is extremely powerful for phylogenetic inference. The binary case is of particular theoretical interest as, in this case only, the Markov invariants can be expressed as linear combinations of the phylogenetic invariants. A wider implication of this is that, for models with more than two states (for example, DNA sequence alignments with four-state models), we find that methods which rely on phylogenetic invariants are incapable of satisfying all three of the stated statistical properties. This is because in these cases the relevant Markov invariants belong to a class of polynomials independent from the phylogenetic invariants.

  26. Development of a Flammability Test Method for Aircraft Blankets

    DOT National Transportation Integrated Search

    1996-03-01

    Flammability testing of aircraft blankets was conducted in order to develop a fire performance test method and performance criteria for blankets supplied to commercial aircraft operators. Aircraft blankets were subjected to vertical Bunsen burner tes...

  27. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
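
    A minimal sketch of the piecewise joining idea, assuming the weather series has been discretized into states so that "matching boundary states" is well defined; slice lengths are random, and each new slice starts wherever the history revisits the current boundary state (all names and lengths are illustrative):

      import random

      def piecewise_series(history, out_len, min_len=3, max_len=6):
          """Chain random-length slices of the historical series, joining
          only at positions where the boundary state matches."""
          start = random.randrange(len(history) - min_len)
          series = list(history[start:start + random.randint(min_len, max_len)])
          while len(series) < out_len:
              starts = [i for i, v in enumerate(history[:-1]) if v == series[-1]]
              i = random.choice(starts)              # Markov step: match boundary state
              series += history[i + 1:i + 1 + random.randint(min_len, max_len)]
          return series[:out_len]

      history = [0, 1, 2, 2, 1, 0, 0, 1, 2, 1, 1, 0] * 20   # toy discretized states
      print(piecewise_series(history, out_len=30))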

  28. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classifying objects, events, or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented, and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm, developed and implemented in TETRAD IV for time series, elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows - so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications of TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were used separately in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  29. Cascade heterogeneous face sketch-photo synthesis via dual-scale Markov Network

    NASA Astrophysics Data System (ADS)

    Yao, Saisai; Chen, Zhenxue; Jia, Yunyi; Liu, Chengyun

    2018-03-01

    Heterogeneous face sketch-photo synthesis is an important and challenging task in computer vision, which has been widely applied in law enforcement and digital entertainment. Motivated by the different synthesis results obtained at different scales, this paper proposes a cascade sketch-photo synthesis method via a dual-scale Markov Network. Firstly, a Markov Network at the larger scale is used to synthesise the initial sketches, and the local vertical and horizontal neighbour search (LVHNS) method is used to search the training set for neighbour patches of test patches. Then, the initial sketches and test photos are jointly entered into the smaller-scale Markov Network. Finally, fine sketches are obtained after the cascade synthesis process. Extensive experimental results on various databases demonstrate the superiority of the proposed method over several state-of-the-art methods.

  30. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  31. Detecting critical state before phase transition of complex systems by hidden Markov model

    NASA Astrophysics Data System (ADS)

    Liu, Rui; Chen, Pei; Li, Yongjun; Chen, Luonan

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during the gradual parameter variations. Such dynamics of phase transition is generally composed of three stages, i.e., before-transition state, pre-transition state, and after-transition state, which can be considered as three different Markov processes. Thus, based on this dynamical feature, we present a novel computational method, i.e., hidden Markov model (HMM), to detect the switching point of the two Markov processes from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of the imminent phase transitions of complex systems based on the simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e., the acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma.

  32. Scalable approximate policies for Markov decision process models of hospital elective admissions.

    PubMed

    Zhu, George; Lizotte, Dan; Hoey, Jesse

    2014-05-01

    To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that given an initial start state, generate an action on-demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling. Copyright © 2014 Elsevier B.V. All rights reserved.
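
    The sample-based planning idea, generating an action on demand from a simulator without enumerating the model, can be sketched generically. This is a plain Monte Carlo rollout planner under stated assumptions (a user-supplied simulate(state, action) generative model), not the paper's specific variant:

      import random

      def rollout_plan(simulate, actions, state, n_rollouts=100, depth=20, gamma=0.99):
          """Pick the action whose sampled discounted return is highest;
          only states reachable from `state` are ever touched."""
          def rollout(s, a):
              total, disc = 0.0, 1.0
              for _ in range(depth):
                  s, r = simulate(s, a)          # generative model: (next state, reward)
                  total += disc * r
                  disc *= gamma
                  a = random.choice(actions)     # random default (rollout) policy
              return total
          return max(actions,
                     key=lambda a: sum(rollout(state, a) for _ in range(n_rollouts)))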

  33. Intelligent postoperative morbidity prediction of heart disease using artificial intelligence techniques.

    PubMed

    Hsieh, Nan-Chen; Hung, Lun-Ping; Shih, Chun-Che; Keh, Huan-Chao; Chan, Chien-Hui

    2012-06-01

    Endovascular aneurysm repair (EVAR) is an advanced minimally invasive surgical technology that helps reduce patients' recovery time and postoperative morbidity and mortality. This study proposes an ensemble model to predict postoperative morbidity after EVAR. The ensemble model was developed using a training set of consecutive patients who underwent EVAR between 2000 and 2009. All data required for prediction modeling, including patient demographics, preoperative variables, co-morbidities, and complications as outcome variables, were collected prospectively and entered into a clinical database. A discretization approach was used to categorize numerical values into an informative feature space. The Bayesian network (BN), artificial neural network (ANN), and support vector machine (SVM) were then adopted as base models, with stacking used to combine the multiple models. The research outcomes consisted of an ensemble model predicting postoperative morbidity after EVAR, prospectively recorded occurrences of postoperative complications, and causal-effect knowledge derived from BNs using the Markov blanket concept.

  34. Effective Thermal Property Estimation of Unitary Pebble Beds Based on a CFD-DEM Coupled Method for a Fusion Blanket

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Chen, Youhua; Huang, Kai; Liu, Songlin

    2015-12-01

    Lithium ceramic pebble beds have been considered in the solid blanket design for fusion reactors. To characterize the fusion solid blanket thermal performance, studies of the effective thermal properties, i.e. the effective thermal conductivity and heat transfer coefficient, of the pebble beds are necessary. In this paper, a 3D computational fluid dynamics discrete element method (CFD-DEM) coupled numerical model was proposed to simulate heat transfer and thereby estimate the effective thermal properties. The DEM was applied to produce a geometric topology of a prototypical blanket pebble bed by directly simulating the contact state of each individual particle using basic interaction laws. Based on this geometric topology, a CFD model was built to analyze the temperature distribution and obtain the effective thermal properties. The current numerical model was shown to be in good agreement with the existing experimental data for effective thermal conductivity available in the literature. supported by National Special Project for Magnetic Confined Nuclear Fusion Energy of China (Nos. 2013GB108004, 2015GB108002, 2014GB122000 and 2014GB119000), and National Natural Science Foundation of China (No. 11175207)

  35. Driving style recognition method using braking characteristics based on hidden Markov model

    PubMed Central

    Wu, Chaozhong; Lyu, Nengchao; Huang, Zhen

    2017-01-01

    Given the advantages of the hidden Markov model in dealing with time series data, and for the sake of identifying driving style, three driving styles (aggressive, moderate, and mild) are modeled with hidden Markov models based on driver braking characteristics to achieve efficient driving-style identification. Firstly, the braking impulse and the maximum braking unit area of the vacuum booster within a certain time are collected from braking operations, and general braking and emergency braking characteristics are extracted to encode the braking behavior. Secondly, the braking behavior observation sequences are used to set the initial parameters of the hidden Markov models, and a hidden Markov model for each driving style is trained on the corresponding observation sequences. Thirdly, the maximum log-likelihood of an observation sequence under each model is used to judge its driving style. The recognition accuracy of the algorithm is verified through experiments and against two common pattern recognition algorithms. The results showed that driving style discrimination based on the hidden Markov model algorithm can realize effective discrimination of driving style. PMID:28837580
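
    The classify-by-likelihood scheme described above is straightforward with an HMM library; a sketch assuming the hmmlearn package and per-style lists of braking-feature sequences (each sequence an array of shape (T, n_features)):

      import numpy as np
      from hmmlearn.hmm import GaussianHMM      # assumption: hmmlearn is installed

      def train_style_models(seqs_by_style, n_states=3):
          """Fit one HMM per driving style on its braking-feature sequences."""
          models = {}
          for style, seqs in seqs_by_style.items():
              X = np.vstack(seqs)                # stack sequences row-wise
              models[style] = GaussianHMM(n_components=n_states).fit(
                  X, lengths=[len(s) for s in seqs])
          return models

      def classify(models, seq):
          """Assign the style whose model gives the highest log-likelihood."""
          return max(models, key=lambda style: models[style].score(seq))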

  36. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  37. Asteroid mass estimation with Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, L.; Granvik, M.

    2017-09-01

    We have developed a new Markov-chain Monte Carlo-based algorithm for asteroid mass estimation based on mutual encounters and tested it for several different asteroids. Our results are in line with previous literature values but suggest that uncertainties of prior estimates may be misleading as a consequence of using linearized methods.

  38. Numerical methods in Markov chain modeling

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef; Stewart, William J.

    1989-01-01

    Several methods for computing stationary probability distributions of Markov chains are described and compared. The main linear algebra problem consists of computing an eigenvector of a sparse, usually nonsymmetric, matrix associated with a known eigenvalue. It can also be cast as a problem of solving a homogeneous singular linear system. Several methods based on combinations of Krylov subspace techniques are presented. The performance of these methods on some realistic problems is compared.
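
    The "known eigenvalue" problem named here, finding π with πP = π, admits a minimal power-iteration sketch (dense toy matrix; the paper's methods target large sparse chains with Krylov-subspace techniques):

      import numpy as np

      def stationary(P, tol=1e-12, max_iter=100_000):
          """Power iteration for the stationary distribution pi = pi @ P."""
          pi = np.full(P.shape[0], 1.0 / P.shape[0])
          for _ in range(max_iter):
              nxt = pi @ P
              if np.abs(nxt - pi).sum() < tol:
                  break
              pi = nxt
          return pi / pi.sum()

      P = np.array([[0.9, 0.1],
                    [0.5, 0.5]])
      print(stationary(P))   # ~[0.8333, 0.1667]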

  19. Classification of customer lifetime value models using Markov chain

    NASA Astrophysics Data System (ADS)

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential future reward from a customer can be quantified by customer lifetime value (CLV). There are several mathematical methods to calculate it; one uses a Markov chain stochastic model. Here, a customer is assumed to pass through a set of states, with transitions between states satisfying the Markov property. Given the states of a customer and the relationships between states, Markov models can be built to describe the customer's behavior, and the CLV is defined as a vector containing the lifetime value of a customer in the first state. In this paper we classify Markov models for calculating CLV. Starting from a two-state customer model, we develop models with more states, each designed to address weaknesses of the previous one. The final models are expected to describe the real behavior of customers in a firm.
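
    In the simplest such model the CLV vector follows in closed form: with a row-stochastic transition matrix P, a per-period reward vector r and a discount factor d, the discounted sum of d^t P^t r over all t equals (I - dP)^{-1} r. A minimal sketch with illustrative numbers:

        import numpy as np

        P = np.array([[0.8, 0.2],    # customer transitions, e.g. active -> active/lapsed
                      [0.3, 0.7]])
        r = np.array([100.0, 0.0])   # expected reward per period in each state
        d = 0.9                      # discount factor

        v = np.linalg.solve(np.eye(2) - d * P, r)   # v = (I - dP)^{-1} r
        print(v[0])    # lifetime value of a customer starting in the first state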

  20. Reliability Analysis of the Electrical Control System of Subsea Blowout Preventers Using Markov Models

    PubMed Central

    Liu, Zengkai; Liu, Yonghong; Cai, Baoping

    2014-01-01

    Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on the Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of module failure rates and repair time on system reliability indices are also investigated. PMID:25409010
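
    The elementary building block of such an analysis is a continuous-time Markov chain over module states with failure and repair rates. The sketch below uses illustrative rates, not the BOP data; it propagates a two-state up/down model and compares the transient availability with the steady-state value.

        import numpy as np
        from scipy.linalg import expm

        lam, mu = 1e-4, 0.05          # failure and repair rates per hour (illustrative)
        Q = np.array([[-lam, lam],    # CTMC generator over states (up, down)
                      [  mu, -mu]])

        p0 = np.array([1.0, 0.0])            # start in the 'up' state
        pt = p0 @ expm(Q * 1000.0)           # state probabilities after 1000 h
        print(pt[0], mu / (lam + mu))        # transient vs. steady-state availability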

  1. A Markov Chain-based quantitative study of angular distribution of photons through turbid slabs via isotropic light scattering

    NASA Astrophysics Data System (ADS)

    Li, Xuesong; Northrop, William F.

    2016-04-01

    This paper describes a quantitative approach to approximating multiple scattering through an isotropic turbid slab based on Markov chain theory. There is an increasing need to utilize multiple scattering for optical diagnostic purposes; however, existing methods are either inaccurate or computationally expensive. Here, we develop a novel Markov chain approximation to the multiple-scattering angular distribution (AD) that calculates the AD accurately while significantly reducing computational cost compared to Monte Carlo simulation. We expect this work to stimulate ongoing multiple scattering research and the development of deterministic reconstruction algorithms based on AD measurements.
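
    In outline, the approach discretizes the scattering angle into states and propagates the AD by repeated application of a single-scatter transition matrix. The toy sketch below uses a crude isotropic kernel purely to show the mechanics; the paper's transition probabilities additionally encode the slab geometry and phase function.

        import numpy as np

        nbins = 180
        theta = np.linspace(0.0, np.pi, nbins)

        # Toy isotropic kernel: a scatter lands in any angle bin with
        # probability proportional to its solid angle (sin(theta) weight).
        w = np.sin(theta); w /= w.sum()
        T = np.tile(w, (nbins, 1))        # Markov transition matrix over angle states

        p = np.zeros(nbins); p[0] = 1.0   # photons enter collimated at theta = 0
        for k in range(1, 4):
            p = p @ T                     # AD after k scattering events
            print(k, np.degrees((p * theta).sum()))   # mean scattering angle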

  2. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

    A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time-averaging a continuously moving point source or by superposing finely distributed point sources. The line source is used for fusion blanket neutronics experiments in an annular geometry that simulates part of a tokamak reactor. The source neutron characteristics were measured for the two operational modes of the line source, continuous and stepwise, with activation foils and NE213 detectors, respectively. In order to provide a source condition for the subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated with a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The annular blanket system was rectangular with an inner cavity. The annular blanket consisted of a 15 mm-thick first wall (SS304) and a 406 mm-thick breeder zone with Li2O on the inside and Li2CO3 on the outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with a 25 mm-thick graphite armor, and the armor blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum and activation reaction rate were measured with specially developed techniques such as a multi-detector data acquisition system, a spectrum weighting function method and a ramp-controlled high-voltage system. The present experiment provides unique data for a more demanding benchmark to test the reliability of neutronics design calculations for a realistic tokamak reactor.

  3. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection, and it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on well-known prediction methods (such as indicator kriging and co-kriging) are implemented in the spMC package. Other, more advanced methods are also available for simulation, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, based on the example data set of the case study included in the package.

  4. A Monte-Carlo method which is not based on Markov chain algorithm, used to study electrostatic screening of ion potential

    NASA Astrophysics Data System (ADS)

    Šantić, Branko; Gracin, Davor

    2017-12-01

    A new simple Monte Carlo method is introduced for the study of electrostatic screening by surrounding ions. The proposed method is not based on the generally used Markov chain method for sample generation. Each sample is pristine, with no correlation with other samples. As the main novelty, pairs of ions are gradually added to a sample provided that the energy of each ion is within the boundaries determined by the temperature and the size of the ions. The proposed method provides reliable results, as demonstrated by the screening of an ion in plasma and in water.

  5. Document Ranking Based upon Markov Chains.

    ERIC Educational Resources Information Center

    Danilowicz, Czeslaw; Balinski, Jaroslaw

    2001-01-01

    Considers how the order of documents in information retrieval responses is determined and introduces a method that uses a probabilistic model of a document set where documents are regarded as states of a Markov chain and where transition probabilities are directly proportional to similarities between documents. (Author/LRW)
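
    A minimal sketch of this construction, with an illustrative similarity matrix: normalize pairwise similarities into a row-stochastic transition matrix and rank documents by the chain's stationary distribution.

        import numpy as np

        S = np.array([[0.0, 0.8, 0.1],    # pairwise document similarities
                      [0.8, 0.0, 0.4],
                      [0.1, 0.4, 0.0]])

        P = S / S.sum(axis=1, keepdims=True)        # transitions ~ similarity

        evals, evecs = np.linalg.eig(P.T)           # stationary distribution of P
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi /= pi.sum()
        print(np.argsort(-pi))                      # documents in ranked order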

  6. Variable context Markov chains for HIV protease cleavage site prediction.

    PubMed

    Oğul, Hasan

    2009-06-01

    Deciphering the specificity of HIV protease and developing computational tools for detecting its cleavage sites in protein polypeptide chains are highly desirable for designing efficient and specific chemical inhibitors to prevent acquired immunodeficiency syndrome. In this study, we developed a generative model based on a generalization of variable order Markov chains (VOMC) for peptide sequences and adapted the model for predicting their cleavability by certain proteases. The new method, called variable context Markov chains (VCMC), attempts to identify context equivalence based on the evolutionary similarities between individual amino acids. It was applied to the HIV-1 protease cleavage site prediction problem and shown to outperform existing methods in terms of prediction accuracy on a common dataset. In general, the method is a promising tool for predicting the cleavage sites of all proteases and can be recommended for any kind of peptide classification problem as well.

  7. Sentiment classification technology based on Markov logic networks

    NASA Astrophysics Data System (ADS)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, sentiment classification has become a growing concern. At present, text sentiment classification mainly relies on supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposes a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred to other domains, and the precision of sentiment classification analysis in the target domain was improved. The experimental results revealed the following: (1) the MLN-based model demonstrated higher precision than the individual learning model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than within-domain learning. The cross-domain text sentiment classification model significantly improved the precision and efficiency of text sentiment classification.

  8. Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach

    NASA Astrophysics Data System (ADS)

    Demirer, Nazli

    The control of systems with autonomous mobile agents has been a point of interest recently, with many applications such as surveillance, coverage, searching over an area with probabilistic target locations, or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the desired density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems with a hierarchical control structure, in which coordination of the whole swarm is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms to achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain based method to control the density distribution of the whole system; the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined by the mission requirements; for example, in area search applications, the desired distribution should match closely the probabilistic target locations. The proposed method is applicable both to systems with a single agent and to systems with a large number of agents due to its probabilistic nature, in which the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem takes the form of a Linear Matrix Inequality (LMI) problem, with an LMI formulation of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way using existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions; that is, there exists one set of actions for the ON mode and another for the OFF mode. We formulate a new Markov chain synthesis problem, with additional measurements of the state transitions, in which a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
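
    The LMI and QP syntheses are beyond a short snippet, but the underlying idea of a decentralized transition rule with a prescribed steady-state density can be sketched with a Metropolis-Hastings construction over the motion graph. This is an assumption-laden stand-in for the dissertation's constrained synthesis, not the proposed algorithm itself.

        import numpy as np

        def metropolis_chain(pi, Adj):
            # Markov matrix with stationary distribution pi whose transitions
            # move only along edges of the motion graph Adj.
            n, deg = len(pi), Adj.sum(axis=1)
            P = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    if Adj[i, j]:
                        acc = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))
                        P[i, j] = acc / deg[i]      # propose a neighbor, then accept
                P[i, i] = 1.0 - P[i].sum()          # otherwise stay in place
            return P

        pi = np.array([0.1, 0.2, 0.3, 0.4])   # desired swarm density over 4 bins
        Adj = np.array([[0, 1, 0, 0],         # allowed agent transitions
                        [1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [0, 0, 1, 0]])
        P = metropolis_chain(pi, Adj)
        print(pi @ P)                         # pi is invariant under P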

  9. Comparison of measured and calculated composition of irradiated EBR-II blanket assemblies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimm, K. N.

    1998-07-13

    In anticipation of processing irradiated EBR-II depleted uranium blanket subassemblies in the Fuel Conditioning Facility (FCF) at ANL-West, it has been possible to obtain a limited set of destructive chemical analyses of samples from a single EBR-II blanket subassembly. Comparison of calculated values with these measurements is being used to validate a depletion methodology based on a limited number of generic models of EBR-II to simulate the irradiation history of these subassemblies. Initial comparisons indicate these methods are adequate to meet the operations and material control and accountancy (MC and A) requirements for the FCF, but also indicate several shortcomings which may be corrected or improved.

  10. Source-to-incident-flux relation in a Tokamak blanket module

    NASA Astrophysics Data System (ADS)

    Imel, G. R.

    The next-generation Tokamak experiments, including the Tokamak Fusion Test Reactor (TFTR), will utilize small blanket modules to measure performance parameters such as tritium breeding profiles, power deposition profiles, and neutron flux profiles. Specifically, a neutron calorimeter (simply a neutron-moderating blanket module) which permits inferring the incident 14 MeV flux from measured temperature profiles was proposed for TFTR. The problem of how to relate this total scalar flux to the fusion neutron source is addressed. This relation is necessary since the calorimeter is proposed as a total fusion energy monitor. The methods and assumptions presented were valid for the TFTR Lithium Breeding Module (LBM), as well as other modules on larger Tokamak reactors.

  11. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, existing techniques and methodologies for degradation measurement still face challenges, such as the lack of an appropriate degradation indicator, insufficient accuracy, and poor capability to track data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and the Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for dynamic model updating. Two key parameters, namely the step size and the number of states, contribute to adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method performs better, in terms of both measurement accuracy and tracking of data fluctuations, than other conventional methods.
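
    The grey-model half of such a hybrid can be sketched compactly: a GM(1,1) model is refit on the most recent window of the degradation index and extrapolated one step ahead. The sketch below shows that step only; the Markov correction of the residuals and the multi-feature index construction are omitted, and the data are illustrative.

        import numpy as np

        def gm11_forecast(x0, steps=1):
            # GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0).
            x1 = np.cumsum(x0)
            z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat, prepend=0.0)[-steps:]   # back to original scale

        series = np.array([2.8, 3.0, 3.3, 3.9, 4.4, 5.1])  # degradation index
        window = 4                                         # moving-window refit
        print(gm11_forecast(series[-window:], steps=1))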

  12. Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information.

    PubMed

    Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde

    2017-01-01

    In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to the controller precisely. The controller then selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of the received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using the linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies are asymptotically synchronized. Compared with existing results on Markov systems obtained by separately using mode-dependent and mode-independent methods, our result has great flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results.

  13. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    PubMed Central

    Dai, Yonghui; Han, Dongmei; Dai, Weihui

    2014-01-01

    The stock index reflects the fluctuation of the stock market. For a long time, there has been much research on stock index forecasting. However, traditional methods are limited in achieving ideal precision in a dynamic market due to the influence of many factors such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted renewed attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, together with its modeling and computing technology. The method includes initial forecasting by the improved BP neural network, division of the Markov state region, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market. PMID:24782659
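
    The Markov-chain adjustment step can be illustrated on its own: classify the base forecaster's relative residuals into states, estimate the state transition matrix, and shift the next point forecast by the expected residual of the predicted state. Everything below is illustrative, including the residuals and the base forecast.

        import numpy as np

        ratio = np.array([-0.04, 0.01, 0.03, -0.02, 0.02, -0.01, 0.04, -0.03])

        states = (ratio > 0).astype(int)      # 0 = under-, 1 = over-prediction
        T = np.zeros((2, 2))
        for s, t in zip(states[:-1], states[1:]):
            T[s, t] += 1
        T /= T.sum(axis=1, keepdims=True)     # residual-state transition matrix

        centers = np.array([ratio[states == 0].mean(), ratio[states == 1].mean()])
        expected = T[states[-1]] @ centers    # expected relative residual next step
        base_forecast = 3250.0                # next-step neural-network output
        print(base_forecast * (1 + expected)) # Markov-adjusted forecast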

  14. Surgical motion characterization in simulated needle insertion procedures

    NASA Astrophysics Data System (ADS)

    Holden, Matthew S.; Ungi, Tamas; Sargent, Derek; McGraw, Robert C.; Fichtinger, Gabor

    2012-02-01

    PURPOSE: Evaluation of surgical performance in image-guided needle insertions is of emerging interest, to both promote patient safety and improve the efficiency and effectiveness of training. The purpose of this study was to determine if a Markov model-based algorithm can more accurately segment a needle-based surgical procedure into its five constituent tasks than a simple threshold-based algorithm. METHODS: Simulated needle trajectories were generated with known ground truth segmentation by a synthetic procedural data generator, with random noise added to each degree of freedom of motion. The respective learning algorithms were trained, and then tested on different procedures to determine task segmentation accuracy. In the threshold-based algorithm, a change in tasks was detected when the needle crossed a position/velocity threshold. In the Markov model-based algorithm, task segmentation was performed by identifying the sequence of Markov models most likely to have produced the series of observations. RESULTS: For amplitudes of translational noise greater than 0.01mm, the Markov model-based algorithm was significantly more accurate in task segmentation than the threshold-based algorithm (82.3% vs. 49.9%, p<0.001 for amplitude 10.0mm). For amplitudes less than 0.01mm, the two algorithms produced results that did not differ significantly. CONCLUSION: Task segmentation of simulated needle insertion procedures was improved by using a Markov model-based algorithm as opposed to a threshold-based algorithm for procedures involving translational noise.
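
    Segmentation by "the sequence of Markov models most likely to have produced the observations" is, in a single-model simplification, the classic Viterbi decoding problem. A minimal sketch with hypothetical two-task parameters over coarse motion codes:

        import numpy as np

        def viterbi(obs, pi, A, B):
            # Most likely hidden state path for a discrete-observation HMM.
            T = len(obs)
            delta = np.log(pi) + np.log(B[:, obs[0]])
            back = np.zeros((T, len(pi)), dtype=int)
            for t in range(1, T):
                scores = delta[:, None] + np.log(A)    # scores[i, j]: i -> j
                back[t] = scores.argmax(axis=0)
                delta = scores.max(axis=0) + np.log(B[:, obs[t]])
            path = [int(delta.argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t, path[-1]]))
            return path[::-1]

        pi = np.array([0.9, 0.1])                         # task priors
        A = np.array([[0.95, 0.05], [0.05, 0.95]])        # task persistence
        B = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])  # motion-code emissions
        print(viterbi([0, 0, 1, 2, 2, 2], pi, A, B))      # inferred task labels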

  15. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.

  16. Probabilistic inference using linear Gaussian importance sampling for hybrid Bayesian networks

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Chang, K. C.

    2005-05-01

    Probabilistic inference for Bayesian networks is in general NP-hard using either exact algorithms or approximate methods. However, for very complex networks, only approximate methods such as stochastic sampling can provide a solution within a given time constraint. Several simulation methods are currently available. They include logic sampling (the first proposed stochastic method for Bayesian networks), the likelihood weighting algorithm (the most commonly used simulation method because of its simplicity and efficiency), the Markov blanket scoring method, and the importance sampling algorithm. In this paper, we first briefly review and compare these available simulation methods, then we propose an improved importance sampling algorithm called linear Gaussian importance sampling (LGIS) for general hybrid models. LGIS is aimed at hybrid Bayesian networks consisting of both discrete and continuous random variables with arbitrary distributions. It uses a linear function and Gaussian additive noise to approximate the true conditional probability distribution of a continuous variable given both its parents and evidence in a Bayesian network. One of the most important features of the newly developed method is that it can adaptively learn the optimal importance function from previous samples. We test the inference performance of LGIS using a 16-node linear Gaussian model and a 6-node general hybrid model. The performance comparison with other well-known methods such as junction tree (JT) and likelihood weighting (LW) shows that LGIS is very promising.
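
    For reference, likelihood weighting itself fits in a few lines: non-evidence variables are sampled forward and each sample is weighted by the likelihood of the evidence. A two-node toy network, with all numbers illustrative:

        import numpy as np

        P_A = np.array([0.3, 0.7])            # P(A=0), P(A=1)
        P_B_given_A = np.array([[0.9, 0.1],   # row a: P(B=0|a), P(B=1|a)
                                [0.4, 0.6]])

        rng = np.random.default_rng(2)
        num = den = 0.0
        for _ in range(100_000):              # estimate P(A=1 | B=1)
            a = rng.choice(2, p=P_A)          # sample the non-evidence variable
            w = P_B_given_A[a, 1]             # weight by evidence likelihood
            num += w * (a == 1)
            den += w
        print(num / den)                      # ~0.933 analytically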

  17. Improved multilayer insulation applications. [spacecraft thermal control

    NASA Technical Reports Server (NTRS)

    Mikk, G.

    1982-01-01

    Multilayer insulation blankets used for the attenuation of radiant heat transfer in spacecraft are addressed. Typically, blanket effectiveness is degraded by heat leaks in the joints between adjacent blankets and by heat leaks caused by the blanket fastener system. An approach to blanket design based upon modular sub-blankets with distributed seams and upon an associated fastener system that practically eliminates the through-the-blanket conductive path is described. Test results are discussed providing confirmation of the approach. The specific case of the thermal control system for the optical assembly of the Space Telescope is examined.

  18. Parallel algorithms for simulating continuous time Markov chains

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Heidelberger, Philip

    1992-01-01

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
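
    The uniformization identity these methods rest on is p(t) = sum over k of Poisson(k; Lambda*t) * p0 * P^k, where P = I + Q/Lambda is a discrete-time chain subordinated to a Poisson process of rate Lambda. A serial (single-processor) sketch:

        import numpy as np
        from scipy.stats import poisson

        def transient(Q, p0, t, tol=1e-12):
            # Transient CTMC distribution p(t) via uniformization.
            Lam = -Q.diagonal().min() * 1.05        # rate >= max exit rate
            P = np.eye(len(p0)) + Q / Lam           # subordinated DTMC
            p, term, k = np.zeros_like(p0), p0.copy(), 0
            while True:
                w = poisson.pmf(k, Lam * t)
                p += w * term
                if k > Lam * t and w < tol:
                    break
                term, k = term @ P, k + 1
            return p

        Q = np.array([[-2.0, 2.0],
                      [ 1.0, -1.0]])
        print(transient(Q, np.array([1.0, 0.0]), t=0.5))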

  19. 48 CFR 13.303 - Blanket purchase agreements (BPAs).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Blanket purchase agreements (BPAs). 13.303 Section 13.303 Federal Acquisition Regulations System FEDERAL ACQUISITION... Methods 13.303 Blanket purchase agreements (BPAs). ...

  20. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound obtained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
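
    The total mixing time that marathon computes can be stated, and brute-forced for small chains, directly: the first step t at which the worst-case total variation distance to stationarity drops below a threshold. An illustrative dense-matrix sketch, unrelated to marathon's own implementation:

        import numpy as np

        def total_mixing_time(P, eps=0.25):
            # Smallest t with max-over-start-states TV distance <= eps.
            evals, evecs = np.linalg.eig(P.T)
            pi = np.real(evecs[:, np.argmax(np.real(evals))])
            pi /= pi.sum()
            Pt, t = np.eye(P.shape[0]), 0
            while 0.5 * np.abs(Pt - pi).sum(axis=1).max() > eps:
                Pt, t = Pt @ P, t + 1
            return t

        P = np.array([[0.50, 0.50, 0.00],    # lazy walk on a 3-vertex path
                      [0.25, 0.50, 0.25],
                      [0.00, 0.50, 0.50]])
        print(total_mixing_time(P))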

  1. A Langevin equation for the rates of currency exchange based on the Markov analysis

    NASA Astrophysics Data System (ADS)

    Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2007-11-01

    We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D^(1) and D^(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic time series that represent the daily rates of exchange of various currencies versus the U.S. dollar.
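
    The estimation step is direct: the drift and diffusion coefficients are the first and second conditional moments of the increments. A self-contained sketch on a synthetic Ornstein-Uhlenbeck series standing in for real exchange-rate returns:

        import numpy as np

        rng = np.random.default_rng(3)
        dt, n = 1.0, 200_000
        x = np.zeros(n)
        for i in range(n - 1):   # synthetic process with D^(1) = -0.5x, D^(2) = 0.04
            x[i + 1] = x[i] - 0.5 * x[i] * dt + np.sqrt(2 * 0.04 * dt) * rng.normal()

        bins = np.linspace(-1.0, 1.0, 21)
        idx = np.digitize(x[:-1], bins)
        dxs = np.diff(x)
        for b in (5, 10, 15):                        # a few interior bins
            sel = idx == b
            d1 = dxs[sel].mean() / dt                # drift estimate
            d2 = (dxs[sel] ** 2).mean() / (2 * dt)   # diffusion estimate
            print(b, round(d1, 3), round(d2, 3))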

  2. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    PubMed

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper applies three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The weighted Markov chain (WMC) method, based on Markov chain theory, and two time series models, Holt exponential smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared by the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly predicted percentages for the first and second clusters were (100, 0) for WMC, (84, 67) for HES and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the existing seasonality and non-stationarity, HES provided the most accurate prediction of the incidence rates.

  3. Multiensemble Markov models of molecular thermodynamics and kinetics.

    PubMed

    Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank

    2016-06-07

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.

  4. Multiensemble Markov models of molecular thermodynamics and kinetics

    PubMed Central

    Wu, Hao; Paul, Fabian; Noé, Frank

    2016-01-01

    We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302

  5. Multilayer insulation blanket, fabricating apparatus and method

    DOEpatents

    Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.

    1992-01-01

    An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.

  6. Method of fabricating a multilayer insulation blanket

    DOEpatents

    Gonczy, John D.; Niemann, Ralph C.; Boroski, William N.

    1993-01-01

    An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.

  7. Method of fabricating a multilayer insulation blanket

    DOEpatents

    Gonczy, J.D.; Niemann, R.C.; Boroski, W.N.

    1993-07-06

    An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel.

  8. Multilayer insulation blanket, fabricating apparatus and method

    DOEpatents

    Gonczy, J.D.; Niemann, R.C.; Boroski, W.N.

    1992-09-01

    An improved multilayer insulation blanket for insulating cryogenic structures operating at very low temperatures is disclosed. An apparatus and method for fabricating the improved blanket are also disclosed. In the improved blanket, each successive layer of insulating material is greater in length and width than the preceding layer so as to accommodate thermal contraction of the layers closest to the cryogenic structure. The fabricating apparatus has a rotatable cylindrical mandrel having an outer surface of fixed radius that is substantially arcuate, preferably convex, in cross-section. The method of fabricating the improved blanket comprises (a) winding a continuous sheet of thermally reflective material around the circumference of the mandrel to form multiple layers, (b) binding the layers along two lines substantially parallel to the edges of the circumference of the mandrel, (c) cutting the layers along a line parallel to the axle of the mandrel, and (d) removing the bound layers from the mandrel. 7 figs.

  9. 48 CFR 213.303 - Blanket purchase agreements (BPAs).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Blanket purchase agreements (BPAs). 213.303 Section 213.303 Federal Acquisition Regulations System DEFENSE ACQUISITION... PROCEDURES Simplified Acquisition Methods 213.303 Blanket purchase agreements (BPAs). ...

  10. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase in resolution, remote sensing images carry a greater information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level information extraction, it performs better in building extraction precision, accuracy and completeness.

  11. A response to Yu et al. "A forward-backward fragment assembling algorithm for the identification of genomic amplification and deletion breakpoints using high-density single nucleotide polymorphism (SNP) array", BMC Bioinformatics 2007, 8: 145.

    PubMed

    Rueda, Oscar M; Diaz-Uriarte, Ramon

    2007-10-16

    Yu et al. (BMC Bioinformatics 2007,8: 145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogenous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We rerun the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require using a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).

  12. Damage evaluation by a guided wave-hidden Markov model based method

    NASA Astrophysics Data System (ADS)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under fatigue loading condition and a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.

  13. Markov stochasticity coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  14. A Proposed Methodology to Control Body Temperature in Patients at Risk of Hypothermia by means of Active Rewarming Systems

    PubMed Central

    Costanzo, Silvia; Cusumano, Alessia; Giaconia, Carlo; Mazzacane, Sante

    2014-01-01

    Hypothermia is a common complication in patients undergoing surgery under general anesthesia. It has been noted that, during the first hour of surgery, the patient's internal temperature (T core) decreases by 0.5–1.5°C due to the vasodilatory effect of anesthetic gases, which affect the body's thermoregulatory system by inhibiting vasoconstriction. Thus a continuous check on patient temperature must be carried out. The currently most used methods to avoid hypothermia are based on passive systems (such as blankets reducing body heat loss) and on active ones (thermal blankets, electric or hot-water mattresses, forced hot air, warming lamps, etc.). Within a broader research upon the environmental conditions, pollution, heat stress, and hypothermia risk in operating theatres, the authors set up an experimental investigation by using a warming blanket chosen from several types on sale. Their aim was to identify times and ways the human body reacts to the heat flowing from the blanket and the blanket's effect on the average temperature T skin and, as a consequence, on T core temperature of the patient. The here proposed methodology could allow surgeons to fix in advance the thermal power to supply through a warming blanket for reaching, in a prescribed time, the desired body temperature starting from a given state of hypothermia. PMID:25485278

  15. Spacecraft thermal blanket cleaning: Vacuum bake of gaseous flow purging

    NASA Technical Reports Server (NTRS)

    Scialdone, John J.

    1990-01-01

    The mass losses and the outgassing rates per unit area of three thermal blankets consisting of various combinations of Mylar and Kapton, with interposed Dacron nets, were measured with a microbalance using two methods. The blankets at 25 deg C were either outgassed in vacuum for 20 hours, or were purged with a dry nitrogen flow of 3 cu. ft. per hour at 25 deg C for 20 hours. The two methods were compared for their effectiveness in cleaning the blankets for their use in space applications. The measurements were carried out using blanket strips and rolled-up blanket samples fitting the microbalance cylindrical plenum. Also, temperature scanning tests were carried out to indicate the optimum temperature for purging and vacuum cleaning. The data indicate that the purging for 20 hours with the above N2 flow can accomplish the same level of cleaning provided by the vacuum with the blankets at 25 deg C for 20 hours, In both cases, the rate of outgassing after 20 hours is reduced by 3 orders of magnitude, and the weight losses are in the range of 10E-4 gr/sq cm. Equivalent mass loss time constants, regained mass in air as a function of time, and other parameters were obtained for those blankets.

  16. Spacecraft thermal blanket cleaning - Vacuum baking or gaseous flow purging

    NASA Technical Reports Server (NTRS)

    Scialdone, John J.

    1992-01-01

    The mass losses and the outgassing rates per unit area of three thermal blankets consisting of various combinations of Mylar and Kapton, with interposed Dacron nets, were measured with a microbalance using two methods. The blankets at 25 deg C were either outgassed in vacuum for 20 hours, or were purged with a dry nitrogen flow of 3 cu. ft. per hour at 25 deg C for 20 hours. The two methods were compared for their effectiveness in cleaning the blankets for their use in space applications. The measurements were carried out using blanket strips and rolled-up blanket samples fitting the microbalance cylindrical plenum. Also, temperature scanning tests were carried out to indicate the optimum temperature for purging and vacuum cleaning. The data indicate that the purging for 20 hours with the above N2 flow can accomplish the same level of cleaning provided by the vacuum with the blankets at 25 deg C for 20 hours. In both cases, the rate of outgassing after 20 hours is reduced by 3 orders of magnitude, and the weight losses are in the range of 10E-4 gr/sq cm. Equivalent mass loss time constants, regained mass in air as a function of time, and other parameters were obtained for those blankets.

  17. MC3: Multi-core Markov-chain Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.

  18. Bayesian network ensemble as a multivariate strategy to predict radiation pneumonitis risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sangkyu, E-mail: sangkyu.lee@mail.mcgill.ca; Ybarra, Norma; Jeyaseelan, Krishinima

    2015-05-15

    Purpose: Prediction of radiation pneumonitis (RP) has been shown to be challenging due to the involvement of a variety of factors including dose–volume metrics and radiosensitivity biomarkers. Some of these factors are highly correlated and might affect prediction results when combined. Bayesian network (BN) provides a probabilistic framework to represent variable dependencies in a directed acyclic graph. The aim of this study is to integrate the BN framework and a systems’ biology approach to detect possible interactions among RP risk factors and exploit these relationships to enhance both the understanding and prediction of RP. Methods: The authors studied 54 non-small-cell lung cancer patients who received curative 3D-conformal radiotherapy. Nineteen RP events were observed (common toxicity criteria for adverse events grade 2 or higher). Serum concentrations of the following four candidate biomarkers were measured at baseline and midtreatment: alpha-2-macroglobulin, angiotensin converting enzyme (ACE), transforming growth factor, interleukin-6. Dose-volumetric and clinical parameters were also included as covariates. Feature selection was performed using a Markov blanket approach based on the Koller–Sahami filter. The Markov chain Monte Carlo technique estimated the posterior distribution of BN graphs built from the observed data of the selected variables and causality constraints. RP probability was estimated using a limited number of high posterior graphs (ensemble) and was averaged for the final RP estimate using Bayes’ rule. A resampling method based on bootstrapping was applied to model training and validation in order to control under- and overfit pitfalls. Results: RP prediction power of the BN ensemble approach reached its optimum at a size of 200. The optimized performance of the BN model recorded an area under the receiver operating characteristic curve (AUC) of 0.83, which was significantly higher than multivariate logistic regression (0.77), mean heart dose (0.69), and a pre-to-midtreatment change in ACE (0.66). When RP prediction was made only with pretreatment information, the AUC ranged from 0.76 to 0.81 depending on the ensemble size. Bootstrap validation of graph features in the ensemble quantified confidence of association between variables in the graphs where ten interactions were statistically significant. Conclusions: The presented BN methodology provides the flexibility to model hierarchical interactions between RP covariates, which is applied to probabilistic inference on RP. The authors’ preliminary results demonstrate that such framework combined with an ensemble method can possibly improve prediction of RP under real-life clinical circumstances such as missing data or treatment plan adaptation.

  19. Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib

    2017-08-01

    Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.

  20. 48 CFR 313.303-5 - Purchases under blanket purchase agreements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Purchases under blanket purchase agreements. 313.303-5 Section 313.303-5 Federal Acquisition Regulations System HEALTH AND HUMAN... Methods 313.303-5 Purchases under blanket purchase agreements. (e)(5) HHS personnel that sign delivery...

  1. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZpM methods.

  2. Three-dimensional neutronics optimization of helium-cooled blanket for multi-functional experimental fusion-fission hybrid reactor (FDS-MFX)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, J.; Yuan, B.; Jin, M.

    2012-07-01

    Three-dimensional neutronics optimization calculations were performed to analyse the parameters of Tritium Breeding Ratio (TBR) and maximum average Power Density (PDmax) in the blanket of a helium-cooled multi-functional experimental fusion-fission hybrid reactor named FDS (Fusion-Driven hybrid System)-MFX (Multi-Functional experimental). Three-stage tests will be carried out successively, in which a tritium breeding blanket, a uranium-fueled blanket and a spent-fuel-fueled blanket will be utilized respectively. In this contribution, the main goal of the FDS-MFX blanket is to achieve a PDmax of about 100 MW/m3 with self-sustaining tritium (TBR ≥ 1.05), based on the second-stage test with the uranium-fueled blanket, to check and validate the demonstrator reactor blanket relevant technologies based on viable fusion and fission technologies. Four different enriched uranium materials were taken into account to evaluate PDmax in the subcritical blanket: (i) natural uranium, (ii) 3.2% enriched uranium, (iii) 19.75% enriched uranium, and (iv) 64.4% enriched uranium carbide. These calculations and analyses were performed using the home-developed code VisualBUS and the Hybrid Evaluated Nuclear Data Library (HENDL). The results showed that the performance of the blanket loaded with 64.4% enriched uranium was the most attractive; it could effectively achieve tritium self-sufficiency (TBR ≥ 1.05) and a high maximum average power density (~100 MW/m3) when the blanket was loaded with about 1 ton of 235U. (authors)

  3. A Hybrid Generalized Hidden Markov Model-Based Condition Monitoring Approach for Rolling Bearings

    PubMed Central

    Liu, Jie; Hu, Youmin; Wu, Bo; Wang, Yan; Xie, Fengyun

    2017-01-01

    The operating condition of rolling bearings affects productivity and quality in the rotating machine process. Developing an effective rolling bearing condition monitoring approach is critical to accurately identify the operating condition. In this paper, a hybrid generalized hidden Markov model-based condition monitoring approach for rolling bearings is proposed, where interval valued features are used to efficiently recognize and classify machine states in the machine process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition (VMD). Parameters of the VMD, in the form of generalized intervals, provide a concise representation for aleatory and epistemic uncertainty and improve the robustness of identification. The multi-scale permutation entropy method is applied to extract state features from the decomposed signals in different operating conditions. Traditional principal component analysis is adopted to reduce feature size and computational cost. With the extracted features’ information, the generalized hidden Markov model, based on generalized interval probability, is used to recognize and classify the fault types and fault severity levels. Finally, the experiment results show that the proposed method is effective at recognizing and classifying the fault types and fault severity levels of rolling bearings. This monitoring method is also efficient enough to quantify the two uncertainty components. PMID:28524088

  4. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real-scene images. Better quality is achieved in terms of both peak signal-to-noise ratio and the structural similarity measure.

  5. Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)

    NASA Astrophysics Data System (ADS)

    Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.

    2018-05-01

    A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, i.e., the law of the lumped Markov chain, depends linearly on the minimal degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.
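
    For orientation, the lumping (aggregation) step itself can be sketched as follows; this shows only stationary-weighted lumping of a stochastic matrix, not the GF(q) polynomial construction of the paper:

```python
import numpy as np

def stationary(P, iters=500):
    # Power iteration for the stationary row vector pi with pi P = pi.
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def lump(P, partition):
    # Aggregate states into blocks, weighting each member state by its
    # stationary mass: Phat[A,B] = sum_{i in A} (pi_i / pi_A) * sum_{j in B} P[i,j].
    pi = stationary(P)
    k = len(partition)
    Phat = np.zeros((k, k))
    for a, A in enumerate(partition):
        wa = pi[A] / pi[A].sum()
        for b, B in enumerate(partition):
            Phat[a, b] = wa @ P[np.ix_(A, B)].sum(axis=1)
    return Phat

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
print(lump(P, partition=[[0], [1, 2]]))  # 2x2 lumped stochastic matrix
```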

  6. Key achievements in elementary R&D on water-cooled solid breeder blanket for ITER test blanket module in JAERI

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.

    2006-02-01

    This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.

  7. Markov Chain Ontology Analysis (MCOA)

    PubMed Central

    2012-01-01

    Background: Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. Results: In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through an MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance among comparable state-of-the-art methods. Conclusion: A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches. PMID:22300537

  8. Markov Chain Ontology Analysis (MCOA).

    PubMed

    Frost, H Robert; McCray, Alexa T

    2012-02-03

    Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through an MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance among comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
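
    The core computation MCOA relies on, scoring nodes by the stationary eigenvector of an adjusted ergodic transition matrix, can be sketched generically; the toy graph and damping value below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def stationary_scores(T, damping=0.85, tol=1e-12, max_iter=1000):
    """Stationary distribution of an ergodic chain built from transition
    matrix T (rows sum to 1), with uniform teleportation added to
    guarantee ergodicity, the same device PageRank uses."""
    n = T.shape[0]
    G = damping * T + (1 - damping) / n  # adjusted transition matrix
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ G
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi

# Toy "ontology" of 4 nodes: node 2 receives most of the probability mass.
T = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.3, 0.3, 0.0, 0.4],
              [0.0, 0.0, 1.0, 0.0]])
print(stationary_scores(T))
```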

  9. Modelling Risk to US Military Populations from Stopping Blanket Mandatory Polio Vaccination

    PubMed Central

    Burgess, Andrew

    2017-01-01

    Objectives: Transmission of polio poses a threat to military forces when deploying to regions where such viruses are endemic. US-born soldiers generally enter service with immunity resulting from childhood immunization against polio; moreover, new recruits are routinely vaccinated with inactivated poliovirus vaccine (IPV), supplemented based upon deployment circumstances. Given residual protection from childhood vaccination, risk-based vaccination may sufficiently protect troops from polio transmission. Methods: This analysis employed a mathematical system for polio transmission within military populations interacting with locals in a polio-endemic region to evaluate changes in vaccination policy. Results: Removal of blanket immunization had no effect on simulated polio incidence among deployed military populations when risk-based immunization was employed; however, when these individuals reintegrated with their base populations, risk of transmission to nondeployed personnel increased by 19%. In the absence of both blanket- and risk-based immunization, transmission to nondeployed populations increased by 25%. The overall number of new infections among nondeployed populations was negligible for both scenarios due to high childhood immunization rates, partial protection against transmission conferred by IPV, and low global disease incidence levels. Conclusion: Risk-based immunization driven by deployment to polio-endemic regions is sufficient to prevent transmission among both deployed and nondeployed US military populations. PMID:29104608

  10. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    NASA Astrophysics Data System (ADS)

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule, using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.
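
    The building block of such Markov chain clustering, the per-series transition-matrix likelihood, can be sketched as follows (a generic illustration with hypothetical data, not the authors' Bayesian sampler):

```python
import numpy as np

def transition_mle(seq, n_states, alpha=1.0):
    # Count one-step transitions and normalize rows (alpha adds Dirichlet
    # smoothing so unseen transitions keep nonzero probability).
    C = np.full((n_states, n_states), alpha)
    for s, t in zip(seq[:-1], seq[1:]):
        C[s, t] += 1
    return C / C.sum(axis=1, keepdims=True)

def markov_loglik(seq, P):
    # Log-likelihood of a categorical series under transition matrix P;
    # comparing this across cluster-specific P's drives cluster assignment.
    return sum(np.log(P[s, t]) for s, t in zip(seq[:-1], seq[1:]))

seq = [0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]   # hypothetical wage categories
P = transition_mle(seq, n_states=3)
print(P)
print(markov_loglik(seq, P))
```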

  11. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
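
    As a generic illustration of the MCMC machinery involved (not the genealogy sampler itself), here is a minimal random-walk Metropolis sampler for a one-dimensional posterior; the toy target is an assumption for demonstration:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=10000, step=0.5, seed=0):
    # Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    # probability min(1, post(x') / post(x)).
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy posterior: theta ~ N(1, 0.5^2).
draws = metropolis(lambda t: -0.5 * ((t - 1.0) / 0.5) ** 2, x0=0.0)
print(draws[2000:].mean(), draws[2000:].std())  # ~1.0 and ~0.5
```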

  12. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied for determining the change category of each object and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
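
    The G-statistic distance between histograms used in the second step can be sketched as follows (a generic illustration; the histograms below are hypothetical):

```python
import numpy as np

def g_statistic(obs, exp):
    """G = 2 * sum obs * ln(obs / exp), computed over bins where obs > 0.
    The reference histogram is scaled to the same total so that only
    differences in shape contribute."""
    obs = np.asarray(obs, dtype=float)
    exp = np.asarray(exp, dtype=float)
    exp = exp * obs.sum() / exp.sum()
    m = obs > 0
    return 2.0 * np.sum(obs[m] * np.log(obs[m] / exp[m]))

h1 = [12, 30, 45, 13]   # e.g. spectral histogram of an object at time 1
h2 = [10, 28, 48, 14]   # the same object at time 2
print(g_statistic(h1, h2))  # small value: the histograms are similar
```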

  13. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, their predictions often do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
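
    The path-space relative entropy at the heart of this framework can be illustrated with the per-step Kullback-Leibler rate between two Markov chains (a minimal sketch with toy matrices, not the paper's constrained update):

```python
import numpy as np

def relative_entropy_rate(P, Q, pi):
    """Per-step KL divergence between the path ensembles of chains P and Q,
    weighted by P's stationary distribution pi (assumes Q > 0 wherever P > 0)."""
    rate = 0.0
    for i in range(P.shape[0]):
        for j in range(P.shape[1]):
            if P[i, j] > 0:
                rate += pi[i] * P[i, j] * np.log(P[i, j] / Q[i, j])
    return rate

P = np.array([[0.7, 0.3], [0.4, 0.6]])   # updated model
Q = np.array([[0.5, 0.5], [0.5, 0.5]])   # prior model
pi = np.array([4 / 7, 3 / 7])            # stationary distribution of P
print(relative_entropy_rate(P, Q, pi))
```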

  14. Geodesic Monte Carlo on Embedded Manifolds

    PubMed Central

    Byrne, Simon; Girolami, Mark

    2013-01-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton–Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
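
    The geodesic flow that replaces the Euclidean position update is simple in the hypersphere case; a minimal sketch of the great-circle update (illustrative only, not the paper's full sampler):

```python
import numpy as np

def sphere_geodesic(x, v, t):
    """Flow along the great circle from x with initial velocity v for time t.
    Assumes |x| = 1 and v tangent (x . v = 0); both properties are preserved."""
    a = np.linalg.norm(v)
    if a == 0:
        return x, v
    x_t = x * np.cos(a * t) + (v / a) * np.sin(a * t)
    v_t = -x * a * np.sin(a * t) + v * np.cos(a * t)
    return x_t, v_t

rng = np.random.default_rng(1)
x = np.array([1.0, 0.0, 0.0])
v = rng.standard_normal(3)
v -= (v @ x) * x                       # project velocity onto tangent space
x_t, v_t = sphere_geodesic(x, v, 0.1)
print(np.linalg.norm(x_t), x_t @ v_t)  # ~1.0 and ~0.0: stays on the manifold
```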

  15. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi,...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
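
    In practice, Krylov projection for this problem is available off the shelf; the sketch below uses ARPACK's implicitly restarted Arnoldi method via SciPy to find the stationary vector of a toy chain (an illustration, not the paper's implementation):

```python
import numpy as np
from scipy.sparse.linalg import eigs

# The stationary distribution pi satisfies pi P = pi, i.e. it is the dominant
# eigenvector of P^T; ARPACK finds it from a Krylov subspace without any
# dense factorization of the N x N matrix.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.2, 0.7, 0.1, 0.0],
              [0.0, 0.3, 0.6, 0.1],
              [0.1, 0.0, 0.2, 0.7]])
vals, vecs = eigs(P.T, k=1, which='LM')
pi = np.real(vecs[:, 0])
pi /= pi.sum()
print(pi)
print(pi @ P)  # equals pi at the stationary distribution
```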

  16. On the Multilevel Solution Algorithm for Markov Chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham

    1997-01-01

    We discuss the recently introduced multilevel algorithm for the steady-state solution of Markov chains. The method is based on an aggregation principle which is well established in the literature and features a multiplicative coarse-level correction. Recursive application of the aggregation principle, which uses an operator-dependent coarsening, yields a multi-level method which has been shown experimentally to give results significantly faster than the typical methods currently in use. When cast as a multigrid-like method, the algorithm is seen to be a Galerkin-Full Approximation Scheme with a solution-dependent prolongation operator. Special properties of this prolongation lead to the cancellation of the computationally intensive terms of the coarse-level equations.

  17. Comparison of two passive warming devices for prevention of perioperative hypothermia in dogs.

    PubMed

    Potter, J; Murrell, J; MacFarlane, P

    2015-09-01

    To compare effects of two passive warming methods combined with a resistive heating mat on perioperative hypothermia in dogs. Fifty-two dogs were enrolled and randomly allocated to receive a reflective blanket (Blizzard Blanket) or a fabric blanket (VetBed). In addition, in the operating room all dogs were placed onto a table with a resistive heating mat covered with a fabric blanket. Rectal temperature measurements were taken at defined points. Statistical analysis was performed comparing all Blizzard Blanket-treated to all VetBed-treated dogs, and VetBed versus Blizzard Blanket dogs within spay and castrate groups, spay versus castrate groups and within groups less than 10 kg or more than 10 kg bodyweight. Data from 39 dogs were used for analysis. All dogs showed a reduction in perioperative rectal temperature. There were no detected statistical differences between treatments or between the different groups. This study supports previous data on prevalence of hypothermia during surgery. The combination of active and passive warming methods used in this study prevented the development of severe hypothermia, but there were no differences between treatment groups. © 2015 British Small Animal Veterinary Association.

  18. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
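
    The decoding step, inferring the most likely hidden state sequence, is the classical Viterbi recursion; here is a minimal log-space sketch with hypothetical two-state parameters (not the authors' three-level HHMM):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely hidden state path. log_pi: (S,) initial log-probs,
    log_A: (S, S) transition log-probs, log_B: (S, O) emission log-probs,
    obs: observed symbol indices."""
    S, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two states (0 = farmland, 1 = built-up) with a sticky transition prior.
log_pi = np.log([0.9, 0.1])
log_A = np.log([[0.95, 0.05], [0.01, 0.99]])   # change is near-irreversible
log_B = np.log([[0.8, 0.2], [0.3, 0.7]])       # noisy "vegetation" signal
print(viterbi(log_pi, log_A, log_B, obs=[0, 0, 1, 0, 1, 1, 1]))
```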

  19. The generalization ability of SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang

    2015-06-01

    The previously known works studying the generalization ability of support vector machine classification (SVMC) algorithm are usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also yields sparser classifiers when the dataset is large relative to the input dimension.

  20. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    Traffic video is a kind of dynamic image sequence whose background and foreground change over time, which results in occlusion. In this case, general methods have difficulty obtaining an accurate image segmentation. A segmentation algorithm based on Bayesian inference and a Spatio-Temporal Markov Random Field (ST-MRF) is put forward. It builds energy function models of the observation field and the label field for motion sequence images with the Markov property; then, according to Bayes' rule, it exploits the interaction of the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, to obtain the maximum a posteriori estimate of the label field, and applies the ICM algorithm to extract the moving objects, which completes the segmentation. Finally, the ST-MRF method and the Bayesian method combined with ST-MRF were compared. Experimental results show that the segmentation time of the Bayesian method combined with ST-MRF is shorter than that of ST-MRF alone, the computational workload is small, and in heavy-traffic dynamic scenes the method also achieves a better segmentation effect.
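
    The ICM labeling step can be sketched for a simple binary MRF with Gaussian likelihoods (an illustrative toy, not the spatio-temporal model of the paper; all parameters are hypothetical):

```python
import numpy as np

def icm_segment(img, mu, sigma, beta=1.5, sweeps=5):
    """Iterated Conditional Modes for a 2-label MRF. Unary term: Gaussian
    negative log-likelihood; pairwise term: beta per disagreeing 4-neighbor."""
    labels = np.argmin([(img - m) ** 2 for m in mu], axis=0)  # ML initialization
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                costs = []
                for k in range(len(mu)):
                    unary = 0.5 * ((img[i, j] - mu[k]) / sigma[k]) ** 2
                    nb = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                    pair = sum(beta for (a, b) in nb
                               if 0 <= a < H and 0 <= b < W and labels[a, b] != k)
                    costs.append(unary + pair)
                labels[i, j] = int(np.argmin(costs))
    return labels

rng = np.random.default_rng(0)
truth = np.zeros((32, 32), dtype=int); truth[8:24, 8:24] = 1   # moving object
img = np.where(truth == 1, 0.8, 0.2) + 0.15 * rng.standard_normal((32, 32))
seg = icm_segment(img, mu=[0.2, 0.8], sigma=[0.15, 0.15])
print((seg == truth).mean())  # fraction of correctly labeled pixels
```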

  1. Zero velocity interval detection based on a continuous hidden Markov model in micro inertial pedestrian navigation

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Ding, Wei; Yan, Huifang; Duan, Shunli

    2018-06-01

    Shoe-mounted pedestrian navigation systems based on micro inertial sensors rely on zero velocity updates to correct their positioning errors in time, so determining the zero velocity interval plays a key role during normal walking. However, as walking gaits are complicated and vary from person to person, it is difficult to detect them with a fixed-threshold method. This paper proposes a pedestrian gait classification method based on a hidden Markov model. Pedestrian gait data are collected with a micro inertial measurement unit installed at the instep. On the basis of analyzing the characteristics of the pedestrian walk, a single-axis angular rate gyro output is used to classify gait features. The angular rate data are modeled as a univariate Gaussian mixture model with three components, and a four-state left-right continuous hidden Markov model (CHMM) is designed to classify the normal walking gait. The model parameters are trained and optimized using the Baum-Welch algorithm, and the sliding window Viterbi algorithm is then used to decode the gait. Walking data were collected from eight subjects walking along the same route at three different speeds, and leave-one-subject-out cross validation was conducted to test the model. Experimental results show that the proposed algorithm can accurately detect the zero velocity intervals of different walking gaits. The location experiment shows that the precision of CHMM-based pedestrian navigation improved by 40% compared with the angular rate threshold method.

  2. Clinical outcome comparison of immediate blanket treatment versus a delayed pathogen-based treatment protocol for clinical mastitis in a New York dairy herd.

    PubMed

    Vasquez, A K; Nydam, D V; Capel, M B; Eicker, S; Virkler, P D

    2017-04-01

    The purpose was to compare immediate intramammary antimicrobial treatment of all cases of clinical mastitis with a selective treatment protocol based on 24-h culture results. The study was conducted at a 3,500-cow commercial farm in New York. Using a randomized design, mild to moderate clinical mastitis cases were assigned to either the blanket therapy or pathogen-based therapy group. Cows in the blanket therapy group received immediate on-label intramammary treatment with ceftiofur hydrochloride for 5 d. Upon receipt of 24 h culture results, cows in the pathogen-based group followed a protocol automatically assigned via Dairy Comp 305 (Valley Agricultural Software, Tulare, CA): Staphylococcus spp., Streptococcus spp., or Enterococcus spp. were administered on-label intramammary treatment with cephapirin sodium for 1 d. Others, including cows with no-growth or gram-negative results, received no treatment. A total of 725 cases of clinical mastitis were observed; 114 cows were not enrolled due to severity. An additional 122 cases did not meet inclusion criteria. Distribution of treatments for the 489 qualifying events was equal between groups (pathogen-based, n = 246; blanket, n = 243). The proportions of cases assigned to the blanket and pathogen-based groups that received intramammary therapy were 100 and 32%, respectively. No significant differences existed between blanket therapy and pathogen-based therapy in days to clinical cure; means were 4.8 and 4.5 d, respectively. The difference in post-event milk production between groups was not statistically significant (blanket therapy = 34.7 kg; pathogen-based = 35.4 kg). No differences were observed in test-day linear scores between groups; least squares means of linear scores were 4.3 for pathogen-based cows and 4.2 for blanket therapy cows. Odds of survival to 30 d post-enrollment were similar between groups (odds ratio for pathogen-based = 1.6; 95% confidence interval: 0.7-3.7), as were odds of survival to 60 d (odds ratio = 1.4; 95% confidence interval: 0.7-2.6). The one significant difference found for the effect of treatment was in hospital days; pathogen-based cows experienced, on average, 3 fewer days than blanket therapy cows. A majority (68.5%) of moderate and mild clinical cases would not have been treated if all cows on this trial were enrolled in a pathogen-based protocol. The use of a strategic treatment protocol based on 24-h post-mastitis pathogen results has potential to efficiently reduce antimicrobial use. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  3. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
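
    A minimal sketch of one Hybrid/Hamiltonian Monte Carlo transition on a toy Gaussian target, showing the leapfrog integration and accept/reject step (an illustration only, not the PDF-fitting code; step size and trajectory length are arbitrary):

```python
import numpy as np

def hmc_step(q, log_p, grad_log_p, eps=0.1, n_leapfrog=20, rng=None):
    """One HMC transition with leapfrog integration of H = -log_p(q) + |p|^2/2."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)
    H0 = -log_p(q) + 0.5 * p @ p
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(q_new)        # half step in momentum
    for i in range(n_leapfrog):
        q_new += eps * p_new                      # full step in position
        if i < n_leapfrog - 1:
            p_new += eps * grad_log_p(q_new)
    p_new += 0.5 * eps * grad_log_p(q_new)        # final half step
    H1 = -log_p(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.random()) < H0 - H1 else q

# Toy target: standard bivariate Gaussian.
log_p = lambda q: -0.5 * q @ q
grad_log_p = lambda q: -q
rng = np.random.default_rng(0)
q, chain = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q, log_p, grad_log_p, rng=rng)
    chain.append(q)
print(np.mean(chain, axis=0), np.std(chain, axis=0))  # ~[0, 0] and ~[1, 1]
```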

  4. Background Adjusted Alignment-Free Dissimilarity Measures Improve the Detection of Horizontal Gene Transfer.

    PubMed

    Tang, Kujin; Lu, Yang Young; Sun, Fengzhu

    2018-01-01

    Horizontal gene transfer (HGT) plays an important role in the evolution of microbial organisms including bacteria. Alignment-free methods based on single-genome compositional information have been used to detect HGT. Currently, Manhattan and Euclidean distances based on tetranucleotide frequencies are the most commonly used alignment-free dissimilarity measures to detect HGT. By testing on simulated bacterial sequences and real data sets with known horizontally transferred genomic regions, we found that more advanced alignment-free dissimilarity measures such as CVTree and d2*, which take into account the background Markov sequences, can solve HGT detection problems with significantly improved performance. We also studied the influence of different factors such as evolutionary distance between host and donor sequences, size of sliding window, and host genome composition on the performance of alignment-free methods to detect HGT. Our study showed that alignment-free methods can predict HGT accurately when host and donor genomes are in different order-level taxa. Among all methods, CVTree with word length 3, d2* with word length 3 and Markov order 1, and d2S with word length 4 and Markov order 1 outperform the others in terms of their highest F1-score and their robustness under the influence of different factors.
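
    The baseline measure mentioned above is easy to make concrete; a sketch of tetranucleotide frequencies compared by Manhattan distance (illustrative sequences; the background-adjusted statistics themselves are more involved):

```python
from itertools import product

def tetranuc_freqs(seq):
    # Frequency vector over all 4^4 = 256 tetranucleotides.
    kmers = [''.join(p) for p in product('ACGT', repeat=4)]
    counts = {k: 0 for k in kmers}
    for i in range(len(seq) - 3):
        w = seq[i:i + 4]
        if w in counts:
            counts[w] += 1
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in kmers]

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

host = 'ACGTACGTGGCCAATT' * 50
window = 'TTTTAAAACCCCGGGG' * 50   # compositionally distinct candidate region
print(manhattan(tetranuc_freqs(host), tetranuc_freqs(window)))
```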

  5. [Development of Markov models for economics evaluation of strategies on hepatitis B vaccination and population-based antiviral treatment in China].

    PubMed

    Yang, P C; Zhang, S X; Sun, P P; Cai, Y L; Lin, Y; Zou, Y H

    2017-07-10

    Objective: To construct Markov models that reflect the reality of prevention and treatment interventions against hepatitis B virus (HBV) infection, simulate the natural history of HBV infection in different age groups and provide evidence for the economics evaluations of hepatitis B vaccination and population-based antiviral treatment in China. Methods: According to the theory and techniques of Markov chains, Markov models of the Chinese HBV epidemic were developed based on the national data and related literature both at home and abroad, including the settings of Markov model states, allowable transitions and initial and transition probabilities. The model construction, operation and verification were conducted by using the software TreeAge Pro 2015. Results: Several types of Markov models were constructed to describe the disease progression of HBV infection in the neonatal period, perinatal period or adulthood, the progression of chronic hepatitis B after antiviral therapy, hepatitis B prevention and control in adults, chronic hepatitis B antiviral treatment and the natural progression of chronic hepatitis B in the general population. The model for the newborn was fundamental and included ten states, i.e., susceptibility to HBV, HBsAg clearance, immune tolerance, immune clearance, low replication, HBeAg-negative CHB, compensated cirrhosis, decompensated cirrhosis, hepatocellular carcinoma (HCC) and death. The HBV-susceptible state was excluded from the perinatal period model, and the immune tolerance state was excluded from the adulthood model. The model for the general population only included two states, survival and death. Among the 5 types of models, there were 9 initial states assigned initial probabilities, and 27 states assigned transition probabilities. The results of model verification showed that the probability curves were basically consistent with the situation of the HBV epidemic in China. Conclusion: The Markov models developed can be used in economics evaluation of hepatitis B vaccination and treatment for the elimination of HBV infection in China, though the structures and parameters of the models are subject to uncertainty and change dynamically.
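
    The mechanics of such Markov cohort models reduce to propagating a state-occupancy vector through a transition matrix; a minimal sketch with purely hypothetical states and probabilities (not the paper's calibrated Chinese HBV parameters):

```python
import numpy as np

# Hypothetical 4-state annual-cycle cohort model: chronic hepatitis B (CHB),
# compensated cirrhosis (CC), hepatocellular carcinoma (HCC), death.
# All transition probabilities below are illustrative, not estimates.
states = ['CHB', 'CC', 'HCC', 'Death']
P = np.array([[0.94, 0.04, 0.01, 0.01],
              [0.00, 0.90, 0.05, 0.05],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])   # death is absorbing

cohort = np.array([1.0, 0.0, 0.0, 0.0])    # everyone starts in CHB
for year in range(30):
    cohort = cohort @ P                    # one annual Markov cycle
print(dict(zip(states, cohort.round(3))))  # state occupancy after 30 cycles
```

    Cost-effectiveness analyses then attach per-cycle costs and utilities to each state and sum them along this occupancy trajectory.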

  6. The generalization ability of online SVM classification based on Markov sampling.

    PubMed

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the number of training samples is larger.

  7. Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.

    PubMed

    Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu

    2017-10-03

    Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r1 and r2, respectively. We show through both simulations and theoretical studies that the optimal k = max(r1, r2) + 1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
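
    A generic two-sample χ²-statistic on word counts can be sketched as follows (an illustration of the statistic's shape using pooled expected counts and simulated sequences; not the authors' power analysis):

```python
from itertools import product
import numpy as np

def kmer_counts(seq, k):
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    idx = {w: i for i, w in enumerate(kmers)}
    c = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in idx:
            c[idx[w]] += 1
    return c

def chi2_word_stat(seq1, seq2, k):
    # Expected counts come from the pooled word frequencies, scaled to each
    # sequence's own total word count.
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    pooled = c1 + c2
    keep = pooled > 0
    e1 = pooled[keep] * c1.sum() / pooled.sum()
    e2 = pooled[keep] * c2.sum() / pooled.sum()
    return np.sum((c1[keep] - e1) ** 2 / e1) + np.sum((c2[keep] - e2) ** 2 / e2)

rng = np.random.default_rng(0)
s1 = ''.join(rng.choice(list('ACGT'), 5000))
s2 = ''.join(rng.choice(list('ACGT'), 5000, p=[0.4, 0.1, 0.1, 0.4]))
print(chi2_word_stat(s1, s2, k=2))  # large value: compositions differ
```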

  8. Self-cooled liquid-metal blanket concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malang, S.; Arheidt, K.; Barleon, L.

    1988-11-01

    A blanket concept for the Next European Torus (NET) where 83Pb-17Li serves both as breeder material and as coolant is described. The concept is based on the use of novel flow channel inserts for a decisive reduction of the magnetohydrodynamic (MHD) pressure drop and employs beryllium as neutron multiplier in order to avoid the need for breeding blankets at the inboard side of the torus. This study includes the design, neutronics, thermal hydraulics, stresses, MHD, corrosion, tritium recovery, and safety of a self-cooled liquid-metal blanket. The results of the investigations indicate that the self-cooled blanket is an attractive alternative to other driver blanket concepts for NET and that it can be extrapolated to the conditions of a DEMO reactor.

  9. A computational investigation of the interstitial flow induced by a variably thick blanket of very fine sand covering a coarse sand bed

    NASA Astrophysics Data System (ADS)

    Bartzke, Gerhard; Huhn, Katrin; Bryan, Karin R.

    2017-10-01

    Blanketed sediment beds can have different bed mobility characteristics relative to those of beds composed of a uniform grain-size distribution. Most of the processes that affect bed mobility act in the direct vicinity of the bed or even within the bed itself. To simulate the general conditions of analogue experiments, a high-resolution three-dimensional numerical 'flume tank' model was developed using a coupled finite difference method flow model and a discrete element method particle model. The method was applied to investigate the physical processes within blanketed sediment beds under the influence of varying flow velocities. Four suites of simulations, in which a matrix of uniform large grains (600 μm) was blanketed by variably thick layers of small particles (80 μm; blanket layer thickness approx. 80, 350, 500 and 700 μm), were carried out. All beds were subjected to five predefined flow velocities (U1-5 = 10-30 cm/s). The fluid profiles, relative particle distances and porosity changes within the bed were determined for each configuration. The data show that, as the thickness of the blanket layer increases, increasingly more small particles accumulate in the indentations between the larger particles closest to the surface. This results in decreased porosity and reduced flow into the bed. In addition, with increasing blanket layer thickness, an increasingly larger number of smaller particles are forced into the pore spaces between the larger particles, causing further reduction in porosity. This ultimately causes the interstitial flow, which would normally allow entrainment of particles in the deeper parts of the bed, to decrease to such an extent that the bed is stabilized.

  10. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
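
    The codevector-histogram idea is compact enough to sketch: assign each patch to its nearest codevector and histogram the indices (the codebook and patches below are random stand-ins for illustration):

```python
import numpy as np

def vq_histogram(patches, codebook):
    # Assign each patch to its nearest codevector, then histogram the
    # codevector indices to obtain the image-level feature vector.
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d.argmin(axis=1)
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 9))        # 16 codevectors for 3x3 patches
face_patches = rng.standard_normal((400, 9))   # stand-in for image patches
print(vq_histogram(face_patches, codebook))    # 16-bin feature vector
```

    The MSF extension then replaces this plain histogram with statistics of the Markov chain formed by spatially adjacent codevector indices, which restores some of the spatial structure the histogram discards.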

  11. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    PubMed

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  12. Disinfection of woollen blankets in steam at subatmospheric pressure

    PubMed Central

    Alder, V. G.; Gillespie, W. A.

    1961-01-01

    Blankets may be disinfected in steam at subatmospheric pressures by temperatures below boiling point inside a suitably adapted autoclave chamber. The chamber and its contents are thoroughly evacuated of air so as to allow rapid heat penetration, and steam is admitted to a pressure of 10 in. Hg below atmospheric pressure, which corresponds to a temperature of 89°C. Woollen blankets treated 50 times by this process were undamaged. Vegetative organisms were destroyed but not spores. The method is suitable for large-scale disinfection of blankets and for disinfecting various other articles which would be damaged at higher temperatures. PMID:13860203

  13. Beyond blanket terms: Challenges for the explanatory value of variational (neuro-)ethology. Comment on "Answering Schrödinger's question: A free-energy formulation" by Maxwell James Désormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Bruineberg, Jelle; Hesp, Casper

    2018-03-01

    Ramstead et al. [9] integrate the free-energy principle (FEP) [5] and evolutionary systems theory (EST) [1] in order to develop a "meta-theoretical ontology of life", called 'variational neuro-ethology' (VNE). In drawing upon such abstract notions and integrating them even further, they prove themselves to be the ultimate "hedgehogs" [2]: aiming for the ultimate integration of the life sciences and social sciences under one unifying principle. We endorse this pursuit of theoretical integration, especially when derived from first principles. The fundamental nature of their work is exemplified by the book the authors take as their starting point: Schrödinger's What is Life? [10]. Given the variety and levels of complexity involved in defining "life", providing an answer to this question is challenging. We first briefly comment on VNE as a label and then highlight some possible problems for the kinds of explanations that would follow from VNE. We address the interrelated charges of (1) merely providing Bayesian and evolutionary "just-so" stories [4,6], and (2) limited interpretative clarity when casting "life" as a series of nested Markov blankets. As a pre-emptive response to these critical remarks, we sketch a few ways forward that we find promising.

  14. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.

  15. Estimation of sojourn time in chronic disease screening without data on interval cases.

    PubMed

    Chen, T H; Kuo, H S; Yen, M F; Lai, M S; Tabar, L; Duffy, S W

    2000-03-01

    Estimation of the sojourn time in the preclinical detectable period in disease screening, or of the transition rates for the natural history of chronic disease, usually relies on interval cases (diagnosed between screens). However, ascertaining such cases might be difficult in developing countries due to incomplete registration systems and difficulties in follow-up. To overcome this problem, we propose three Markov models to estimate parameters without using interval cases. A three-state Markov model, a five-state Markov model related to regional lymph node spread, and a five-state Markov model pertaining to tumor size are applied to data on breast cancer screening in female relatives of breast cancer cases in Taiwan. Results based on the three-state Markov model give a mean sojourn time (MST) of 1.90 (95% CI: 1.18-4.86) years for this high-risk group. Validation of these models on the basis of data on breast cancer screening in the age groups 50-59 and 60-69 years from the Swedish Two-County Trial shows that the estimates from a three-state Markov model that does not use interval cases are very close to those from previous Markov models taking interval cancers into account. For the five-state Markov model, a reparameterized procedure using auxiliary information on clinically detected cancers is performed to estimate relevant parameters. A good fit in internal and external validation demonstrates the feasibility of using these models to estimate parameters that have previously required interval cancers. This method can be applied to other screening data in which there are no data on interval cases.

  16. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    PubMed

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, ξ) on the properties of observable events, with emphasis on the observed open-time (ξ-open-time). The cumulants and Laplace transform for a ξ-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the ξ-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.

  17. The spectral method and the central limit theorem for general Markov chains

    NASA Astrophysics Data System (ADS)

    Nagaev, S. V.

    2017-12-01

    We consider Markov chains with an arbitrary phase space and develop a modification of the spectral method that enables us to prove the central limit theorem (CLT) for non-uniformly ergodic Markov chains. The conditions imposed on the transition function are more general than those by Athreya-Ney and Nummelin. Our proof of the CLT is purely analytical.

  18. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
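
    The underlying computation in such architecture-based reliability models is absorption analysis of a discrete-time Markov chain; a minimal sketch with hypothetical component reliabilities and a simple pipeline topology (not the COSMIC-FFP-specific model):

```python
import numpy as np

# Components are transient states of a DTMC, plus absorbing states "success"
# and "failure". Entry (i, j) of Q is P(control moves i -> j) * R_i, where R_i
# is component i's reliability; the leak 1 - R_i goes to failure implicitly.
# All numbers below are illustrative only.
Q = np.array([[0.0, 0.98, 0.00],     # comp0 -> comp1 (R0 = 0.98)
              [0.0, 0.00, 0.95],     # comp1 -> comp2 (R1 = 0.95)
              [0.0, 0.00, 0.00]])    # comp2 -> absorbing states only
R = np.array([[0.00],                # transient -> success
              [0.00],
              [0.99]])               # comp2 terminates successfully (R2 = 0.99)

N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits
B = N @ R                            # absorption probabilities into success
print(B[0, 0])                       # system reliability from the start state
# equals 0.98 * 0.95 * 0.99 for this linear pipeline topology
```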

  19. Assembly, Integration, and Test Methods for Operationally Responsive Space Satellites

    DTIC Science & Technology

    2010-03-01

    like assembly and vibration tests, to ensure there have been no failures induced by the activities. External thermal control blankets and radiator...configuration of the satellite post-vibration test and adds time to the process. • Thermal blanketing is not realistic with current technology or...patterns for thermal blankets and radiator tape. The computer aided drawing (CAD) solid model was used to generate patterns that were cut and applied real

  20. Under-reported data analysis with INAR-hidden Markov chains.

    PubMed

    Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David

    2016-11-20

    In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most-probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
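
    A simulation sketch helps make the model concrete: an INAR(1) latent count series observed through a hidden two-state chain that switches under-reporting on and off (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, lam = 300, 0.5, 2.0   # INAR(1): X_t = alpha o X_{t-1} + Poisson(lam)
q = 0.4                         # thinning probability when under-reporting

# Latent counts via binomial thinning (the "o" operator).
X = np.zeros(T, dtype=int)
for t in range(1, T):
    X[t] = rng.binomial(X[t - 1], alpha) + rng.poisson(lam)

# Hidden two-state chain: state 1 means the count is only partially observed.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # sticky transition matrix
Z = np.zeros(T, dtype=int)
for t in range(1, T):
    Z[t] = rng.random() < A[Z[t - 1], 1]

Y = np.where(Z == 1, rng.binomial(X, q), X)  # observed, possibly thinned series
print(X.mean(), Y.mean())  # observed mean is biased downward by under-reporting
```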

  1. Evaluations of Silica Aerogel-Based Flexible Blanket as Passive Thermal Control Element for Spacecraft Applications

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammed Adnan; Rashmi, S.; Esther, A. Carmel Mary; Bhavanisankar, Prudhivi Yashwantkumar; Sherikar, Baburao N.; Sridhara, N.; Dey, Arjun

    2018-03-01

    The feasibility of utilizing commercially available silica aerogel-based flexible composite blankets as passive thermal control element in applications such as extraterrestrial environments is investigated. Differential scanning calorimetry showed that the aerogel blanket was thermally stable over −150 to 126 °C. The outgassing behavior, e.g., total mass loss, collected volatile condensable materials, water vapor regained and recovered mass loss, was within the acceptable range recommended for space applications. ASTM tension and tear tests confirmed the material's mechanical integrity. The thermo-optical properties remained nearly unaltered in simulated space environmental tests such as relative humidity, thermal cycling and thermo-vacuum tests and confirmed the space worthiness of the aerogel. Aluminized Kapton stitched or anchored to the blanket could be used to control the optical transparency of the aerogel. These outcomes highlight the potential of commercial aerogel composite blankets as passive thermal control elements in spacecraft. Structural and chemical characterization of the material was also done using scanning electron microscopy, Fourier transform infrared spectroscopy and x-ray photoelectron spectroscopy.

  2. Treatment System for Removing Halogenated Compounds from Contaminated Sources

    NASA Technical Reports Server (NTRS)

    Clausen, Christian A. (Inventor); Yestrebsky, Cherie L. (Inventor); Quinn, Jacqueline W. (Inventor)

    2015-01-01

    A treatment system and a method for removal of at least one halogenated compound, such as PCBs, found in contaminated systems are provided. The treatment system includes a polymer blanket for receiving at least one non-polar solvent. The halogenated compound permeates into or through a wall of the polymer blanket where it is solubilized with at least one non-polar solvent received by said polymer blanket forming a halogenated solvent mixture. This treatment system and method provides for the in situ removal of halogenated compounds from the contaminated system. In one embodiment, the halogenated solvent mixture is subjected to subsequent processes which destroy and/or degrade the halogenated compound.

  3. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges) and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  4. Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression

    PubMed Central

    Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.

    2016-01-01

    The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
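
    The ingredient that lets a CT-HMM handle irregularly timed observations is the matrix exponential of the rate matrix, which yields state-transition probabilities over arbitrary gaps; a minimal sketch with an illustrative three-state progression model (the rates are hypothetical):

```python
import numpy as np
from scipy.linalg import expm

# Rate matrix Q of a 3-state progression model (rows sum to 0); entries are
# illustrative. Transition probabilities over an arbitrary time gap t are
# P(t) = expm(Q * t).
Q = np.array([[-0.10, 0.08, 0.02],
              [0.00, -0.05, 0.05],
              [0.00, 0.00, 0.00]])   # state 2 is absorbing

for t in (0.5, 2.0, 10.0):           # e.g. years between clinic visits
    P = expm(Q * t)
    print(f"t={t}:", P[0].round(3))  # row 0: where a state-0 patient ends up
```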

  5. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    NASA Astrophysics Data System (ADS)

    Durner, Maximilian; Márton, Zoltán.; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  6. Weighted blankets and sleep in autistic children--a randomized controlled trial.

    PubMed

    Gringras, Paul; Green, Dido; Wright, Barry; Rush, Carla; Sparrowhawk, Masako; Pratt, Karen; Allgar, Victoria; Hooke, Naomi; Moore, Danielle; Zaiwalla, Zenobia; Wiggs, Luci

    2014-08-01

    To assess the effectiveness of a weighted-blanket intervention in treating severe sleep problems in children with autism spectrum disorder (ASD). This phase III trial was a randomized, placebo-controlled crossover design. Participants were aged between 5 years and 16 years 10 months, with a confirmed ASD diagnosis and severe sleep problems, refractory to community-based interventions. The interventions were either a commercially available weighted blanket or otherwise identical usual weight blanket (control), introduced at bedtime; each was used for a 2-week period before crossover to the other blanket. Primary outcome was total sleep time (TST) recorded by actigraphy over each 2-week period. Secondary outcomes included actigraphically recorded sleep-onset latency, sleep efficiency, assessments of child behavior, family functioning, and adverse events. Sleep was also measured by using parent-report diaries. Seventy-three children were randomized and analysis conducted on 67 children who completed the study. Using objective measures, the weighted blanket, compared with the control blanket, did not increase TST as measured by actigraphy and adjusted for baseline TST. There were no group differences in any other objective or subjective measure of sleep, including behavioral outcomes. On subjective preference measures, parents and children favored the weighted blanket. The use of a weighted blanket did not help children with ASD sleep for a longer period of time, fall asleep significantly faster, or wake less often. However, the weighted blanket was favored by children and parents, and blankets were well tolerated over this period. Copyright © 2014 by the American Academy of Pediatrics.

  7. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    The chi-squared transform (CST), as a statistical method, can quantify the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to considerable noise in the change detection result. An improved unsupervised change detection method is proposed based on a spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the confidence level, the key parameter of the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
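
    The CST step itself is compact: form the band-wise difference image, estimate its mean and covariance, and flag pixels whose squared Mahalanobis distance exceeds a chi-squared quantile at the chosen confidence level (the key parameter mentioned above). This minimal sketch omits the paper's iterative trimming, spatial constraint and MRF refinement.

      import numpy as np
      from scipy.stats import chi2

      def cst_change_mask(img1, img2, confidence=0.99):
          """Chi-squared transform change detection on two registered
          (rows, cols, bands) images; returns a boolean change mask."""
          bands = img1.shape[-1]
          D = (img1.astype(float) - img2.astype(float)).reshape(-1, bands)
          mu = D.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(D, rowvar=False))
          d2 = np.einsum('ij,jk,ik->i', D - mu, cov_inv, D - mu)  # Mahalanobis^2
          return (d2 > chi2.ppf(confidence, df=bands)).reshape(img1.shape[:2])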

  8. Beryllium R&D for blanket application

    NASA Astrophysics Data System (ADS)

    Donne, M. Dalle; Longhurst, G. R.; Kawamura, H.; Scaffidi-Argentina, F.

    1998-10-01

    The paper describes the main problems and the R&D for beryllium to be used as a neutron multiplier in blankets. As the four ITER partners propose to use beryllium in the form of pebbles for their DEMO-relevant blankets (only the Russians consider the porous beryllium option as an alternative) and the ITER breeding blanket will use beryllium pebbles as well, the paper focuses mainly on beryllium pebbles. Work on the chemical reactivity of fully dense and porous beryllium in contact with water steam is also described, owing to the safety importance of this point.

  9. Saccade selection when reward probability is dynamically manipulated using Markov chains

    PubMed Central

    Lovejoy, Lee P.; Krauzlis, Richard J.

    2012-01-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200–600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection. PMID:18330552

  10. Saccade selection when reward probability is dynamically manipulated using Markov chains.

    PubMed

    Nummela, Samuel U; Lovejoy, Lee P; Krauzlis, Richard J

    2008-05-01

    Markov chains (stochastic processes where probabilities are assigned based on the previous outcome) are commonly used to examine the transitions between behavioral states, such as those that occur during foraging or social interactions. However, relatively little is known about how well primates can incorporate knowledge about Markov chains into their behavior. Saccadic eye movements are an example of a simple behavior influenced by information about probability, and thus are good candidates for testing whether subjects can learn Markov chains. In addition, when investigating the influence of probability on saccade target selection, the use of Markov chains could provide an alternative method that avoids confounds present in other task designs. To investigate these possibilities, we evaluated human behavior on a task in which stimulus reward probabilities were assigned using a Markov chain. On each trial, the subject selected one of four identical stimuli by saccade; after selection, feedback indicated the rewarded stimulus. Each session consisted of 200-600 trials, and on some sessions, the reward magnitude varied. On sessions with a uniform reward, subjects (n = 6) learned to select stimuli at a frequency close to reward probability, which is similar to human behavior on matching or probability classification tasks. When informed that a Markov chain assigned reward probabilities, subjects (n = 3) learned to select the greatest reward probability more often, bringing them close to behavior that maximizes reward. On sessions where reward magnitude varied across stimuli, subjects (n = 6) demonstrated preferences for both greater reward probability and greater reward magnitude, resulting in a preference for greater expected value (the product of reward probability and magnitude). These results demonstrate that Markov chains can be used to dynamically assign probabilities that are rapidly exploited by human subjects during saccade target selection.
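
    The task described in these two records is straightforward to simulate. In the toy sketch below, the rewarded stimulus follows a first-order Markov chain over four alternatives, and a simulated subject either "matches" (samples its choice from an empirical estimate of the next-reward distribution) or maximizes (always picks the most likely next reward); the learning rule and session length are illustrative assumptions, not the authors' model.

      import numpy as np

      def run_session(P, n_trials=400, maximize=False, seed=0):
          """Fraction of rewarded choices when the reward location follows the
          Markov transition matrix P over four stimuli."""
          rng = np.random.default_rng(seed)
          counts = np.ones((4, 4))                 # Laplace-smoothed transition counts
          rewarded, hits = rng.integers(4), 0
          for _ in range(n_trials):
              pred = counts[rewarded] / counts[rewarded].sum()
              choice = pred.argmax() if maximize else rng.choice(4, p=pred)
              nxt = rng.choice(4, p=P[rewarded])   # chain assigns the next reward
              hits += choice == nxt
              counts[rewarded, nxt] += 1           # learn from feedback
              rewarded = nxt
          return hits / n_trials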

  11. Design optimization of first wall and breeder unit module size for the Indian HCCB blanket module

    NASA Astrophysics Data System (ADS)

    Deepak, SHARMA; Paritosh, CHAUDHURI

    2018-04-01

    The Indian test blanket module (TBM) program in ITER is one of the major steps in the Indian fusion reactor program for carrying out R&D activities in critical areas such as the design of tritium breeding blankets relevant to future Indian fusion devices (ITER-relevant and DEMO). The Indian Lead-Lithium Cooled Ceramic Breeder (LLCB) blanket concept is one of the Indian DEMO-relevant TBMs, to be tested in ITER as a part of the TBM program. The Helium-Cooled Ceramic Breeder (HCCB) is an alternative blanket concept that consists of lithium titanate (Li2TiO3) as the ceramic breeder (CB) material in the form of packed pebble beds and beryllium as the neutron multiplier. Specifically, attention is given to optimizing the first wall coolant channel design and the breeder unit module size, considering coolant pressure and thermal loads, for the proposed Indian HCCB blanket based on the ITER-relevant TBM and loading conditions. These analyses will help in proceeding further with the design of blankets for loads relevant to future fusion devices.

  12. Atomic oxygen undercutting of defects on SiO2 protected polyimide solar array blankets

    NASA Technical Reports Server (NTRS)

    Banks, Bruce A.; Rutledge, Sharon K.; Auer, Bruce M.; Difilippo, Frank

    1990-01-01

    Low Earth Orbital (LEO) atomic oxygen can oxidize SiO2-protected polyimide Kapton solar array blanket material that is not totally protected as a result of pinholes or scratches in the SiO2 coatings. The probability of atomic oxygen reaction upon initial impact is low, thus inviting oxidation by secondary impacts. The secondary impacts can produce atomic oxygen undercutting, which may lead to mechanical failure of the coating and ever-increasing mass loss rates of Kapton. Comparisons of undercutting effects in isotropic plasma asher and directed beam tests are reported. These experimental results are compared with computational undercutting profiles based on Monte Carlo methods, and their implications for the LEO performance of protected polymers are discussed.
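
    A Monte Carlo undercutting computation of the kind compared against here can be caricatured on a 2-D lattice: atoms enter through a coating defect and, on each impact with the polymer, either react (eroding a cell) or scatter diffusely. Everything in this sketch (grid size, reaction probability, scattering rule) is an illustrative assumption, not the authors' model.

      import numpy as np

      def undercut_profile(width=81, depth=60, pinhole=3, n_atoms=100000,
                           p_react=0.1, seed=0):
          """Boolean map of the cavity eroded beneath a pinhole defect."""
          rng = np.random.default_rng(seed)
          solid = np.ones((depth, width), bool)        # polymer under the coating
          lo = width // 2 - pinhole // 2
          for _ in range(n_atoms):
              x, y, dx, dy = rng.integers(lo, lo + pinhole), -1, 0, 1
              for _ in range(50):                      # cap the number of bounces
                  nx, ny = x + dx, y + dy
                  if not (0 <= nx < width and 0 <= ny < depth):
                      break                            # atom escaped the defect
                  if solid[ny, nx]:
                      if rng.random() < p_react:
                          solid[ny, nx] = False        # oxidation removes the cell
                          break
                      dx = int(rng.choice([-1, 0, 1])) # diffuse re-scatter
                      dy = int(rng.choice([-1, 1]))
                  else:
                      x, y = nx, ny                    # travel through the void
          return ~solid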

  13. Annular seed-blanket thorium fuel core concepts for heavy water moderated reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromley, B.P.; Hyland, B.

    2013-07-01

    New reactor concepts to implement thorium-based fuel cycles have been explored to achieve maximum resource utilization. Pressure tube heavy water reactors (PT-HWR) are highly advantageous for implementing the use of thorium-based fuels because of their high neutron economy and on-line re-fuelling capability. The use of heterogeneous seed-blanket core concepts in a PT-HWR, where higher-fissile-content seed fuel bundles are physically separate from lower-fissile-content blanket bundles, allows more flexibility and control in fuel management to maximize the fissile utilization and conversion of fertile fuel. The lattice concept chosen is a 35-element bundle made with a homogeneous mixture of reactor-grade Pu and Th, and with a central zirconia rod to help reduce coolant void reactivity. Several annular heterogeneous seed-blanket core concepts with plutonium-thorium-based fuels in a 700-MWe-class PT-HWR were analyzed, using a once-through thorium (OTT) cycle. Different combinations of seed and blanket fuel were tested to determine the impact on core-average burnup, fissile utilization, power distributions, and other performance parameters. It was found that the various core concepts can achieve a fissile utilization that is up to 30% higher than is currently achieved in a PT-HWR using conventional natural uranium fuel bundles. Up to 67% of the Pu is consumed; up to 43% of the energy is produced from thorium, and up to 363 kg/year of U-233 is produced. Seed-blanket cores with ∼50% content of low-power blanket bundles may require power de-rating (∼58% to 65%) to avoid exceeding maximum limits for peak channel power, bundle power and linear element ratings. (authors)

  14. High temperature lined conduits, elbows and tees

    DOEpatents

    De Feo, Angelo; Drewniany, Edward

    1982-01-01

    A high temperature lined conduit comprising, a liner, a flexible insulating refractory blanket around and in contact with the liner, a pipe member around the blanket and spaced therefrom, and castable rigid refractory material between the pipe member and the blanket. Anchors are connected to the inside diameter of the pipe and extend into the castable material. The liner includes male and female slip joint ends for permitting thermal expansion of the liner with respect to the castable material and the pipe member. Elbows and tees of the lined conduit comprise an elbow liner wrapped with insulating refractory blanket material around which is disposed a spaced elbow pipe member with castable refractory material between the blanket material and the elbow pipe member. A reinforcing band is connected to the elbow liner at an intermediate location thereon from which extend a plurality of hollow tubes or pins which extend into the castable material to anchor the lined elbow and permit thermal expansion. A method of fabricating the high temperature lined conduit, elbows and tees is also disclosed which utilizes a polyethylene layer over the refractory blanket after it has been compressed to maintain the refractory blanket in a compressed condition until the castable material is in place. Hot gases are then directed through the interior of the liner for evaporating the polyethylene and setting the castable material which permits the compressed blanket to come into close contact with the castable material.

  15. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

    NASA Astrophysics Data System (ADS)

    Yu, Guanying; Liu, Xufeng; Liu, Songlin

    2016-10-01

    The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket are evaluated by static magnetic analysis in the ANSYS code. A significant additional magnetic field is produced by the blanket, leading to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which is higher than the acceptable design value of 0.5%. In addition, when one blanket module is taken out for heating purposes, the resulting error field is calculated to be seriously in conflict with the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004).

  16. Increase in transmission loss of a double panel system by addition of mass inclusions to a poro-elastic layer: A comparison between theory and experiment

    NASA Astrophysics Data System (ADS)

    Idrisi, Kamal; Johnson, Marty E.; Toso, Alessandro; Carneal, James P.

    2009-06-01

    This paper is concerned with the modeling and optimization of heterogeneous (HG) blankets, which are used in this investigation to reduce the sound transmission through double panel systems. HG blankets consist of poro-elastic media with small embedded masses, which act similarly to a distributed mass-spring-damper system. HG blankets have shown significant potential to reduce low-frequency radiated sound from structures, where traditional poro-elastic materials have little effect. A mathematical model of a double panel system with an acoustic cavity and an HG blanket was developed using impedance and mobility methods. The predicted responses of the source and the receiving panel due to a point force are validated with experimental measurements. The presented results indicate that proper tuning of the HG blankets can result in broadband noise reduction below 500 Hz with less than 10% added mass.

  17. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    PubMed

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.
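
    The analytical tractability mentioned above comes from conjugacy: with an independent symmetric Dirichlet prior on each row of a cluster's transition matrix, the marginal likelihood of that cluster's pooled transition counts has a closed form, so two candidate partitions can be compared by summing cluster-wise terms. A minimal sketch (the names and the choice alpha = 1 are assumptions):

      import numpy as np
      from scipy.special import gammaln

      def log_marginal(counts, alpha=1.0):
          """Log marginal likelihood of Markov-chain transition counts under
          row-wise symmetric Dirichlet(alpha) priors on the transition rows."""
          counts = np.asarray(counts, float)
          k = counts.shape[1]
          a0 = k * alpha
          return float(np.sum(gammaln(a0) - gammaln(a0 + counts.sum(axis=1))
                              + (gammaln(alpha + counts) - gammaln(alpha)).sum(axis=1)))

      # Merging two clusters is favored when the pooled counts score higher:
      # log_marginal(c1 + c2) > log_marginal(c1) + log_marginal(c2).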

  18. Passive synchronization for Markov jump genetic oscillator networks with time-varying delays.

    PubMed

    Lu, Li; He, Bing; Man, Chuntao; Wang, Shun

    2015-04-01

    In this paper, the synchronization problem of coupled Markov jump genetic oscillator networks with time-varying delays and external disturbances is investigated. By introducing the drive-response concept, a novel mode-dependent control scheme is proposed, which guarantees that the synchronization can be achieved. By applying the Lyapunov-Krasovskii functional method and stochastic analysis, sufficient conditions are established based on passivity theory in terms of linear matrix inequalities. A numerical example is provided to demonstrate the effectiveness of our theoretical results. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Applications of geostatistics and Markov models for logo recognition

    NASA Astrophysics Data System (ADS)

    Pham, Tuan

    2003-01-01

    Spatial covariances based on geostatistics are extracted as representative features of logo or trademark images. These spatial covariances differ from other statistical features for image analysis in that the structural information of an image is independent of the pixel locations and is represented in terms of spatial series. We then design a classifier based on hidden Markov models to make use of these geostatistical sequential data to recognize the logos. High recognition rates are obtained when testing the method against a public-domain logo database.

  20. MHD work related to a self-cooled Pb-17Li blanket with poloidal-radial-toroidal ducts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimann, J.; Barleon, L.; Buehler, L.

    1994-12-31

    For self-cooled liquid metal blankets, MHD pressure drop and velocity distributions are considered critical issues. This paper summarizes MHD work performed for a DEMO-relevant Pb-17Li blanket which uses essential characteristics of a previous ANL design: the coolant flows downwards in the rear poloidal ducts, turns by 180° at the blanket bottom and is distributed from the ascending poloidal ducts into short radial channels which feed the toroidal first wall coolant ducts (aligned with the main magnetic field direction). The flow through the subsequent radial channels is collected again in poloidal channels and the coolant leaves the blanket segment at the top. The blanket design is based on the use of flow channel inserts (FCIs), which present electrically thin conducting walls for MHD, for all ducts except the toroidal first wall coolant channels. MHD-related issues were defined and estimations of the corresponding pressure drops were performed. Previous experimental work included a proof of principle of FCIs and a detailed experiment with a single poloidal-toroidal-poloidal duct (cooperation with ANL). In parallel, a numerical code based on the Core Flow Approximation (CFA) was developed to predict pressure drop and velocity distributions for arbitrary single duct geometries.

  1. Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images

    NASA Astrophysics Data System (ADS)

    Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred

    2011-11-01

    Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using Markov-random-field-based super-resolution mapping (SRM) in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas, and the method is applied to QuickBird images to produce a 0.6 m tree crown map of a city in The Netherlands. The SRM output shows an identification rate of 66%, with commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification obtained with maximum likelihood, support vector machine, and nominal-resolution (2.4 m) SRM approaches.

  2. Direct LiT Electrolysis in a Metallic Fusion Blanket

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Luke

    2016-09-30

    A process that simplifies the extraction of tritium from molten lithium-based breeding blankets was developed. The process is based on the direct electrolysis of lithium tritide using a ceramic Li ion conductor that replaces the molten salt extraction step. Extraction of tritium in the form of lithium tritide in the blankets/targets of fusion/fission reactors is critical in order to maintain low concentrations. This is needed to decrease the potential tritium permeation to the surroundings and large releases from unforeseen accident scenarios. Extraction is complicated due to required low tritium concentration limits and because of the high affinity of tritium for the blanket. This work identified, developed and tested the use of ceramic lithium ion conductors capable of recovering hydrogen and deuterium through an electrolysis step at high temperatures.

  3. Direct LiT Electrolysis In A Metallic Lithium Fusion Blanket

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colon-Mercado, H.; Babineau, D.; Elvington, M.

    2015-10-13

    A process that simplifies the extraction of tritium from molten lithium-based breeding blankets was developed. The process is based on the direct electrolysis of lithium tritide using a ceramic Li ion conductor that replaces the molten salt extraction step. Extraction of tritium in the form of lithium tritide in the blankets/targets of fission/fusion reactors is critical in order to maintain low concentrations. This is needed to decrease the potential tritium permeation to the surroundings and large releases from unforeseen accident scenarios. Because of the high affinity of tritium for the blanket, extraction is complicated at the required low levels. This work identified, developed and tested the use of ceramic lithium ion conductors capable of recovering hydrogen and deuterium through an electrolysis step at high temperatures.

  4. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    NASA Technical Reports Server (NTRS)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.

  5. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. Data on annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution obtained via Bayes' theorem. The Metropolis-Hastings algorithm is used to cope with the high-dimensional state space faced by Monte Carlo methods. This approach also accounts for more uncertainty in parameter estimation, which then yields better predictions of maximum river flow in Sabah.
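
    A random-walk Metropolis-Hastings sampler for the three GEV parameters takes only a few lines. The sketch below assumes flat priors and a fixed Gaussian proposal step, which is a simplification of whatever tuning the study used; note that scipy's genextreme parameterizes the shape as c = -xi.

      import numpy as np
      from scipy.stats import genextreme

      def mh_gev(x, n_iter=20000, step=0.05, seed=0):
          """Posterior samples of (mu, log sigma, xi) for annual maxima x."""
          rng = np.random.default_rng(seed)
          theta = np.array([x.mean(), np.log(x.std()), 0.1])

          def logpost(t):   # flat priors: posterior proportional to GEV likelihood
              return genextreme.logpdf(x, c=-t[2], loc=t[0],
                                       scale=np.exp(t[1])).sum()

          lp, samples = logpost(theta), []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(3)  # random-walk move
              lp_prop = logpost(prop)
              if np.log(rng.random()) < lp_prop - lp:       # MH acceptance test
                  theta, lp = prop, lp_prop
              samples.append(theta.copy())
          return np.array(samples)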

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  7. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of the grey prediction model and a Markov chain, which shows clear improvements for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into state division to calculate the possibility of each research value belonging to each state, reflecting the preference degrees among states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
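
    The grey half of a Grey-Markov model is the classic GM(1,1) recursion: accumulate the series, fit the grey differential equation by least squares over background values, forecast, then de-accumulate. The sketch below uses the standard 0.5-average background values (the very quantity the paper optimizes) and omits the Markov state-correction of the residuals.

      import numpy as np

      def gm11(x0, n_ahead=3):
          """GM(1,1) forecast of series x0, extended n_ahead steps."""
          x0 = np.asarray(x0, float)
          x1 = np.cumsum(x0)                    # accumulated generating operation
          z = 0.5 * (x1[1:] + x1[:-1])          # standard background values
          B = np.column_stack([-z, np.ones_like(z)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.empty_like(x1_hat)        # de-accumulate back to the series
          x0_hat[0], x0_hat[1:] = x1_hat[0], np.diff(x1_hat)
          return x0_hat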

  8. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.

  9. Preliminary Design of a Helium-Cooled Ceramic Breeder Blanket for CFETR Based on the BIT Concept

    NASA Astrophysics Data System (ADS)

    Ma, Xuebin; Liu, Songlin; Li, Jia; Pu, Yong; Chen, Xiangcun

    2014-04-01

    CFETR is the “ITER-like” China fusion engineering test reactor. The design of the breeding blanket is one of the key issues in achieving the required tritium breeding ratio for the self-sufficiency of tritium as a fuel. As one option, a BIT (breeder inside tube) type helium cooled ceramic breeder blanket (HCCB) was designed. This paper presents the design of the BIT-HCCB blanket configuration inside the reactor and its structure, along with neutronics, thermo-hydraulics and thermal stress analyses. These preliminary performance analyses indicate that the design satisfies the requirements and the material allowable limits.

  10. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method, an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, the error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of feature extraction from dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
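
    The DTW stage is the easiest piece to make concrete: a dynamic-programming alignment of two sequences whose accumulated cost serves as the mismatch measure. A minimal sketch of the DTW distance between two 1-D sequences (the SOFM and HMM stages are not shown):

      import numpy as np

      def dtw_distance(a, b):
          """Dynamic time warping distance between 1-D sequences a and b."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])        # local mismatch
                  D[i, j] = cost + min(D[i - 1, j],      # insertion
                                       D[i, j - 1],      # deletion
                                       D[i - 1, j - 1])  # match
          return D[n, m]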

  11. Design of an arc-free thermal blanket

    NASA Technical Reports Server (NTRS)

    Fellas, C. N.

    1981-01-01

    The success of a multilayer thermal blanket in eliminating arcing is discussed. Arcing is eliminated by limiting the surface potential to well below the threshold level for discharge. This is achieved by enhancing the leakage current, which results in conduction of the excess charge to the spacecraft structure. The thermal blanket consists of several layers of thermal control (space approved) materials, bonded together, with Kapton on the outside, arranged in such a way that when the outer surface is charged by electron irradiation, a strong electric field is set up in the Kapton layer, resulting in greatly improved conductivity. The basic material properties utilized in designing this blanket, the method of charge removal, and the optimum thermo-optical properties are summarized.

  12. 77 FR 25999 - PGPV, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-02

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER12-1603-000] PGPV, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding of PGPV, LLC's application for...

  13. Predicted and observed directional dependence of meteoroid/debris impacts on LDEF thermal blankets

    NASA Astrophysics Data System (ADS)

    Drolshagen, Gerhard

    1992-06-01

    The number of impacts from meteoroids and space debris particles to the various Long Duration Exposure Facility (LDEF) rows is calculated using ESABASE/DEBRIS, a 3-D numerical analysis tool. It is based on the latest environment flux models and includes geometrical and directional effects. A detailed comparison of model predictions and actual observations is made for impacts on the thermal blankets which covered the USCR experiment. Impact features on these blankets were studied intensively in European laboratories and hypervelocity impacts for calibration were performed. The thermal blankets were located on all LDEF rows, except 3, 9, and 12. Because of their uniform composition and thickness, these blankets allow a direct analysis of the directional dependence of impacts and provide a unique test case for the latest meteoroid and debris flux models.

  14. Study of multilayer thermal insulation by inverse problems method

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2009-11-01

    The purpose of this paper is to introduce a new method for researching the radiative and thermal properties of materials, with further applications in the design of thermal control systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation that is part of the TCS for prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an inverse heat transfer problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given, as well as a mathematical formulation of the inverse heat conduction problem. The approach was validated in practice on a specimen of a real MLI.

  15. Building Simple Hidden Markov Models. Classroom Notes

    ERIC Educational Resources Information Center

    Ching, Wai-Ki; Ng, Michael K.

    2004-01-01

    Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
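
    In the spirit of that classroom note, a two-state toy HMM and the forward recursion for the likelihood of an observation sequence fit in a few lines; the matrices below are made-up illustrative numbers, not taken from the note.

      import numpy as np

      def hmm_forward(A, B, pi, obs):
          """Likelihood of obs under an HMM with transition matrix A,
          emission matrix B and initial distribution pi."""
          alpha = pi * B[:, obs[0]]
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
          return alpha.sum()

      A = np.array([[0.7, 0.3], [0.4, 0.6]])   # hidden-state transitions
      B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
      pi = np.array([0.5, 0.5])
      print(hmm_forward(A, B, pi, [0, 1, 0]))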

  16. DNA motif alignment by evolving a population of Markov chains.

    PubMed

    Bi, Chengpeng

    2009-01-30

    Deciphering cis-regulatory elements, or de novo motif-finding, in genomes still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima, like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often run independently a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). It is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method that runs multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, is also devised to compare with its PMC counterpart. Experimental studies demonstrate that performance can be improved by using pooled information to run a population of motif samplers. The new PMC algorithm improved convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.

  17. Management of horses with focus on blanketing and clipping practices reported by members of the Swedish and Norwegian equestrian community.

    PubMed

    Hartmann, E; Bøe, K E; Jørgensen, G H M; Mejdell, C M; Dahlborn, K

    2017-03-01

    Limited information is available on the extent to which blankets are used on horses and the owners' reasoning behind clipping the horse's coat. Research on the effects of these practices on horse welfare is scarce, but results indicate that blanketing and clipping may not be necessary from the horse's perspective and can interfere with the horse's thermoregulatory capacities. Therefore, this survey collected robust, quantitative data on the housing routines and management of horses, with a focus on blanketing and clipping practices, as reported by members of the Swedish and Norwegian equestrian community. Horse owners were approached via an online survey, which was distributed to equestrian organizations and social media. Data from 4,122 Swedish and 2,075 Norwegian respondents were collected, of which 91 and 84% of respondents, respectively, reported using blankets on horses during turnout. Almost all respondents owning warmblood riding horses used blankets outdoors (97% in Sweden and 96% in Norway), whereas owners with Icelandic horses and coldblood riding horses used blankets significantly less (P < 0.05). Blankets were mainly used during rainy, cold, or windy weather conditions and in ambient temperatures of 10°C and below. The horse's coat was clipped by 67% of respondents in Sweden and 35% of Norwegian respondents, whereby owners with warmblood horses and horses primarily used for dressage and competition reported clipping the coat most frequently. In contrast to scientific results indicating that recovery time after exercise increases with blankets and that clipped horses have a greater heat loss capacity, only around 50% of respondents agreed with these statements. This indicates that evidence-based information on all aspects of blanketing and clipping has not yet been widely distributed in practice. More research is encouraged, specifically looking at the effect of blankets on sweaty horses being turned out after intense physical exercise and the effect of blankets on social interactions such as mutual grooming. Future efforts should be tailored to disseminate knowledge more efficiently, which can ultimately stimulate thoughtful decision-making by horse owners concerning the use of blankets and clipping of the horse's coat.

  18. A Markov game theoretic data fusion approach for cyber situational awareness

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Cruz, Jose B., Jr.; Haynes, Leonard; Kruger, Martin; Blasch, Erik

    2007-04-01

    This paper proposes an innovative data-fusion/data-mining game theoretic situation awareness and impact assessment approach for cyber network defense. Alerts generated by Intrusion Detection Sensors (IDSs) or Intrusion Prevention Sensors (IPSs) are fed into the data refinement (Level 0) and object assessment (L1) data fusion components. High-level situation/threat assessment (L2/L3) data fusion based on a Markov game model and Hierarchical Entity Aggregation (HEA) is proposed to refine the primitive prediction generated by adaptive feature/pattern recognition and to capture new unknown features. A Markov (stochastic) game method is used to estimate the belief of each possible cyber attack pattern. Game theory captures the nature of cyber conflicts: determination of the attacking-force strategies is tightly coupled to determination of the defense-force strategies, and vice versa. Also, Markov game theory deals with uncertainty and incompleteness of the available information. A software tool is developed to demonstrate the performance of the high-level information fusion for the cyber network defense situation, and a simulation example shows the enhanced understanding of cyber-network defense.

  19. Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.

    PubMed

    Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng

    2018-01-01

    In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovation, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model that identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
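
    The volatility backbone of that model is the GJR-GARCH(1,1) recursion, in which negative returns receive an extra leverage weight. The sketch below runs the recursion for fixed parameters and reads a one-step VaR off an empirical innovation quantile; the Markov-switching of the parameters, the skewed-t likelihood, and the copula/EVT tail treatment are all omitted, and the parameter names are generic.

      import numpy as np

      def gjr_garch_var(returns, omega, alpha, gamma, beta, q=0.01):
          """One-step-ahead VaR (positive number) at level q from a
          GJR-GARCH(1,1) conditional-variance recursion."""
          h = np.empty(len(returns) + 1)
          h[0] = returns.var()                      # crude variance start-up
          for t, r in enumerate(returns):
              leverage = gamma * r**2 * (r < 0)     # extra weight on bad news
              h[t + 1] = omega + alpha * r**2 + leverage + beta * h[t]
          z = returns / np.sqrt(h[:-1])             # standardized innovations
          return -np.sqrt(h[-1]) * np.quantile(z, q)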

  20. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
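
    For reference, the Cole-Cole forward model whose four parameters both inversions estimate can be written directly as a complex resistivity spectrum; the parameterization below is the common SIP convention, which may differ in detail from the one used in the paper.

      import numpy as np

      def cole_cole(omega, rho0, m, tau, c):
          """Cole-Cole complex resistivity: rho0 is the DC resistivity, m the
          chargeability, tau the time constant, c the frequency exponent."""
          return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

      omega = 2 * np.pi * np.logspace(-2, 4, 7)     # example frequency sweep
      print(cole_cole(omega, rho0=100.0, m=0.2, tau=0.01, c=0.5))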

  1. Heating performance of an IC in-blanket ring array

    NASA Astrophysics Data System (ADS)

    Bosia, G.; Ragona, R.

    2015-12-01

    An important factor limiting the use of ICRF as a candidate heating method in a commercial reactor is the evanescence of the fast wave in vacuum and in most of the SOL layer, which imposes proximity of the launching structure to the plasma boundary and causes, at the highest power levels, high RF standing and DC rectified voltages at the plasma periphery, with frequent voltage breakdowns and enhanced local wall loading. In a previous work [1], the concept of an Ion Cyclotron Heating & Current Drive array (and, using a different waveguide technology, a Lower Hybrid array) based on a periodic ring structure, integrated in the reactor blanket first wall and operating at high input power and low power density, was introduced. Based on the above concept, the heating performance of such an array operating on a commercial fusion reactor is estimated.

  2. Aerogel Blanket Insulation Materials for Cryogenic Applications

    NASA Technical Reports Server (NTRS)

    Coffman, B. E.; Fesmire, J. E.; White, S.; Gould, G.; Augustynowicz, S.

    2009-01-01

    Aerogel blanket materials for use in thermal insulation systems are now commercially available and implemented by industry. Prototype aerogel blanket materials were presented at the Cryogenic Engineering Conference in 1997 and by 2004 had progressed to full commercial production by Aspen Aerogels. Today, this new material technology provides superior energy efficiency and enables new design approaches for more cost-effective cryogenic systems. Aerogel processing technology and methods continue to improve, offering a tailorable array of product formulations for many different thermal and environmental requirements. Many different varieties and combinations of aerogel blankets have been characterized using insulation test cryostats at the Cryogenics Test Laboratory of NASA Kennedy Space Center. Detailed thermal conductivity data for a select group of materials are presented for engineering use. Heat transfer evaluations for the entire vacuum pressure range, including ambient conditions, are given. Examples of current cryogenic applications of aerogel blanket insulation are also given. KEYWORDS: cryogenic tanks, thermal insulation, composite materials, aerogel, thermal conductivity, liquid nitrogen boil-off

  3. Acoustic contributions of a sound absorbing blanket placed in a double panel structure: absorption versus transmission.

    PubMed

    Doutres, Olivier; Atalla, Noureddine

    2010-08-01

    The objective of this paper is to propose a simple tool to estimate the absorption versus transmission loss contributions of a multilayered blanket unbonded in a double panel structure and thus guide its optimization. The normal incidence airborne sound transmission loss of the double panel structure, without structure-borne connections, is written in terms of three main contributions: (i) sound transmission loss of the panels, (ii) sound transmission loss of the blanket and (iii) sound absorption due to multiple reflections inside the cavity. The method is applied to four different blankets frequently used in automotive and aeronautic applications: a non-symmetric multilayer made of a screen sandwiched between two porous layers, and three symmetric porous layers having different pore geometries. It is shown that the absorption behavior of the blanket controls the acoustic behavior of the treatment at low and medium frequencies and its transmission loss at high frequencies. An acoustic treatment having poor sound absorption behavior can affect the performance of the double panel structure.

  4. Predicted and observed directional dependence of meteoroid/debris impacts on LDEF thermal blankets

    NASA Technical Reports Server (NTRS)

    Drolshagen, Gerhard

    1993-01-01

    The number of impacts from meteoroids and space debris particles to the various LDEF rows is calculated using ESABASE/DEBRIS, a 3-D numerical analysis tool. It is based on recent reference environment flux models and includes geometrical and directional effects. A comparison of model predictions and actual observations is made for penetrations of the thermal blankets which covered the UHCR experiment. The thermal blankets were located on all LDEF rows, except 3, 9, and 12. Because of their uniform composition and thickness, these blankets allow a direct analysis of the directional dependence of impacts and provide a test case for the latest meteoroid and debris flux models.

  5. An active target for the accelerator-based transmutation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grebyonkin, K.F.

    1995-10-01

    Consideration is given to the possibility of a radical reduction in the power requirements of the proton accelerator of an electronuclear reactor due to neutron multiplication both in the blanket and in a target of active material. The target is supposed to have a fast-neutron spectrum, and the blanket a thermal one. The blanket and the target are separated by a thermal neutron absorber, which is responsible for the neutron decoupling of the active target and blanket. Preliminary estimations are also made which illustrate that realizing this idea can lead to a significant reduction in the power requirements of the proton beam and hence considerably improve the economic characteristics of the electronuclear reactor.

  6. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  7. Quantifying the fate of agricultural nitrogen in an unconfined aquifer: Stream-based observations at three measurement scales

    NASA Astrophysics Data System (ADS)

    Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip; Solder, John E.; Kimball, Briant A.; Mitasova, Helena; Birgand, François

    2016-03-01

    We compared three stream-based sampling methods to study the fate of nitrate in groundwater in a coastal plain watershed: point measurements beneath the streambed, seepage blankets (novel seepage-meter design), and reach mass-balance. The methods gave similar mean groundwater seepage rates into the stream (0.3-0.6 m/d) during two 3-4 day field campaigns despite an order of magnitude difference in stream discharge between the campaigns. At low flow, estimates of flow-weighted mean nitrate concentrations in groundwater discharge ([NO3-]FWM) and nitrate flux from groundwater to the stream decreased with increasing degree of channel influence and measurement scale, i.e., [NO3-]FWM was 654, 561, and 451 µM for point, blanket, and reach mass-balance sampling, respectively. At high flow the trend was reversed, likely because reach mass-balance captured inputs from shallow transient high-nitrate flow paths while point and blanket measurements did not. Point sampling may be better suited to estimating aquifer discharge of nitrate, while reach mass-balance reflects full nitrate inputs into the channel (which at high flow may be more than aquifer discharge due to transient flow paths, and at low flow may be less than aquifer discharge due to channel-based nitrate removal). Modeling dissolved N2 from streambed samples suggested (1) about half of groundwater nitrate was denitrified prior to discharge from the aquifer, and (2) both extent of denitrification and initial nitrate concentration in groundwater (700-1300 µM) were related to land use, suggesting these forms of streambed sampling for groundwater can reveal watershed spatial relations relevant to nitrate contamination and fate in the aquifer.
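
    The flow-weighted mean concentration on which the comparison above turns is a one-line computation over paired seepage and concentration samples; a trivial sketch with hypothetical variable names:

      import numpy as np

      def flow_weighted_mean(seepage, conc):
          """[NO3-]_FWM = sum(q_i * c_i) / sum(q_i), with seepage rates q_i
          (e.g., m/d) and nitrate concentrations c_i (e.g., uM)."""
          q, c = np.asarray(seepage, float), np.asarray(conc, float)
          return (q * c).sum() / q.sum()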

  8. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  9. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  10. MCMC genome rearrangement.

    PubMed

    Miklós, István

    2003-10-01

    As more and more genomes have been sequenced, genomic data is rapidly accumulating. Genome-wide mutations are believed to be more neutral than local mutations such as substitutions, insertions and deletions; therefore phylogenetic investigations based on inversions, transpositions and inverted transpositions are less biased by the assumption of neutral evolution. Although efficient algorithms exist for obtaining the inversion distance of two signed permutations, there is no reliable algorithm when both inversions and transpositions are considered. Moreover, different types of mutations happen at different rates, and it is not clear how to weight them in a distance-based approach. We introduce a Markov chain Monte Carlo method for genome rearrangement based on a stochastic model of evolution, which can estimate the number of different evolutionary events needed to sort a signed permutation. The performance of the method was tested on simulated data, and the estimated numbers of different types of mutations were reliable. Human and Drosophila mitochondrial data were also analysed with the new method. The mixing time of the Markov chain is short both in terms of CPU time and number of proposals. The source code in C is available on request from the author.

  11. Adaptive hidden Markov model-based online learning framework for bearing fault detection and performance degradation monitoring

    NASA Astrophysics Data System (ADS)

    Yu, Jianbo

    2017-01-01

    This study proposes an adaptive-learning-based method for machine fault detection and health degradation monitoring. The kernel of the proposed method is an "evolving" model that uses an unsupervised online learning scheme, in which an adaptive hidden Markov model (AHMM) learns online the dynamic health changes of machines over their full life. A statistical index is developed for recognizing new health states in the machines. Those new health states are then described online by adding new hidden states to the AHMM. Furthermore, the health degradation in machines is quantified online by an AHMM-based health index (HI) that measures the similarity between two density distributions that describe the historic and current health states, respectively. When necessary, the proposed method characterizes the distinct operating modes of the machine, and it can learn online both abrupt and gradual health changes. Our method overcomes some drawbacks of HIs based on fixed monitoring models constructed in the offline phase (e.g., relatively low comprehensibility and applicability). Results from its application in a bearing life test reveal that the proposed method is effective in online detection and adaptive assessment of machine health degradation. This study provides a useful guide for developing a condition-based maintenance (CBM) system that uses an online learning method without considerable human intervention.
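
    The abstract does not give the exact form of the AHMM-based health index, but a common way to score similarity between two state-describing densities is a divergence between Gaussians mapped onto (0, 1]. The sketch below, with hypothetical baseline and current densities, uses a symmetrized Kullback-Leibler divergence; the paper's HI construction may differ.

        import numpy as np

        def kl_gauss(mu0, S0, mu1, S1):
            """KL divergence N(mu0, S0) || N(mu1, S1) in closed form."""
            d = len(mu0)
            S1_inv = np.linalg.inv(S1)
            diff = mu1 - mu0
            return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                          + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

        def health_index(mu_hist, S_hist, mu_cur, S_cur):
            """1.0 = identical to the historic (healthy) density; -> 0 as they diverge."""
            skl = (kl_gauss(mu_hist, S_hist, mu_cur, S_cur)
                   + kl_gauss(mu_cur, S_cur, mu_hist, S_hist))
            return float(np.exp(-skl))

        mu_h, S_h = np.zeros(2), np.eye(2)                  # historic state density
        mu_c, S_c = np.array([0.8, 0.0]), 1.5 * np.eye(2)   # current, drifted density
        print(health_index(mu_h, S_h, mu_c, S_c))           # < 1: degradation signal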

  12. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    PubMed

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  13. Quantum Graphical Models and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leifer, M.S.; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo Ont., N2L 2Y5; Poulin, D.

    Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.

  14. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences, and they can be computed through several approaches. Approaches based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, in order to assess a pattern of interest, these methods have to deal with tail-distribution events for which the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
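
    The level-1 machinery can be sketched numerically for a Markov additive functional: the scaled cumulant generating function is the log spectral radius (Perron root) of a tilted transition matrix, and the rate function is its Legendre-Fenchel transform. The two-state chain and scoring function below are illustrative stand-ins, not LD-SPatt's actual pattern statistics.

        import numpy as np

        def scgf(P, f, theta):
            """log Perron root of the tilted matrix (P_theta)_xy = P_xy * exp(theta f(y))."""
            tilted = P * np.exp(theta * np.asarray(f))[None, :]
            return np.log(max(abs(np.linalg.eigvals(tilted))))

        def rate(P, f, a, thetas=np.linspace(-10, 10, 2001)):
            """Legendre-Fenchel transform I(a) = sup_theta [theta * a - lambda(theta)]."""
            return max(t * a - scgf(P, f, t) for t in thetas)

        P = np.array([[0.9, 0.1],     # two-state chain; state 1 plays the role of
                      [0.2, 0.8]])    # a pattern-occurrence indicator
        f = [0.0, 1.0]
        print(rate(P, f, 0.5))        # exponential cost of seeing state 1 half the time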

  15. Markov models of genome segmentation

    NASA Astrophysics Data System (ADS)

    Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram

    2007-01-01

    We introduce Markov models for the segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that was introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns, and in application to genome analysis this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from the genomes of diverse species; in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
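
    A minimal order-0 version of the Jensen-Shannon segmentation (the paper's contribution is the generalization to higher-order Markov models) picks the cut maximizing the divergence between the compositions of the two halves; the toy chimeric sequence below is hypothetical.

        import numpy as np
        from collections import Counter

        def entropy(seq):
            counts = np.array(list(Counter(seq).values()), dtype=float)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        def jsd(left, right):
            """Jensen-Shannon divergence between the compositions of two segments."""
            n, m = len(left), len(right)
            return entropy(left + right) - (n * entropy(left) + m * entropy(right)) / (n + m)

        def best_cut(seq, margin=20):
            """Cut position maximizing the JSD between the two resulting segments."""
            return max(range(margin, len(seq) - margin),
                       key=lambda i: jsd(seq[:i], seq[i:]))

        seq = "A" * 300 + "G" * 300    # toy chimeric sequence
        print(best_cut(seq))           # -> 300, the true composition boundary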

  16. A Bayesian network model for predicting aquatic toxicity mode ...

    EPA Pesticide Factsheets

    The mode of toxic action (MoA) has been recognized as a key determinant of chemical toxicity, but development of predictive MoA classification models in aquatic toxicology has been limited. We developed a Bayesian network model to classify aquatic toxicity MoA using a recently published dataset containing over one thousand chemicals with MoA assignments for aquatic animal toxicity. Two-dimensional theoretical chemical descriptors were generated for each chemical using the Toxicity Estimation Software Tool. The model was developed through augmented Markov blanket discovery from the dataset of 1098 chemicals with the MoA broad classifications as a target node. From cross validation, the overall precision for the model was 80.2%. The best precision was for the AChEI MoA (93.5%) where 257 chemicals out of 275 were correctly classified. Model precision was poorest for the reactivity MoA (48.5%) where 48 out of 99 reactive chemicals were correctly classified. Narcosis represented the largest class within the MoA dataset and had a precision and reliability of 80.0%, reflecting the global precision across all of the MoAs. False negatives for narcosis most often fell into electron transport inhibition, neurotoxicity or reactivity MoAs. False negatives for all other MoAs were most often narcosis. A probabilistic sensitivity analysis was undertaken for each MoA to examine the sensitivity to individual and multiple descriptor findings. The results show that the Markov blank

  17. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.

  18. Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks

    PubMed Central

    Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang

    2016-01-01

    This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into a single packet and transmits the packet to the remote filter via a distinct lossy channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained that guarantees the stability of the mean estimation error covariance. In particular, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and the transition probabilities of the Markov chains. In contrast to existing related results, our method imposes less restrictive conditions on the systems. Finally, the results are illustrated by simulation examples. PMID:27104541
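
    The abstract does not reproduce the inequality itself, so the sketch below only illustrates the kind of check it enables: compute the spectral radius of a (hypothetical) state matrix and the stationary packet-arrival probability of a two-state Markov channel, then test a classical i.i.d.-loss-style bound rho(A)^2 * (1 - p) < 1. The paper's actual condition involves the chains' transition probabilities directly.

        import numpy as np

        def spectral_radius(A):
            return max(abs(np.linalg.eigvals(A)))

        def stationary_arrival_prob(T):
            """Stationary probability of state 0 ('packet received') of a two-state
            Markov channel with row-stochastic transition matrix T."""
            evals, evecs = np.linalg.eig(T.T)
            pi = np.real(evecs[:, np.argmax(np.real(evals))])
            pi = pi / pi.sum()
            return pi[0]

        A = np.array([[1.2, 0.1],
                      [0.0, 0.9]])     # unstable plant dynamics (hypothetical)
        T = np.array([[0.95, 0.05],
                      [0.60, 0.40]])   # channel mostly delivers and recovers quickly

        rho, p = spectral_radius(A), stationary_arrival_prob(T)
        print(rho ** 2 * (1 - p) < 1)  # illustrative stability check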

  19. Markov and non-Markov processes in complex systems by the dynamical information entropy

    NASA Astrophysics Data System (ADS)

    Yulmetyev, R. M.; Gafarov, F. M.

    1999-12-01

    We consider Markov and non-Markov processes in complex systems using the dynamical information Shannon entropy (DISE) method. The influence and important role of two mutually dependent channels of entropy, alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-time numeral and pattern human memory, and the effect of stress on the dynamical tapping test); the random dynamics of RR intervals in human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and the chaotic dynamics of financial market and ecological system parameters.

  20. Neutronics Evaluation of Lithium-Based Ternary Alloys in IFE Blankets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolodosky, A.; Fratoni, M.

    Lithium is often the preferred choice as breeder and coolant in fusion blankets as it offers excellent heat transfer and corrosion properties, and most importantly, it has a very high tritium solubility and results in very low levels of tritium permeation throughout the facility infrastructure. However, lithium metal vigorously reacts with air and water and exacerbates plant safety concerns. For this reason, over the years numerous blanket concepts have been proposed with the scope of reducing concerns associated with lithium. The European helium cooled pebble bed breeding blanket (HCPB) physically confines lithium within ceramic pebbles. The pebbles reside within a low activation martensitic ferritic steel structure and are cooled by helium. The blanket is composed of the tritium breeding lithium ceramic pebbles and neutron multiplying beryllium pebbles. Other blanket designs utilize lead to lower chemical reactivity; LiPb alone can serve as a breeder, coolant, neutron multiplier, and tritium carrier. Blankets employing LiPb coolants alongside silicon carbide structural components can achieve high plant efficiency, low afterheat, and low operation pressures. This alloy can also be used alongside helium, as in the dual-coolant lead-lithium concept (DCLL); helium is utilized to cool the first wall and structural components made up of low-activation ferritic steel, whereas lithium-lead (LiPb) acts as a self-cooled breeder in the inner channels of the blanket. The helium-cooled steel and lead-lithium alloy are separated by flow channel inserts (usually made of silicon carbide) which thermally insulate the self-cooled breeder region from the helium-cooled steel walls. This creates a LiPb breeder with a much higher exit temperature than the steel, which increases the power cycle efficiency and also lowers the magnetohydrodynamic (MHD) pressure drop [6]. Molten salt blankets with a mixture of lithium, beryllium, and fluorides (FLiBe) offer good tritium breeding, low electrical conductivity and therefore low MHD pressure drop, low chemical reactivity, and extremely low tritium inventory; the addition of sodium (FLiNaBe) has been considered because it retains the properties of FLiBe but also lowers the melting point. Although many of these blanket concepts are promising, challenges still remain. The limited amount of beryllium available poses a problem for ceramic breeders such as the HCPB. FLiBe and FLiNaBe are highly viscous and have a low thermal conductivity. Lithium lead possesses a poor thermal conductivity which can cause problems in both DCLL and LiPb blankets. Additionally, the tritium permeation from these two blankets into plant components can be a problem and must be reduced. Consequently, Lawrence Livermore National Laboratory (LLNL) is attempting to develop a lithium-based alloy, most likely a ternary alloy, which maintains the beneficial properties of lithium (e.g. high tritium breeding and solubility) while reducing overall flammability concerns for use in the blanket of an inertial fusion energy (IFE) power plant. The LLNL concept employs inertial confinement fusion (ICF) through the use of lasers aimed at an indirect-driven target composed of deuterium-tritium fuel. The fusion driver/target design implements the same physics currently being tested at the National Ignition Facility (NIF). The plant uses lithium in both the primary coolant and blanket; therefore, lithium-related hazards are of primary concern.
Although reducing chemical reactivity is the primary motivation for the development of new lithium alloys, the successful candidates will have to guarantee acceptable performance in all their functions. The scope of this study is to evaluate the neutronics performance of a large number of lithium-based alloys in the blanket of the IFE engine and assess their properties upon activation. This manuscript is organized as follows: Section 2 presents the models and methodologies used for the analysis; Section 3 discusses the results; Section 4 summarizes findings and future work.

  1. Method and system to directly produce electrical power within the lithium blanket region of a magnetically confined, deuterium-tritium (DT) fueled, thermonuclear fusion reactor

    DOEpatents

    Woolley, Robert D.

    1999-01-01

    A method for integrating liquid metal magnetohydrodynamic power generation with fusion blanket technology to produce electrical power from a thermonuclear fusion reactor located within a confining magnetic field and within a toroidal structure. A hot liquid metal flows from a liquid metal blanket region into a pump duct of an electromagnetic pump which moves the liquid metal to a mixer, where a gas of predetermined pressure is mixed with the pressurized liquid metal to form a froth mixture. Electrical power is generated by flowing the froth mixture between electrodes in a generator duct. When the froth mixture exits the generator, the gas is separated from the liquid metal and both are recycled.

  2. Hidden markov model for the prediction of transmembrane proteins using MATLAB.

    PubMed

    Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath

    2011-01-01

    Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model-based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more comprehensible model than an existing one for transmembrane protein prediction. MATLAB scripts were built and compiled for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane regions and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good enough in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com and vinaysingh@bhu.ac.in.

  3. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    NASA Astrophysics Data System (ADS)

    Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-04-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the absorbing-set results are plotted from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under different kinds of modes. Finally, weak nodes in the control unit are identified.

  4. Tissue multifractality and hidden Markov model based integrated framework for optimum precancer detection

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Das, Nandan K.; Kurmi, Indrajit; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-10-01

    We report the application of a hidden Markov model (HMM) on multifractal tissue optical properties derived via the Born approximation-based inverse light scattering method for effective discrimination of precancerous human cervical tissue sites from the normal ones. Two global fractal parameters, generalized Hurst exponent and the corresponding singularity spectrum width, computed by multifractal detrended fluctuation analysis (MFDFA), are used here as potential biomarkers. We develop a methodology that makes use of these multifractal parameters by integrating with different statistical classifiers like the HMM and support vector machine (SVM). It is shown that the MFDFA-HMM integrated model achieves significantly better discrimination between normal and different grades of cancer as compared to the MFDFA-SVM integrated model.

  5. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    PubMed

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.

  6. VAMPnets for deep learning of molecular kinetics.

    PubMed

    Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank

    2018-01-02

    There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering of the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs as well as or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.

  7. Incorporating interaction networks into the determination of functionally related hit genes in genomic experiments with Markov random fields

    PubMed Central

    Robinson, Sean; Nevalainen, Jaakko; Pinna, Guillaume; Campalans, Anna; Radicella, J. Pablo; Guyon, Laurent

    2017-01-01

    Motivation: Incorporating gene interaction data into the identification of ‘hit’ genes in genomic experiments is a well-established approach leveraging the ‘guilt by association’ assumption to obtain a network-based hit list of functionally related genes. We aim to develop a method to allow for multivariate gene scores and multiple hit labels in order to extend the analysis of genomic screening data within such an approach. Results: We propose a Markov random field-based method to achieve our aim and show that the particular advantages of our method compared with those currently used lead to new insights in previously analysed data as well as for our own motivating data. Our method additionally achieves the best performance in an independent simulation experiment. The real data applications we consider comprise a survival analysis and differential expression experiment and a cell-based RNA interference functional screen. Availability and implementation: We provide all of the data and code related to the results in the paper. Contact: sean.j.robinson@utu.fi or laurent.guyon@cea.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881978

  8. Random Breakage of a Rod into Unit Lengths

    ERIC Educational Resources Information Center

    Gani, Joe; Swift, Randall

    2011-01-01

    In this article we consider the random breakage of a rod into "L" unit elements and present a Markov chain based method that tracks intermediate breakage configurations. The probability of the time to final breakage for L = 3, 4, 5 is obtained, and the method is shown to extend, in principle, beyond L = 5.

  9. A Markov chain model for studying suicide dynamics: an illustration of the Rose theorem

    PubMed Central

    2014-01-01

    Background High-risk strategies would only have a modest effect on suicide prevention within a population. It is best to incorporate both high-risk and population-based strategies to prevent suicide. This study aims to compare the effectiveness of suicide prevention between high-risk and population-based strategies. Methods A Markov chain illness and death model is proposed to describe suicide dynamics in a population and to examine the effectiveness of suicide prevention when certain parameters of the model are modified. Assuming a population with replacement, the suicide risk of the population was estimated by determining the final state of the Markov model. Results The model shows that targeting the whole population for suicide prevention is more effective than reducing risk in the high-risk tail of the distribution of psychological distress (i.e. the mentally ill). Conclusions The results of this model reinforce the essence of the Rose theorem that lowering the suicidal risk in the population at large may be more effective than reducing the high risk in a small population. PMID:24948330
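
    A minimal absorbing-chain sketch with hypothetical transition numbers illustrates the comparison: absorption probabilities follow from the fundamental matrix N = (I - Q)^(-1), and one can contrast a population-wide hazard reduction with the same reduction confined to the high-distress state.

        import numpy as np

        def absorption_probs(Q, R):
            """B = N R: probability of each absorbing state from each transient state."""
            N = np.linalg.inv(np.eye(len(Q)) - Q)   # fundamental matrix
            return N @ R

        # Transient: [low distress, high distress]; absorbing: [suicide, other death]
        Q = np.array([[0.96, 0.02],
                      [0.10, 0.85]])
        R = np.array([[0.0001, 0.0199],
                      [0.0040, 0.0460]])
        base = absorption_probs(Q, R)[0, 0]

        # Population strategy: cut the suicide hazard by 20% in both states
        Rp = R.copy(); Rp[:, 0] *= 0.8; Rp[:, 1] = 1 - Q.sum(axis=1) - Rp[:, 0]
        # High-risk strategy: the same cut, but only in the high-distress state
        Rh = R.copy(); Rh[1, 0] *= 0.8; Rh[1, 1] = 1 - Q.sum(axis=1)[1] - Rh[1, 0]

        print(absorption_probs(Q, Rp)[0, 0] / base,   # larger reduction
              absorption_probs(Q, Rh)[0, 0] / base)   # smaller reduction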

  10. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
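
    The decoding core is a standard log-space Viterbi recursion with Gaussian emissions over measured current levels. Real nanopore models use one hidden state per k-mer (e.g., 64 states at 3-bp resolution), so the two-state example below is purely illustrative.

        import numpy as np

        def viterbi(obs, log_trans, means, sd):
            """Most likely state path under Gaussian emissions (log-space Viterbi)."""
            n = len(means)
            log_emit = -0.5 * ((obs[:, None] - means[None, :]) / sd) ** 2  # up to a constant
            delta = np.full((len(obs), n), -np.inf)
            psi = np.zeros((len(obs), n), dtype=int)
            delta[0] = np.log(1.0 / n) + log_emit[0]
            for t in range(1, len(obs)):
                scores = delta[t - 1][:, None] + log_trans   # (from_state, to_state)
                psi[t] = scores.argmax(axis=0)
                delta[t] = scores.max(axis=0) + log_emit[t]
            path = [int(delta[-1].argmax())]
            for t in range(len(obs) - 1, 0, -1):
                path.append(int(psi[t, path[-1]]))
            return path[::-1]

        means = np.array([1.0, 3.0])                       # hypothetical current levels
        log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
        obs = np.array([1.1, 0.9, 2.8, 3.2, 1.0])
        print(viterbi(obs, log_trans, means, sd=0.3))      # -> [0, 0, 1, 1, 0]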

  11. Checkerboard seed-blanket thorium fuel core concepts for heavy water moderated reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromley, B.P.; Hyland, B.

    2013-07-01

    New reactor concepts to implement thorium-based fuel cycles have been explored to achieve maximum resource utilization. Pressure tube heavy water reactors (PT-HWR) are highly advantageous for implementing the use of thorium-based fuels because of their high neutron economy and on-line re-fuelling capability. The use of heterogeneous seed-blanket core concepts in a PT-HWR where higher-fissile-content seed fuel bundles are physically separate from lower-fissile-content blanket bundles allows more flexibility and control in fuel management to maximize the fissile utilization and conversion of fertile fuel. The lattice concept chosen was a 35-element bundle made with a homogeneous mixture of reactor grade Pu (about 67 wt% fissile) and Th, and with a central zirconia rod to help reduce coolant void reactivity. Several checkerboard heterogeneous seed-blanket core concepts with plutonium-thorium-based fuels in a 700-MWe-class PT-HWR were analyzed, using a once-through thorium (OTT) cycle. Different combinations of seed and blanket fuel were tested to determine the impact on core-average burnup, fissile utilization, power distributions, and other performance parameters. It was found that various checkerboard core concepts can achieve a fissile utilization that is up to 26% higher than that achieved in a PT-HWR using more conventional natural uranium fuel bundles. Up to 60% of the Pu is consumed; up to 43% of the energy is produced from thorium, and up to 303 kg/year of Pa-233/U-233/U-235 are produced. Checkerboard cores with about 50% of low-power blanket bundles may require power de-rating (65% to 74%) to avoid exceeding maximum limits for channel and bundle powers and linear element ratings. (authors)

  12. A mathematical approach for evaluating Markov models in continuous time without discrete-event simulation.

    PubMed

    van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F

    2013-08-01

    Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
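
    One standard closed form consistent with the main result described above: for a generator matrix Q and discount rate r > 0, the expected discounted occupancy of each state over [0, T] is (Q - rI)^(-1) (exp((Q - rI) T) - I), read off row by row for each starting state. The three-state disease model below is hypothetical.

        import numpy as np
        from scipy.linalg import expm

        def discounted_occupancy(Q, horizon, rate):
            """Expected discounted time in each state over [0, horizon]:
            the integral of exp(-rate*t) exp(Q t) dt, evaluated in closed form."""
            M = Q - rate * np.eye(len(Q))
            return np.linalg.solve(M, expm(M * horizon) - np.eye(len(Q)))

        # Hypothetical well -> sick -> dead model, rates per year
        Q = np.array([[-0.20,  0.18, 0.02],
                      [ 0.05, -0.35, 0.30],
                      [ 0.00,  0.00, 0.00]])
        occ = discounted_occupancy(Q, horizon=10.0, rate=0.03)
        print(occ[0])   # discounted years spent in each state, starting healthy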

  13. The detection of financial crisis using combination of volatility and markov switching models based on real output, domestic credit per GDP, and ICI indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, Etik; Setianingrum, Meganisa

    2018-05-01

    An open economic system not only makes it easier for countries to interact with each other, but also makes it easier for crises to be transmitted. The financial crises that hit Indonesia in 1997-1998 and 2008 severely impacted the economy, so a method to detect crises is required. According to Kaminsky et al. [6], crises can be detected based on several financial indicators such as real output, domestic credit per Gross Domestic Product (GDP), and the Indonesia Composite Index (ICI). This research aims to determine the appropriate combination of volatility and Markov switching models to detect financial crises in Indonesia based on these indicators. A volatility model is used to capture the non-constant variance of the ARMA residuals. Markov switching is an alternative model for time series whose behaviour changes between conditions, or states. In this research we assume three states, namely a low volatility state, a medium volatility state and a high volatility state. The data for each indicator were taken from 1990 to 2016. The results show that an MS-ARCH(3,1) model can detect the financial crises that hit Indonesia in 1997-1998 and 2008 based on the real output, domestic credit per GDP, and ICI indicators.
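
    As a rough illustration on synthetic data, the sketch below fits a three-regime switching-variance model with statsmodels. Note this is a Markov-switching regression with switching variance, used here as a simpler stand-in for the MS-ARCH(3,1) specification the study employs.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        # Synthetic returns: a calm series with a high-volatility "crisis" stretch
        y = np.concatenate([rng.normal(0, 0.5, 300),
                            rng.normal(0, 2.5, 60),
                            rng.normal(0, 0.5, 300)])

        mod = sm.tsa.MarkovRegression(y, k_regimes=3, trend='c',
                                      switching_variance=True)
        res = mod.fit()
        probs = np.asarray(res.smoothed_marginal_probabilities)
        print(probs.shape)   # one axis is time, the other the three regimes; during
                             # the burst, mass should sit on the high-variance regime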

  14. Quantum Enhanced Inference in Markov Logic Networks

    NASA Astrophysics Data System (ADS)

    Wittek, Peter; Gogolin, Christian

    2017-04-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  15. Quantum Enhanced Inference in Markov Logic Networks.

    PubMed

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  16. Quantum Enhanced Inference in Markov Logic Networks

    PubMed Central

    Wittek, Peter; Gogolin, Christian

    2017-01-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning. PMID:28422093

  17. Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)

    NASA Technical Reports Server (NTRS)

    Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV

    1988-01-01

    The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.

  18. Bayesian selection of Markov models for symbol sequences: application to microsaccadic eye movements.

    PubMed

    Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias

    2012-01-01

    Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
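
    With a Dirichlet prior on each context's transition probabilities, the integrated likelihood has a closed form: a product of Dirichlet-multinomial terms over contexts. The sketch below assumes symmetric Dirichlet(1) priors, conditions on the first k symbols, and uses a toy two-symbol sequence standing in for microsaccade orientations; the paper's prior may differ.

        from collections import Counter
        from scipy.special import gammaln

        def log_marginal(seq, order, alphabet):
            """Log integrated likelihood of an order-k Markov model, Dirichlet(1) priors."""
            m = len(alphabet)
            counts = Counter((seq[i - order:i], seq[i])
                             for i in range(order, len(seq)))
            ctx_tot = Counter()
            for (ctx, _), n in counts.items():
                ctx_tot[ctx] += n
            ll = sum(gammaln(m) - gammaln(m + n) for n in ctx_tot.values())
            ll += sum(gammaln(1 + n) for n in counts.values())  # gammaln(1) = 0
            return ll

        seq = "LRLRLRLLRLRLRRLRLRLR"   # hypothetical orientation sequence
        for k in (0, 1, 2):
            print(k, log_marginal(seq, k, "LR"))   # prefer the order with highest value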

  19. Correlated gamma-based hidden Markov model for the smart asthma management based on rescue inhaler usage.

    PubMed

    Son, Junbo; Brennan, Patricia Flatley; Zhou, Shiyu

    2017-05-10

    Asthma is a very common chronic disease that affects a large portion of the population in many nations. Driven by the fast development in sensor and mobile communication technology, a smart asthma management system has become available to continuously monitor the key health indicators of asthma patients. Such data provides opportunities for healthcare practitioners to examine patients not only in the clinic (on-site) but also outside of the clinic (off-site) in their daily life. In this paper, taking advantage of this data availability, we propose a correlated gamma-based hidden Markov model framework, which can reveal and highlight useful information from the rescue inhaler-usage profiles of individual patients for practitioners. The proposed method can provide diagnostic information about the asthma control status of individual patients and can help practitioners to make more informed therapeutic decisions accordingly. The proposed method is validated through both a numerical study and a case study based on real world data. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  1. Preventing hypothermia: comparison of current devices used by the US Army in an in vitro warmed fluid model.

    PubMed

    Allen, Paul B; Salyer, Steven W; Dubick, Michael A; Holcomb, John B; Blackbourne, Lorne H

    2010-07-01

    The purpose of this study was to develop an in vitro torso model constructed with fluid bags and to determine whether this model could be used to differentiate between the heat prevention performance of devices with active chemical or radiant forced-air heating systems and passive heat loss prevention devices. We tested three active (Hypothermia Prevention Management Kit [HPMK], Ready-Heat, and Bair Hugger) and five passive (wool, space blankets, Blizzard blankets, human remains pouch, and Hot Pocket) hypothermia prevention products. Active warming devices included products with chemically or electrically heated systems. Both groups were tested on a fluid model warmed to 37 degrees C versus a control with no warming device. Core temperatures were recorded every 5 minutes for 120 minutes in total. Products that prevent heat loss with an actively heated element performed better than most passive prevention methods. The original HPMK achieved and maintained significantly higher temperatures than all other methods and the controls at 120 minutes (p < 0.05). None of the devices with an actively heated element achieved the sustained 44 degrees C that could damage human tissue if left in place for 6 hours. The best passive methods of heat loss prevention were the Hot Pocket and Blizzard blanket, which performed the same as two of the three active heating methods tested at 120 minutes. Our in vitro fluid bag "torso" model seemed sensitive enough to detect heat loss in the evaluation of several active and passive warming devices. All active and most passive devices were better than wool blankets. Under conditions near room temperature, passive warming methods (Blizzard blanket or the Hot Pocket) were as effective as active warming devices other than the original HPMK. Further studies are necessary to determine how these data translate to field conditions in preventing heat loss in combat casualties.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Three solid-breeder water-cooled blanket concepts have been developed for ITER based on a multilayer configuration. The primary difference among the concepts is in the fabricated form of breeder and multiplier. All the concepts have beryllium for neutron multiplication and solid-breeder temperature control. The blanket design does not use helium gaps or insulator material to control the solid breeder temperature. Lithium oxide (Li₂O) and lithium zirconate (Li₂ZrO₃) are the primary and the backup breeder materials, respectively. The lithium-6 enrichment is 95%. The use of high lithium-6 enrichment reduces the solid breeder volume required in the blanket and consequently the total tritium inventory in the solid breeder material. Also, it increases the blanket capability to accommodate power variation. The multilayer blanket configuration can accommodate up to a factor of two change in the neutron wall loading without violating the different design guidelines. The blanket material forms are sintered products and packed beds of small pebbles. The first concept has a sintered product material (blocks) for both the beryllium multiplier and the solid breeder. The second concept, the common ITER blanket, uses a packed bed breeder and beryllium blocks. The last concept is similar to the first except for the first and the last beryllium zones. Two small layers of beryllium pebbles are located behind the first wall and the back of the last beryllium zone to reduce the total inventory of the beryllium material and to improve the blanket performance. The design philosophy adopted for the blanket is to produce the necessary tritium required for the ITER operation and to operate at power reactor conditions as much as possible. Also, the reliability and the safety aspects of the blanket are enhanced by using low-pressure water coolant and the separation of the tritium purge flow from the coolant system by several barriers.

  3. Using the Pearson Distribution for Synthesis of the Suboptimal Algorithms for Filtering Multi-Dimensional Markov Processes

    NASA Astrophysics Data System (ADS)

    Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.

    2015-08-01

    We consider a method for constructing a suboptimal filter on the basis of approximating the a posteriori probability density of the multidimensional Markov process by Pearson distributions. The proposed method can be used efficiently for approximating asymmetric, excessive, and finite densities.

  4. APT Blanket System Loss-of-Coolant Accident (LOCA) Based on Initial Conceptual Design - Case 2: with Beam Shutdown Only

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamm, L.L.

    1998-10-07

    This report is one of a series of reports that document normal operation and accident simulations for the Accelerator Production of Tritium (APT) blanket heat removal system. These simulations were performed for the Preliminary Safety Analysis Report. This report documents the results of simulations of a Loss-of-Flow Accident (LOFA) where power is lost to all of the pumps that circulate water in the blanket region, the accelerator beam is shut off and neither the residual heat removal nor cavity flood systems operate.

  5. Impact-acoustics inspection of tile-wall bonding integrity via wavelet transform and hidden Markov models

    NASA Astrophysics Data System (ADS)

    Luk, B. L.; Liu, K. P.; Tong, F.; Man, K. F.

    2010-05-01

    The impact-acoustics method utilizes the information contained in the acoustic signals generated by tapping a structure with a small metal object. It offers a convenient and cost-efficient way to inspect tile-wall bonding integrity. However, surface irregularities cause abnormal multiple bounces in practical inspections. The spectral characteristics of those bounces can easily be confused with the signals obtained from different bonding qualities. As a result, they degrade the classic frequency-domain, feature-based classification methods. Another crucial difficulty posed by practical implementation is the additive noise present in real environments, which may also cause feature mismatch and false judgments. To solve these problems, the work described in this paper aims to develop a robust inspection method that applies a model-based strategy, utilizing wavelet-domain features with hidden Markov modelling. It derives a bonding integrity recognition approach with enhanced immunity to surface roughness as well as environmental noise. With the help of specially designed artificial sample slabs, experiments have been carried out with impact-acoustic signals contaminated by real environmental noise acquired under practical inspection conditions. The results are compared with those of the classic method to demonstrate the effectiveness of the proposed approach.

  6. Progress on DCLL Blanket Concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Clement; Abdou, M.; Katoh, Yutai

    2013-09-01

    Under the US Fusion Nuclear Science and Technology Development program, we have selected the Dual Coolant Lead Lithium (DCLL) concept as a reference blanket, which has the potential to be a high performance DEMO blanket design with a projected thermal efficiency of >40%. Reduced activation ferritic/martensitic (RAF/M) steel is used as the structural material. The self-cooled breeder PbLi is circulated for power conversion and for tritium breeding. A SiC-based flow channel insert (FCI) is used as a means for magnetohydrodynamic pressure drop reduction from the circulating liquid PbLi and as a thermal insulator to separate the high-temperature PbLi (~700°C) from the helium-cooled RAF/M steel structure. We are making progress on related R&D needs to address critical Fusion Nuclear Science Facility (FNSF) and DEMO blanket development issues. Serving as the Interface Coordinator for the DCLL blanket concept, we have been developing the mechanical design and performing neutronics, structural and thermal hydraulics analyses of the DCLL TBM module. We have estimated the necessary ancillary equipment that will be needed at the ITER site, and a detailed safety impact report has been prepared. This provided additional understanding of the DCLL blanket concept in preparation for the FNSF and DEMO. This paper is a summary report on the progress of the DCLL TBM design and the R&D for the DCLL blanket concept.

  7. Patchwork sampling of stochastic differential equations

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger; Behn, Ulrich

    2016-03-01

    We propose a method to sample stationary properties of solutions of stochastic differential equations, which is accurate and efficient if there are rarely visited regions or rare transitions between distinct regions of the state space. The method is based on a complete, nonoverlapping partition of the state space into patches on which the stochastic process is ergodic. On each of these patches we run simulations of the process strictly truncated to the corresponding patch, which allows effective simulations also in rarely visited regions. The correct weight for each patch is obtained by counting the attempted transitions between all different patches. The results are patchworked together to cover the whole state space. We extend the concept of truncated Markov chains, which was originally formulated for processes that obey detailed balance, to processes that do not fulfil detailed balance. The method is illustrated by three examples, describing the one-dimensional diffusion of an overdamped particle in a double-well potential, a system of many globally coupled overdamped particles in double-well potentials subject to additive Gaussian white noise, and the overdamped motion of a particle on the circle in a periodic potential subject to a deterministic drift and additive noise. In an appendix we explain how other well-known Markov chain Monte Carlo algorithms can be related to truncated Markov chains.
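
    A two-patch reading of the scheme, for an overdamped particle in a slightly tilted double well: each patch is simulated with attempted exits rejected but counted, and the patch weights follow from balancing the attempted-transition rates, w_L * r_LR = w_R * r_RL. This is a simplified sketch of the idea, not the authors' implementation; in the full method the within-patch histograms would then be reweighted by these factors and patchworked together.

        import numpy as np

        rng = np.random.default_rng(2)
        dt, sigma, steps = 1e-3, 0.7, 200_000
        drift = lambda x: x - x**3 + 0.25   # tilted double well: right well deeper

        def truncated_run(lo, hi, x0):
            """Euler-Maruyama run truncated to [lo, hi]; exits rejected but counted."""
            x, exits = x0, 0
            for _ in range(steps):
                prop = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                if lo <= prop <= hi:
                    x = prop
                else:
                    exits += 1          # attempted transition to the other patch
            return exits / steps

        # Two ergodic patches covering the state space, split near the barrier
        r_lr = truncated_run(-3.0, 0.0, -1.0)
        r_rl = truncated_run(0.0, 3.0, 1.0)
        w_l = r_rl / (r_lr + r_rl)
        print(w_l, 1 - w_l)   # the deeper right well should carry more weight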

  8. Detection of protein complex from protein-protein interaction network using Markov clustering

    NASA Astrophysics Data System (ADS)

    Ochieng, P. J.; Kusuma, W. A.; Haryanto, T.

    2017-05-01

    Detection of complexes, or groups of functionally related proteins, is an important challenge in analysing biological networks. However, existing algorithms to identify protein complexes are insufficient when applied to dense networks of experimentally derived interaction data. We therefore introduce a graph clustering method based on the Markov clustering algorithm to identify protein complexes within highly interconnected protein-protein interaction (PPI) networks. A protein-protein interaction network was first constructed to develop the geometrical network, which was then partitioned using Markov clustering to detect protein complexes. The interest of the proposed method is illustrated by its application to human proteins associated with type II diabetes mellitus. Flow simulation of the MCL algorithm was initially performed, and topological properties of the resultant network were analysed for detection of protein complexes. The results indicate that the proposed method successfully detected a total of 34 complexes, with 11 complexes consisting of overlapping modules and 20 of non-overlapping modules. The major complex consisted of 102 proteins and 521 interactions, with cluster modularity and density of 0.745 and 0.101, respectively. The comparative analysis revealed that MCL outperformed the AP, MCODE and SCPS algorithms, with a high clustering coefficient (0.751), network density and modularity index (0.630). This demonstrates that MCL is the most reliable and efficient graph clustering algorithm for the detection of protein complexes from PPI networks.
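
    The clustering step itself is compact: alternate expansion (a matrix power, spreading flow) and inflation (an elementwise power with column renormalization, sharpening flow) on a column-stochastic matrix until convergence, then read the clusters off the attractors. The toy graph below stands in for a PPI network.

        import numpy as np

        def mcl(adj, expansion=2, inflation=2.0, iters=100, tol=1e-8):
            """Markov clustering on an undirected adjacency matrix."""
            M = adj + np.eye(len(adj))               # self-loops stabilize the iteration
            M = M / M.sum(axis=0, keepdims=True)     # column-stochastic flow matrix
            for _ in range(iters):
                prev = M
                M = np.linalg.matrix_power(M, expansion)
                M = M ** inflation
                M = M / M.sum(axis=0, keepdims=True)
                if np.abs(M - prev).max() < tol:
                    break
            clusters = {}                            # group nodes by their attractor set
            for node in range(len(M)):
                attractors = tuple(np.nonzero(M[:, node] > 1e-6)[0])
                clusters.setdefault(attractors, []).append(node)
            return list(clusters.values())

        adj = np.zeros((6, 6))                       # two triangles joined by one edge
        for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
            adj[a, b] = adj[b, a] = 1.0
        print(mcl(adj))                              # -> the two triangles as clusters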

  9. Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.

    2016-06-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. Using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR), the proposed method is tested. We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method (a) performs well without a priori knowledge of injection noise, (b) can incorporate several noise models and (c) has an output distribution that simplifies false alarm rate control.

  10. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-10-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.

  11. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI

    PubMed Central

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-01-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature. PMID:27054199

  12. A fast exact simulation method for a class of Markov jump processes.

    PubMed

    Li, Yao; Hu, Lili

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
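
    A toy Python sketch of the bucket-sort idea follows; the windowing and hashing are illustrative simplifications (note the re-bucketing caveat in the comments), not the authors' implementation.

      import math
      import random

      def hashing_leaping(rates, t_end, tau, handle_event):
          """Sketch of the bucket-sort idea: firing times inside the window
          [t, t + tau) are hashed into sub-buckets, so events are retrieved in
          time order at roughly constant cost per event, with no global heap."""
          clocks = [random.expovariate(r) for r in rates]        # next firing times
          n_buckets = max(1, int(math.sqrt(len(rates))))
          t = 0.0
          while t < t_end:
              buckets = [[] for _ in range(n_buckets)]
              for i, ti in enumerate(clocks):
                  if ti < t + tau:                               # lands in this window
                      buckets[int((ti - t) / tau * n_buckets)].append(i)
              for bucket in buckets:                             # buckets are time-ordered
                  for i in sorted(bucket, key=lambda j: clocks[j]):
                      handle_event(i, clocks[i])
                      clocks[i] += random.expovariate(rates[i])  # schedule next firing
                      # NOTE: a full implementation re-buckets clocks whose new
                      # firing time still falls inside the current window;
                      # that bookkeeping is omitted here for brevity.
              t += tau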

  13. A passively-safe fusion reactor blanket with helium coolant and steel structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crosswait, Kenneth Mitchell

    1994-04-01

    Helium is attractive for use as a fusion blanket coolant for a number of reasons. It is neutronically and chemically inert, nonmagnetic, and will not change phase during any off-normal or accident condition. A significant disadvantage of helium, however, is its low density and volumetric heat capacity. This disadvantage manifests itself most clearly during undercooling accident conditions such as a loss of coolant accident (LOCA) or a loss of flow accident (LOFA). This thesis describes a new helium-cooled tritium breeding blanket concept which performs significantly better during such accidents than current designs. The proposed blanket uses reduced-activation ferritic steel as a structural material and is designed for neutron wall loads exceeding 4 MW/m². The proposed geometry is based on the nested-shell concept developed by Wong, but some novel features are used to reduce the severity of the first wall temperature excursion. These features include the following: (1) A "beryllium-joint" concept is introduced, which allows solid beryllium slabs to be used as a thermal conduction path from the first wall to the cooler portions of the blanket. The joint concept allows for significant swelling of the beryllium (10 percent or more) without developing large stresses in the blanket structure. (2) Natural circulation of the coolant in the water-cooled shield is used to maintain shield temperatures below 100°C, thus maintaining a heat sink close to the blanket during the accident. This ensures the long-term passive safety of the blanket.

  14. Status and future transition of rapid urbanizing landscape in central Western Ghats - CA based approach

    NASA Astrophysics Data System (ADS)

    Bharath, S.; Rajan, K. S.; Ramachandra, T. V.

    2014-11-01

    The land use changes in forested landscapes are highly complex and dynamic, affected by natural, socio-economic, cultural, political and other factors. Remote sensing (RS) and geographical information system (GIS) techniques, coupled with multi-criteria evaluation functions such as the Markov-cellular automata (CA-Markov) model, help in analysing the intensity and extent of human activities affecting the terrestrial biosphere and in forecasting their future course. Karwar taluk of the Central Western Ghats in Karnataka state, India has seen rapid transitions in its forest cover due to various anthropogenic activities, primarily driven by major industrial activities. A study based on Landsat and IRS derived data along with the CA-Markov method helped in characterizing the patterns and trends of land use changes over the period 2004-2013, and expected transitions were predicted for a set of scenarios through 2013-2022. The analysis reveals a loss of pristine forest cover from 75.51% to 67.36% (1973 to 2013) and an increase in agricultural land as well as built-up area of 8.65% (2013), causing impacts on local flora and fauna. The other factors driving these changes are the aggregated demand for land, local and regional effects of land use activities such as deforestation, improper practices in the expansion of agriculture and infrastructure development, and deteriorating natural resource availability. The spatio-temporal models helped in visualizing ongoing changes as well as predicting likely changes. The CA-Markov based analysis provides insights into the localized changes impacting these regions and can be useful in developing appropriate mitigation and management approaches based on the modelled future impacts. This necessitates immediate measures for minimizing the future impacts.
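
    The Markov half of a CA-Markov analysis, estimating a transition matrix from two land-cover epochs and projecting class shares forward, can be sketched in Python/NumPy as follows; the cellular-automata spatial allocation step is omitted, and all names are illustrative.

      import numpy as np

      def transition_matrix(lc_t1, lc_t2, n_classes):
          """Cross-tabulate two co-registered land-cover rasters (integer class
          codes) to estimate Markov transition probabilities between epochs."""
          counts = np.zeros((n_classes, n_classes))
          for a, b in zip(lc_t1.ravel(), lc_t2.ravel()):
              counts[a, b] += 1
          rows = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

      def project_shares(shares_now, P, steps):
          """Project class area shares forward by repeated Markov steps."""
          shares = np.asarray(shares_now, dtype=float)
          for _ in range(steps):
              shares = shares @ P
          return shares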

  15. A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods

    ERIC Educational Resources Information Center

    Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich

    2013-01-01

    The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…

  16. An Alignment-Free Algorithm in Comparing the Similarity of Protein Sequences Based on Pseudo-Markov Transition Probabilities among Amino Acids

    PubMed Central

    Li, Yushuang; Yang, Jiasheng; Zhang, Yi

    2016-01-01

    In this paper, we have proposed a novel alignment-free method for comparing the similarity of protein sequences. We first encode a protein sequence into a 440-dimensional feature vector consisting of a 400-dimensional Pseudo-Markov transition probability vector among the 20 amino acids, a 20-dimensional content ratio vector, and a 20-dimensional position ratio vector of the amino acids in the sequence. By evaluating the Euclidean distances among the representing vectors, we compare the similarity of protein sequences. We then apply this method to the ND5 dataset consisting of the ND5 protein sequences of 9 species, and the F10 and G11 datasets representing two of the xylanase-containing glycoside hydrolase families, i.e., families 10 and 11. As a result, our method achieves a correlation coefficient of 0.962 with the canonical protein sequence aligner ClustalW on the ND5 dataset, much higher than those of 5 other popular alignment-free methods. In addition, we successfully separate the xylanase sequences of the F10 family and the G11 family and illustrate that the F10 family is more heat stable than the G11 family, consistent with a few previous studies. Moreover, we prove mathematically an identity equation involving the Pseudo-Markov transition probability vector and the amino acid content ratio vector. PMID:27918587
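
    The 440-dimensional descriptor is straightforward to sketch in Python/NumPy, as below; the exact normalisation of the position-ratio vector is an assumption of this sketch, and names are illustrative.

      import numpy as np

      AMINO = "ACDEFGHIKLMNPQRSTVWY"              # the 20 standard amino acids
      IDX = {a: i for i, a in enumerate(AMINO)}

      def feature_vector(seq):
          """440-dim descriptor: 400 transition probabilities + 20 content
          ratios + 20 mean position ratios."""
          n = len(seq)
          trans = np.zeros((20, 20))
          for a, b in zip(seq, seq[1:]):           # adjacent residue pairs
              trans[IDX[a], IDX[b]] += 1
          rows = trans.sum(axis=1, keepdims=True)
          trans = np.divide(trans, rows, out=np.zeros_like(trans), where=rows > 0)
          content = np.array([seq.count(a) / n for a in AMINO])
          position = np.array([np.mean([i + 1 for i, c in enumerate(seq) if c == a]) / n
                               if a in seq else 0.0 for a in AMINO])
          return np.concatenate([trans.ravel(), content, position])

      def distance(seq1, seq2):
          """Euclidean distance between descriptors measures dissimilarity."""
          return float(np.linalg.norm(feature_vector(seq1) - feature_vector(seq2)))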

  17. Line-blanketed model stellar atmospheres applied to Sirius. Ph.D. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.

    1972-01-01

    The primary goal of this analysis is to determine whether the effects of atomic bound-bound transitions on stellar atmospheric structure can be represented well in models. The investigation is based on an approach which is called the method of artificial absorption edges. The method is described, developed, tested, and applied to the problem of fitting a model stellar atmosphere to Sirius. It is shown that the main features of the entire observed spectrum of Sirius can be reproduced to within the observational uncertainty by a blanketed flux-constant model with T_eff = 9700 K and log g = 4.26. The profile of Hγ is reproduced completely within the standard deviations of the measurements except near line center, where non-LTE effects are expected to be significant. The equivalent width of Hγ, the Paschen slope, the Balmer jump, and the absolute flux at 5550 Å all agree with the observed values.

  18. A simplified parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.

  19. Multilocus Association Mapping Using Variable-Length Markov Chains

    PubMed Central

    Browning, Sharon R.

    2006-01-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests. PMID:16685642
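
    A toy Python sketch of the variable-length idea, keeping a longer context only when its next-symbol distribution departs from its parent's, follows; the pruning rule (a simple KL threshold) is an illustrative stand-in for the paper's fitting procedure, and all names are assumptions.

      import math
      from collections import defaultdict

      def fit_vlmc(sequence, alphabet, max_order=4, kl_cut=0.05):
          """Toy variable-length Markov chain: a context is kept only if its
          next-symbol distribution differs enough (in KL) from its parent's."""
          counts = defaultdict(lambda: defaultdict(int))
          for i in range(len(sequence)):
              for k in range(max_order + 1):
                  if i - k < 0:
                      break
                  ctx = tuple(sequence[i - k:i])        # the k preceding symbols
                  counts[ctx][sequence[i]] += 1

          def dist(ctx):
              total = sum(counts[ctx].values())
              return {a: counts[ctx][a] / total for a in alphabet if counts[ctx][a]}

          kept = {(): dist(())}                         # root context always kept
          for ctx in sorted(counts, key=len):
              if not ctx or ctx[1:] not in kept:        # grow only kept branches
                  continue
              p, q = dist(ctx), dist(ctx[1:])           # child vs. parent distribution
              kl = sum(p[a] * math.log(p[a] / q[a]) for a in p if a in q)
              if kl > kl_cut:
                  kept[ctx] = p                         # context adds information
          return kept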

  20. Multilocus association mapping using variable-length Markov chains.

    PubMed

    Browning, Sharon R

    2006-06-01

    I propose a new method for association-based gene mapping that makes powerful use of multilocus data, is computationally efficient, and is straightforward to apply over large genomic regions. The approach is based on the fitting of variable-length Markov chain models, which automatically adapt to the degree of linkage disequilibrium (LD) between markers to create a parsimonious model for the LD structure. Edges of the fitted graph are tested for association with trait status. This approach can be thought of as haplotype testing with sophisticated windowing that accounts for extent of LD to reduce degrees of freedom and number of tests while maximizing information. I present analyses of two published data sets that show that this approach can have better power than single-marker tests or sliding-window haplotypic tests.

  1. A tridiagonal parsimonious higher order multivariate Markov chain model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.

  2. LPTA Versus Tradeoff: How Procurement Methods Can Impact Contract Performance

    DTIC Science & Technology

    2015-06-01

    and Technology; BBP: Better Buying Power; BPA: Blanket Purchase Agreement; CAR: Contract Action Report; COR: Contracting Officer's...Blanket Purchase Agreements (BPAs), which utilize streamlined contracting in the form of orders to award requirements faster. Under IDIQs, GSA vehicles...and BPA agreements the rates are typically pre-negotiated with the set of vendors, leaving little necessity for negotiation and tradeoff tactics

  3. Detection method of financial crisis in Indonesia using MSGARCH models based on banking condition indicators

    NASA Astrophysics Data System (ADS)

    Sugiyanto; Zukhronah, E.; Sari, S. P.

    2018-05-01

    Financial crisis has hit Indonesia several times, creating the need for an early detection system to minimize the impact. One of many methods that can be used to detect a crisis is to model crisis indicators using a combination of volatility and Markov switching models [5]. Several indicators can be used to detect financial crisis; three of them are the difference between the interest rates on deposits and lending, the real interest rate on deposits, and the difference between the real BI rate and the real Fed rate, which can be referred to as banking condition indicators. A volatility model is used to capture conditional variance that changes over time, and the combination of volatility and Markov switching models is used to detect regime changes in the data. The smoothed probability from the combined models can be used to detect the crisis. This research found that the best combined volatility and Markov switching model for the three indicators is MS-GARCH(3,1,1) with a three-state assumption. The crises from mid-1997 until 1998 were successfully detected within a certain range of smoothed probability values for the three indicators.
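
    A rough Python stand-in for this detection pipeline is sketched below using statsmodels' MarkovRegression with switching variance (statsmodels does not provide MS-GARCH itself, so this is a named substitute, not the paper's model); the parameter labels follow statsmodels' conventions, and the threshold is illustrative.

      import numpy as np
      import statsmodels.api as sm   # assumed dependency

      def crisis_signal(indicator, threshold=0.5):
          """Two-regime Markov-switching model with switching variance; the
          smoothed probability of the high-variance regime flags crisis days."""
          model = sm.tsa.MarkovRegression(indicator, k_regimes=2,
                                          switching_variance=True)
          res = model.fit()
          probs = np.asarray(res.smoothed_marginal_probabilities)      # (T, 2)
          variances = [res.params[f"sigma2[{i}]"] for i in range(2)]   # statsmodels labels
          crisis_regime = int(np.argmax(variances))                    # most volatile regime
          return probs[:, crisis_regime] > threshold                   # boolean crisis flags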

  4. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio.
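
    The decoding step is the textbook Viterbi recursion, sketched below in Python/NumPy; the nanopore specifics (3-bp states and an emission model fit to measured currents) are not reproduced.

      import numpy as np

      def viterbi(obs, log_start, log_trans, log_emit):
          """Most probable hidden-state path for an observation index sequence.
          log_trans[i, j] = log P(j | i); log_emit[s, o] = log P(o | s)."""
          n_states, T = log_trans.shape[0], len(obs)
          delta = np.full((T, n_states), -np.inf)   # best log-prob ending in each state
          psi = np.zeros((T, n_states), dtype=int)  # back-pointers
          delta[0] = log_start + log_emit[:, obs[0]]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + log_trans   # (from, to)
              psi[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):             # trace the pointers backwards
              path.append(int(psi[t][path[-1]]))
          return path[::-1]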

  5. Two examples of industrial applications of shock physics research

    NASA Astrophysics Data System (ADS)

    Sanai, Mohsen

    1996-05-01

    An in-depth understanding of shock physics phenomena has led to many industrial applications. Two recent applications discussed in this paper are a method for assessing explosion safety in industrial plants and a bomb-resistant luggage container for widebody aircraft. Our explosion safety assessment is based on frequent use of computer simulation of postulated accidents to model in detail the detonation of energetic materials, the formation and propagation of the resulting airblast, and the projection of fragments of known material and mass. Using a general load-damage analysis technique referred to as the pressure-impulse (PI) method, we have developed a PC-based computer algorithm that includes a continually expanding library of PI load and damage curves, which can predict and graphically display common structural damage modes and the response of humans to postulated explosion accidents. A second commercial application of shock physics discussed here is a bomb-resistant luggage container for widebody aircraft that can protect the aircraft from a terrorist bomb hidden inside the luggage. This hardened luggage container (HLC) relies on blast management and debris containment provided by a flexible flow-through blanket woven from threads made with a strong lightweight material, such as Spectra or Kevlar. This mitigation blanket forms a continuous and seamless shell around the sides of the luggage container that are parallel to the aircraft axis, leaving the two ends of the container unprotected. When an explosion occurs, the mitigation blanket expands into a nearly circular shell that contains the flying debris while directing the flow into the adjacent containers. The HLC concept has been demonstrated through full-scale experiments conducted at SRI. We believe that these two examples represent a broad class of potential industrial hazard applications of the experimental, analytical, and computational tools possessed by the shock physics community.

  6. Parameter Identification Of Multilayer Thermal Insulation By Inverse Problems

    NASA Astrophysics Data System (ADS)

    Nenarokomov, Aleksey V.; Alifanov, Oleg M.; Gonzalez, Vivaldo M.

    2012-07-01

    The purpose of this paper is to introduce an iterative regularization method for investigating the radiative and thermal properties of materials, with further applications in the design of Thermal Control Systems (TCS) of spacecraft. In this paper the radiative and thermal properties (heat capacity, emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation that forms part of the TCS for prospective spacecraft, are estimated. Properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an Inverse Heat Transfer Problem (IHTP). Physical and mathematical models of heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given. A mathematical formulation of the IHTP, based on a sensitivity function approach, is also presented. Practical testing was performed on a specimen of a real MLI. This paper builds on recent research that developed the approach suggested in [1].

  7. Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network

    PubMed Central

    Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu

    2018-01-01

    This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely conditional probability tables, can be calculated from the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the trajectories with an absorbing set, obtained from the differential equations, are plotted and verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified. PMID:29765629
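
    A discrete-time Python/NumPy illustration of propagating state probabilities with an absorbing failure set follows; the paper's continuous-time process, repair models and CBM policies are reduced here to a minimal example with made-up numbers.

      import numpy as np

      def state_probabilities(P, p0, steps):
          """Propagate state probabilities of a discrete-time Markov chain."""
          probs = [np.asarray(p0, dtype=float)]
          for _ in range(steps):
              probs.append(probs[-1] @ P)
          return np.array(probs)

      # Illustrative 3-state element: good -> degraded -> failed (absorbing).
      P = np.array([[0.95, 0.04, 0.01],
                    [0.00, 0.90, 0.10],
                    [0.00, 0.00, 1.00]])            # the failed state absorbs all mass
      traj = state_probabilities(P, [1.0, 0.0, 0.0], steps=50)
      reliability = 1.0 - traj[:, 2]                # P(not yet absorbed) at each step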

  8. Economics of movable interior blankets for greenhouses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G.B.; Fohner, G.R.; Albright, L.D.

    1981-01-01

    A model for evaluating the economic impact of investment in a movable interior blanket was formulated. The method of analysis was net present value (NPV), in which the discounted, after-tax cash flow of costs and benefits was computed for the useful life of the system. An added feature was a random number component which permitted any or all of the input parameters to be varied within a specified range. Results from 100 computer runs indicated that all of the NPV estimates generated were positive, showing that the investment was profitable. However, there was a wide range of NPV estimates, from $16.00/m² to $86.40/m², with a median value of $49.34/m². Key variables allowed to range in the analysis were: (1) the cost of fuel before the blanket is installed; (2) the percent fuel savings resulting from use of the blanket; (3) the annual real increase in the cost of fuel; and (4) the change in the annual value of the crop. The wide range in NPV estimates indicates the difficulty in making general recommendations regarding the economic feasibility of the investment when uncertainty exists as to the correct values for key variables in commercial settings. The results also point out needed research into the effect of the blanket on the crop, and on performance characteristics of the blanket.
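
    The randomised NPV analysis is easy to sketch as a small Monte Carlo in Python/NumPy; every numeric range below is illustrative, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def npv_simulation(n_runs=100, life=10, discount=0.08, capital=10.0):
          """Monte Carlo NPV per m2 of a movable blanket; ranges are made up."""
          results = []
          for _ in range(n_runs):
              fuel_cost = rng.uniform(4.0, 8.0)       # $/m2/yr before the blanket
              savings = rng.uniform(0.30, 0.60)       # fraction of fuel saved
              fuel_growth = rng.uniform(0.00, 0.05)   # real annual fuel cost rise
              crop_effect = rng.uniform(-0.5, 0.5)    # $/m2/yr change in crop value
              npv = -capital
              for year in range(1, life + 1):
                  benefit = fuel_cost * savings * (1 + fuel_growth) ** year + crop_effect
                  npv += benefit / (1 + discount) ** year
              results.append(npv)
          return np.array(results)

      npvs = npv_simulation()
      print(np.median(npvs), npvs.min(), npvs.max())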

  9. Pavement maintenance optimization model using Markov Decision Processes

    NASA Astrophysics Data System (ADS)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for selecting pavement maintenance interventions using the theory of Markov Decision Processes (MDP). The MDP developed in this paper has some particular characteristics which distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model, and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimal pavement maintenance policy.
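
    The dual linear-programming route mentioned in the abstract can be sketched via the standard occupancy-measure LP for average-cost MDPs, as below; this assumes SciPy's linprog and a unichain model, and it omits the paper's additional constraints.

      import numpy as np
      from scipy.optimize import linprog

      def average_cost_policy(P, cost):
          """Average-cost MDP via the occupancy-measure LP:
          minimise sum_sa cost[s,a] * x[s,a] subject to flow balance and
          sum x = 1. P has shape (S, A, S); cost has shape (S, A)."""
          S, A = cost.shape
          c = cost.ravel()
          A_eq = np.zeros((S + 1, S * A))
          for j in range(S):                       # balance: inflow equals outflow
              for s in range(S):
                  for a in range(A):
                      A_eq[j, s * A + a] = P[s, a, j] - (1.0 if s == j else 0.0)
          A_eq[S, :] = 1.0                         # occupancy measures sum to one
          b_eq = np.zeros(S + 1)
          b_eq[S] = 1.0
          res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
          x = res.x.reshape(S, A)                  # stationary state-action frequencies
          return x.argmax(axis=1), res.fun         # action per state, average cost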

  10. Hidden Markov models incorporating fuzzy measures and integrals for protein sequence identification and alignment.

    PubMed

    Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder

    2008-06-01

    Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied for protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, thus further extending the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and the sequence preference involved in protein structures make this fuzzy-architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.

  11. Density-based cluster algorithms for the identification of core sets

    NASA Astrophysics Data System (ADS)

    Lemke, Oliver; Keller, Bettina G.

    2016-10-01

    The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
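
    Of the three algorithms compared, DBSCAN is the most widely available; a minimal sketch of density-based core identification using scikit-learn follows, with illustrative parameters (the CNN variant favoured alongside DBSCAN is not in scikit-learn, so this is a partial stand-in).

      import numpy as np
      from sklearn.cluster import DBSCAN   # assumed dependency

      def find_core_sets(frames, eps=0.5, min_samples=20):
          """Density-based identification of candidate core sets from MD frames
          (rows = frames, columns = collective variables). Noise points
          (label -1) lie outside every core set, as the core-set model needs."""
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(frames)
          cores = {lbl: np.where(labels == lbl)[0]
                   for lbl in set(labels) if lbl != -1}
          return cores, labels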

  12. Markov chains of infinite order and asymptotic satisfaction of balance: application to the adaptive integration method.

    PubMed

    Earl, David J; Deem, Michael W

    2005-04-14

    Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.

  13. A fast exact simulation method for a class of Markov jump processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com

    2015-11-14

    A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.

  14. HIPPI: highly accurate protein family classification with ensembles of HMMs.

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.

  15. Inference of epidemiological parameters from household stratified data

    PubMed Central

    Walker, James N.; Ross, Joshua V.

    2017-01-01

    We consider a continuous-time Markov chain model of SIR disease dynamics with two levels of mixing. For this so-called stochastic households model, we provide two methods for inferring the model parameters—governing within-household transmission, recovery, and between-household transmission—from data of the day upon which each individual became infectious and the household in which each infection occurred, as might be available from First Few Hundred studies. Each method is a form of Bayesian Markov Chain Monte Carlo that allows us to calculate a joint posterior distribution for all parameters and hence the household reproduction number and the early growth rate of the epidemic. The first method performs exact Bayesian inference using a standard data-augmentation approach; the second performs approximate Bayesian inference based on a likelihood approximation derived from branching processes. These methods are compared for computational efficiency and posteriors from each are compared. The branching process is shown to be a good approximation and remains computationally efficient as the amount of data is increased. PMID:29045456

  16. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    Emotion expressions are sometimes mixed with utterance expression in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing the influence of utterance on visual parameters in audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then achieved by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. This article introduces the Fused Hidden Markov Model Inversion method, trained on a neutrally expressed audio-visual corpus, to solve the problem. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.

  17. APT Blanket System Loss-of-Coolant Accident (LOCA) Based on Initial Conceptual Design - Case 4: External Pressurizer Surge Line Break Near Inlet Header

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamm, L.L.

    1998-10-07

    This report is one of a series of reports documenting accident scenario simulations for the Accelerator Production of Tritium (APT) blanket heat removal systems. The simulations were performed in support of the Preliminary Safety Analysis Report (PSAR) for the APT.

  18. APT Blanket System Loss-of-Coolant Accident (LOCA) Analysis Based on Initial Conceptual Design - Case 3: External HR Break at Pump Outlet without Pump Trip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamm, L.L.

    1998-10-07

    This report is one of a series of reports that document normal operation and accident simulations for the Accelerator Production of Tritium (APT) blanket heat removal (HR) system. These simulations were performed for the Preliminary Safety Analysis Report.

  19. Thermally distinct ejecta blankets from Martian craters

    NASA Astrophysics Data System (ADS)

    Betts, B. H.; Murray, B. C.

    1993-06-01

    A study of Martian ejecta blankets is carried out using the high-resolution thermal IR/visible data from the Termoskan instrument aboard the Phobos '88 mission. It is found that approximately 100 craters within the Termoskan data have an ejecta blanket distinct in the thermal infrared (EDITH). These features are examined by (1) a systematic examination of all Termoskan data using high-resolution image processing; (2) a study of the systematics of the data by compiling and analyzing a database consisting of geographic, geologic, and morphologic parameters for a significant fraction of the EDITH and nearby non-EDITH craters; and (3) qualitative and quantitative analyses of localized regions of interest. It is noted that thermally distinct ejecta blankets are excellent locations for future landers and remote sensing because of relatively dust-free surface exposures of material excavated from depth.

  20. Concept of a demonstrational hybrid reactor—a tokamak with molten-salt blanket for 233U fuel production: 1. Concept of a stationary Tokamak as a neutron source

    NASA Astrophysics Data System (ADS)

    Azizov, E. A.; Gladush, G. G.; Dokuka, V. N.; Khayrutdinov, R. R.

    2015-12-01

    On the basis of current understanding of physical processes in tokamaks and taking into account engineering constraints, it is shown that a low-cost facility of a moderate size can be designed within the adopted concept. This facility makes it possible to achieve the power density of neutron flux which is of interest, in particular, for solving the problem of 233U fuel production from thorium. By using a molten-salt blanket, the important task of ensuring the safe operation of such a reactor in the case of possible coolant loss is accomplished. Moreover, in a hybrid reactor with the blanket based on liquid salts, the problem of periodic refueling that is difficult to perform in solid blankets can be solved.

  1. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension, Markov random field features, and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-10-01

    In this paper, we report on our examination of the validity of OCT in identifying changes using a skin cancer texture analysis compiled from Haralick texture features, fractal dimension, the Markov random field method and complex directional features from different tissues. The described features have been used to detect specific spatial characteristics which can differentiate healthy tissue from diverse skin cancers in cross-section OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity have been calculated in various directions. A box-counting method is performed to evaluate the fractal dimension of skin probes. Markov random fields have been used to enhance the quality of the classification. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source, delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.

  2. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    NASA Astrophysics Data System (ADS)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. We consider that the flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each of the classes. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways, namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals, we simulate the behavior of the population and compare the results. We find that the second method describes the behavior of the population more accurately when compared to the observed values in a direct simulation of the Markov chain.

  3. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
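
    The basic reweighting step, with MCMC samples standing in for iid draws, is sketched below in Python/NumPy; the paper's regenerative variance estimation and multiple-chain construction are not reproduced.

      import numpy as np

      def importance_estimate(f_vals, log_pi, log_pi1):
          """Self-normalised importance sampling: samples came from pi1
          (possibly via MCMC) and we estimate E_pi[f]; both log-densities
          may be unnormalised, evaluated at the same sample points."""
          log_w = log_pi - log_pi1                # log importance weights
          w = np.exp(log_w - log_w.max())         # stabilise before normalising
          w /= w.sum()
          return float(np.dot(w, f_vals))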

  4. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  5. Testing Seam Concepts for Advanced Multilayer Insulation

    NASA Technical Reports Server (NTRS)

    Chato, D. J.; Johnson, W. L.; Alberts, Samantha J.

    2017-01-01

    Multilayer insulation (MLI) is considered the state-of-the-art insulation for cryogenic propellant tanks in the space environment. MLI traditionally consists of multiple layers of metalized films separated by low-conductivity spacers. In order to better understand some of the details of MLI design and construction, GRC has been investigating the heat loads caused by multiple types of seams. To date, testing has been completed with 20-layer and 50-layer blankets. Although a truly seamless blanket is not practical, a blanket lay-up where each individual layer was overlapped and taped together was used as a baseline for the other seam tests. Other seam concepts tested included: an overlap where the complete blanket was overlapped on top of itself; a butt joint where the blankets were just trimmed and butted up against each other; and a staggered butt joint where the seam in the outer layers is offset from the seam in the inner layers. Measured performance is based on a preliminary analysis of rod calibration tests conducted prior to the start of seam testing. Baseline performance for the 50-layer blanket showed a measured heat load of 0.46 Watts, with a degradation to about 0.47 Watts in the seamed blankets. Baseline performance for the 20-layer blanket showed a measured heat load of 0.57 Watts. Heat loads for the seamed tests are still being analyzed. So far, the analysis has suggested the need for corrections due to heat loads from both the heater leads and the instrumentation wires. A careful re-examination of the calibration test results with these factors accounted for is also underway. This presentation will discuss the theory of seams in MLI, our test results to date, and the uncertainties in our measurements.

  6. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    NASA Astrophysics Data System (ADS)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe, and for evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based in BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.

  7. Sieve estimation in a Markov illness-death process under dual censoring.

    PubMed

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials.

  8. Influence of nuclear data uncertainties on thorium fusion-fission hybrid blanket nucleonic performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, E.T.; Mathews, D.R.

    1979-09-01

    The fusion-fission hybrid blanket proposed for the Tandem Mirror Hybrid Reactor employs thorium metal as the fertile material. Based on the ENDF/B-IV nuclear data, the 233U and tritium production rates averaged over the blanket lifetime of about 9 MW-yr/m² are 0.76 and 1.12 per D-T neutron, and the blanket energy multiplication is 4.8. At the time of the blanket discharge, the 233U enrichment in the thorium metal is about 3%. The thorium cross sections given by ENDF/B-IV and V were reviewed, and the important partial cross sections such as (n,2n), (n,3n), and (n,γ) were found to be known to ±10 to 20% in the respective energy ranges of interest. A sensitivity study showed that the 233U and tritium production rates and blanket energy multiplication are relatively sensitive to the thorium capture and fission cross section uncertainties. In order to predict the above parameters within ±1%, the Th(n,γ) and Th(n,νf) cross sections must be measured within about ±2% in the energy ranges 3 to 3000 keV and 13.5 to 15 MeV, respectively.

  9. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.

  10. Landslide Inventory Mapping from Bitemporal 10 m SENTINEL-2 Images Using Change Detection Based Markov Random Field

    NASA Astrophysics Data System (ADS)

    Qin, Y.; Lu, P.; Li, Z.

    2018-04-01

    Landslide inventory mapping is essential for hazard assessment and mitigation. In most previous studies, landslide mapping was achieved by visual interpretation of aerial photos and remote sensing images. However, such methods are labor-intensive and time-consuming, especially over large areas. Although a number of semi-automatic landslide mapping methods have been proposed over the past few years, limitations remain in terms of their applicability over different study areas and data, and there is large room for improvement in accuracy and degree of automation. For these reasons, we developed a change detection-based Markov Random Field (CDMRF) method for landslide inventory mapping. The proposed method mainly includes two steps: 1) change detection-based multi-thresholding for training sample generation and 2) MRF for landslide inventory mapping. Compared with previous methods, the proposed method has three advantages: 1) it combines multiple image difference techniques with a multi-threshold method to generate reliable training samples; 2) it takes the spectral characteristics of landslides into account; and 3) it is highly automatic, with little parameter tuning. The proposed method was applied for regional landslide mapping from 10 m Sentinel-2 images in Western China. Results corroborated the effectiveness and applicability of the proposed method, especially its capability for rapid landslide mapping. Some directions for future research are offered. This study, to our knowledge, is the first attempt to map landslides from free and medium resolution satellite (i.e., Sentinel-2) images in China.

  11. Protein sequences clustering of herpes virus by using Tribe Markov clustering (Tribe-MCL)

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Siswantining, T.; Febriyani, N. L.; Novitasari, I. D.; Cahyaningrum, R. D.

    2017-07-01

    The herpes virus can be found anywhere, and one of its important characteristics is its ability to cause acute and, at certain times, chronic infection, as a result of which severe complications can occur. The herpes virus is composed of DNA enclosed in protein and wrapped in glycoproteins. In this work, the herpes virus family is classified and analyzed by clustering protein sequences using the Tribe Markov Clustering (Tribe-MCL) algorithm. Tribe-MCL is an efficient clustering method based on the theory of Markov chains that classifies protein families from protein sequences using pre-computed sequence similarity information. We implement the Tribe-MCL algorithm using the open source program R. We select 24 protein sequences of herpes viruses obtained from the NCBI database. The dataset consists of three types of glycoprotein, B, F, and H, each type covering eight herpes viruses that infect humans. Based on our simulations using different inflation factors r = 1.5, 2, and 3, we find varying numbers of clusters: the greater the inflation factor, the greater the number of clusters. Proteins of the same type are grouped together in the same cluster.

  12. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  13. Markov Chain-Based Acute Effect Estimation of Air Pollution on Elder Asthma Hospitalization

    PubMed Central

    Luo, Li; Zhang, Fengyi; Sun, Lin; Li, Chunyang; Huang, Debin; Han, Gao; Wang, Bin

    2017-01-01

    Background: Asthma causes a substantial economic and health care burden and is susceptible to air pollution; the effect is particularly significant for elderly asthma patients (older than 65). The aim of this study is to investigate the Markov-based acute effects of air pollution on elderly asthma hospitalizations, in the form of transition probabilities. Methods: A retrospective, population-based study design was used to assess temporal patterns in hospitalizations for asthma in a region of Sichuan province, China. Approximately 12 million residents were covered during this period. Relative risk analysis and a Markov chain model were employed for daily hospitalization state estimation. Results: Among PM2.5, PM10, NO2, and SO2, only SO2 was significant. When air pollution is severe, the transition probability from a low-admission state (previous day) to a high-admission state (next day) is 35.46%, while it is 20.08% when air pollution is mild. In particular, for the female-cold subgroup, the counterparts are 30.06% and 0.01%, respectively. Conclusions: SO2 was a significant risk factor for elderly asthma hospitalization. When air pollution worsened, the transition probabilities from each state to high-admission states increased dramatically. This phenomenon was especially evident in the female-cold subgroup (cold-season female admissions). Based on our work, admission forecasting, asthma intervention, and corresponding healthcare allocation can be carried out. PMID:29147496
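
    Estimating such transition probabilities from a daily admission-level series, stratified by a pollution condition, can be sketched as below in Python/NumPy; the three-level coding and all names are illustrative.

      import numpy as np

      def admission_transitions(states, n_levels=3, condition=None):
          """Day-to-day transition probabilities between admission levels
          (0 = low, 1 = medium, 2 = high), optionally restricted to days where
          a pollution condition holds on the 'from' day."""
          counts = np.zeros((n_levels, n_levels))
          for t in range(len(states) - 1):
              if condition is not None and not condition[t]:
                  continue                       # keep only qualifying 'from' days
              counts[states[t], states[t + 1]] += 1
          rows = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)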

  14. A torso model comparison of temperature preservation devices for use in the prehospital environment.

    PubMed

    Zasa, Michele; Flowers, Neil; Zideman, David; Hodgetts, Timothy J; Harris, Tim

    2016-06-01

    Hypothermia is an independent predictor of increased morbidity and mortality in patients with trauma. Several strategies and products have been developed to minimise patients' heat loss in the prehospital arena, but there is little evidence to inform the clinician concerning their effectiveness. We used a human torso model consisting of two 5.5-litre fluid bags to simultaneously compare four passive (space blanket, bubble wrap, Blizzard blanket, ambulance blanket) and one active (Ready-Heat II blanket) temperature preservation products. A torso model without any temperature preservation device provided a control. For each test, the torso models were warmed to 37°C and left outdoors. Core temperatures were recorded every 10 min for 1 h in total; tests were repeated 10 times. A significant difference in temperature was detected among groups at 30 and 60 min (F (1.29, 10.30)=103.58, p<0.001 and F (1.64, 14.78)=163.28, p<0.001, respectively). Mean temperature reductions (95% CI) after 1 h of environmental exposure were the following: 11.6 (10.3 to 12.9) °C in control group, 4.5 (3.9 to 5.1) °C in space blanket group, 3.6 (3 to 4.3) °C in bubble-wrap group, 2.1 (1.7 to 2.5) °C in Blizzard blanket group, 6.1 (5.8 to 6.5) °C in ambulance blanket group and 1.1 (0.7 to 1.6) °C in Ready-Heat II blanket group. In this study, using a torso model based on two 5 L dialysate bags we found the Ready-Heat II heating blanket and Blizzard blanket were associated with lower rates of heat loss after 60 min environmental exposure than the other devices tested.

  15. Decomposition of conditional probability for high-order symbolic Markov chains.

    PubMed

    Melnik, S S; Usatenko, O V

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.

  16. Decomposition of conditional probability for high-order symbolic Markov chains

    NASA Astrophysics Data System (ADS)

    Melnik, S. S.; Usatenko, O. V.

    2017-07-01

    The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
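
    The decomposition above refines the plain counting estimator of the conditional probability function. A minimal sketch of that baseline estimator for a high-order symbolic chain (toy binary sequence, hypothetical order):

```python
from collections import Counter, defaultdict

def conditional_probs(sequence, order):
    """Empirical estimate of P(next symbol | previous `order` symbols)
    for a symbolic sequence over a finite alphabet."""
    context_counts = defaultdict(Counter)
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        context_counts[context][sequence[i]] += 1
    return {
        ctx: {sym: n / sum(cnt.values()) for sym, n in cnt.items()}
        for ctx, cnt in context_counts.items()
    }

# Binary toy sequence; with order=2 each probability is conditioned
# on the two preceding symbols.
seq = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
for ctx, dist in conditional_probs(seq, order=2).items():
    print(ctx, dist)
```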

  17. First-wall structural analysis of the self-cooled water blanket concept

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, D.A.; Steiner, D.; Embrechts, M.J.

    1986-01-01

    A novel blanket concept recently proposed utilizes water with small amounts of dissolved lithium compound as both coolant and breeder. The inherent simplicity of this idea should result in an attractive breeding blanket for fusion reactors. In addition, the available base of relevant information accumulated through water-cooled fission reactor programs should greatly facilitate the R and D effort required to validate this concept. First-wall and blanket designs have been developed first for the tandem mirror reactor (TMR) due to the obvious advantages of this geometry. First-wall and blanket designs will also be developed for toroidal reactors. A simple plate design with coolant tubes welded on the back (side away from plasma) was chosen as the first wall for the TMR application. Dimensions and materials were chosen to minimize temperature differences and thermal stresses. A finite element code (STRAW), originally developed for the analysis of core components subjected to high-pressure transients in the fast breeder program, was utilized to evaluate stresses in the first wall.

  18. Inferring animal densities from tracking data using Markov chains.

    PubMed

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are keys to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
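
    Under the paper's assumptions (movements among habitat cells form a time-homogeneous Markov chain), relative density corresponds to the chain's stationary distribution. A minimal sketch with a hypothetical 3-cell transition matrix:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12):
    """Stationary distribution of a row-stochastic transition matrix P,
    obtained by power iteration; for an ergodic chain this gives the
    long-run fraction of time spent in each habitat cell."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    while True:
        new = pi @ P
        if np.abs(new - pi).max() < tol:
            return new
        pi = new

# Hypothetical 3-cell transition matrix estimated from tracking data.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
print(stationary_distribution(P))  # relative density per habitat cell
```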

  19. Markov chains and semi-Markov models in time-to-event analysis.

    PubMed

    Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J

    2013-10-25

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.

  20. Markov chains and semi-Markov models in time-to-event analysis

    PubMed Central

    Abner, Erin L.; Charnigo, Richard J.; Kryscio, Richard J.

    2014-01-01

    A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields. PMID:24818062
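
    A toy illustration of the idea: a three-state chain (healthy, ill, dead) with an absorbing competing-risk state can be simulated directly. The transition probabilities below are invented for illustration, not taken from the paper:

```python
import random

# States: 0 = healthy, 1 = ill, 2 = dead (absorbing).  Hypothetical
# per-cycle transition probabilities; each row must sum to 1.
P = [[0.90, 0.07, 0.03],
     [0.10, 0.70, 0.20],
     [0.00, 0.00, 1.00]]

def time_to_death(max_cycles=200):
    """Simulate one subject, returning the cycle at which the
    absorbing state 2 is reached (censored at max_cycles)."""
    state = 0
    for t in range(1, max_cycles + 1):
        state = random.choices([0, 1, 2], weights=P[state])[0]
        if state == 2:
            return t
    return max_cycles  # censored observation

times = [time_to_death() for _ in range(10_000)]
print(sum(times) / len(times))  # crude mean time-to-event estimate
```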

  1. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
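
    SURE itself bounds death-state probabilities of semi-Markov models; for the special case of a pure Markov model those probabilities can be computed exactly from the generator matrix. A sketch with assumed failure and recovery rates (not SURE's algorithm):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state reliability model: 0 = fault-free, 1 = one
# component failed (recovery in progress), 2 = system failure (death state).
lam, mu, fail = 1e-4, 1e-2, 1e-5   # per-hour rates (assumed values)
Q = np.array([[-lam,          lam,   0.0],
              [  mu, -(mu + fail),  fail],
              [ 0.0,          0.0,   0.0]])  # generator; rows sum to 0

p0 = np.array([1.0, 0.0, 0.0])               # start in the fault-free state
for t in (10.0, 100.0, 1000.0):              # mission times in hours
    pt = p0 @ expm(Q * t)                    # state probabilities at time t
    print(f"P(system failure by t={t:g} h) = {pt[2]:.3e}")
```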

  2. Blanket design and optimization demonstrations of the first wall/blanket/shield design and optimization system (BSDOS).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Nuclear Engineering Division

    2005-05-01

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for a satisfactory performance are the main parameters which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be considered simultaneously in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide a state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates its use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design capabilities.

  3. Blanket Design and Optimization Demonstrations of the First Wall/Blanket/Shield Design and Optimization System (BSDOS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Yousry

    2005-05-15

    In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for a satisfactory performance are the main parameters which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be considered simultaneously in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design satisfying all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide a state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates its use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design for demonstrating some of the BSDOS blanket design capabilities.

  4. SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere

    NASA Astrophysics Data System (ADS)

    Creasey, Peter; Lang, Annika

    2018-04-01

    SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm based on Markov properties and one-dimensional fast Fourier transforms that generates samples on an n × n grid in O(n² log n) and efficiently derives the necessary conditional covariance matrices.

  5. Modeling the coupled return-spread high frequency dynamics of large tick assets

    NASA Astrophysics Data System (ADS)

    Curato, Gianbiagio; Lillo, Fabrizio

    2015-01-01

    Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models on Nasdaq stock data and show that the model reproduces the statistical properties of real data remarkably well.

  6. Markov state models of protein misfolding

    NASA Astrophysics Data System (ADS)

    Sirur, Anshul; De Sancho, David; Best, Robert B.

    2016-02-01

    Markov state models (MSMs) are an extremely useful tool for understanding the conformational dynamics of macromolecules and for analyzing MD simulations in a quantitative fashion. They have been extensively used for peptide and protein folding, for small molecule binding, and for the study of native ensemble dynamics. Here, we adapt the MSM methodology to gain insight into the dynamics of misfolded states. To overcome possible flaws in root-mean-square deviation (RMSD)-based metrics, we introduce a novel discretization approach, based on coarse-grained contact maps. In addition, we extend the MSM methodology to include "sink" states in order to account for the irreversibility (on simulation time scales) of processes like protein misfolding. We apply this method to analyze the mechanism of misfolding of tandem repeats of titin domains, and how it is influenced by confinement in a chaperonin-like cavity.

  7. Markov switching multinomial logit model: An application to accident-injury severities.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2009-07-01

    In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.
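
    A minimal sketch of the generative side of such a model: a latent two-state Markov chain switches between two sets of multinomial logit coefficients. All numbers and category labels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unobserved roadway-safety states, each with its own multinomial
# logit coefficients over three severity outcomes (hypothetical values).
TRANS = np.array([[0.95, 0.05],    # P(state_t | state_{t-1})
                  [0.20, 0.80]])
BETA = {0: np.array([0.0, -1.0, -2.5]),   # state 0: minor outcomes dominate
        1: np.array([0.0, -0.2, -0.8])}   # state 1: severe outcomes likelier

def sample_severities(T=10):
    state, outcomes = 0, []
    for _ in range(T):
        state = rng.choice(2, p=TRANS[state])      # latent Markov switch
        logits = BETA[state]
        p = np.exp(logits) / np.exp(logits).sum()  # multinomial logit probs
        # Illustrative labels: 0 = no injury, 1 = injury, 2 = fatality.
        outcomes.append(rng.choice(3, p=p))
    return outcomes

print(sample_severities())
```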

  8. Appraisal of jump distributions in ensemble-based sampling algorithms

    NASA Astrophysics Data System (ADS)

    Dejanic, Sanda; Scheidegger, Andreas; Rieckermann, Jörg; Albert, Carlo

    2017-04-01

    Sampling Bayesian posteriors of model parameters is often required for making model-based probabilistic predictions. For complex environmental models, standard Markov chain Monte Carlo (MCMC) methods are often infeasible because they require too many sequential model runs. Therefore, we focused on ensemble methods that use many Markov chains in parallel, since they can be run on modern cluster architectures. Little is known about how to choose the best performing sampler for a given application. A poor choice can lead to an inappropriate representation of posterior knowledge. We assessed two different jump moves, the stretch and the differential evolution move, underlying, respectively, the software packages EMCEE and DREAM, which are popular in different scientific communities. For the assessment, we used analytical posteriors with features as they often occur in real posteriors, namely high dimensionality, strong non-linear correlations or multimodality. For posteriors with non-linear features, standard convergence diagnostics based on sample means can be insufficient. Therefore, we resorted to an entropy-based convergence measure. We assessed the samplers by means of their convergence speed, robustness and effective sample sizes. For posteriors with strongly non-linear features, we found that the stretch move outperforms the differential evolution move, w.r.t. all three aspects.
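
    The stretch move assessed here (the move underlying EMCEE, due to Goodman and Weare 2010) is compact enough to sketch. A minimal serial implementation on a toy Gaussian posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):                      # toy 2-d standard-normal posterior
    return -0.5 * np.sum(x**2)

def stretch_step(walkers, a=2.0):
    """One sweep of the affine-invariant stretch move: each walker is
    moved along the line toward a randomly chosen partner walker."""
    n, d = walkers.shape
    for k in range(n):
        j = rng.choice([i for i in range(n) if i != k])  # partner walker
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a    # stretch factor
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        # Accept with probability min(1, z^(d-1) p(Y)/p(X_k)).
        log_ratio = (d - 1) * np.log(z) + log_post(proposal) - log_post(walkers[k])
        if np.log(rng.random()) < log_ratio:
            walkers[k] = proposal
    return walkers

walkers = rng.normal(size=(20, 2))    # ensemble of 20 parallel chains
for _ in range(500):
    walkers = stretch_step(walkers)
print(walkers.mean(axis=0), walkers.std(axis=0))  # roughly 0 and 1
```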

  9. Handwritten digit recognition using HMM and PSO based on strokes

    NASA Astrophysics Data System (ADS)

    Yan, Liao; Jia, Zhenhong; Yang, Jie; Pang, Shaoning

    2010-07-01

    A new method for handwritten digit recognition based on the hidden Markov model (HMM) and particle swarm optimization (PSO) is proposed. The method defines 24 directional strokes, compensating for the sensitivity of traditional methods to the choice of starting point while also reducing the ambiguity caused by shakes. It exploits the excellent global convergence of PSO, improving the probability of finding the global optimum and avoiding local minima. Experimental results demonstrate that, compared with traditional methods, the proposed method improves the recognition rate for most handwritten digits.

  10. Decoding and modelling of time series count data using Poisson hidden Markov model and Markov ordinal logistic regression models.

    PubMed

    Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I

    2018-01-01

    Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix, are explained here. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage times between the states were estimated, and the 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
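
    The Viterbi step used above is a standard dynamic program. A minimal log-space sketch for a Poisson HMM, with a toy transition matrix and the reported state means used only as illustrative values:

```python
import numpy as np
from scipy.stats import poisson

def viterbi_poisson(counts, log_A, log_pi, means):
    """Most likely hidden state sequence for a Poisson HMM
    (log-space Viterbi recursion to avoid underflow)."""
    T, m = len(counts), len(means)
    log_B = np.array([poisson.logpmf(counts, mu) for mu in means]).T  # (T, m)
    delta = log_pi + log_B[0]
    back = np.zeros((T, m), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]                 # backtrack from the end
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Hypothetical 3-state model loosely echoing the reported 'Low',
# 'Moderate' and 'High' states (means 1.4, 6.6, 20.2).
means = np.array([1.4, 6.6, 20.2])
A = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
counts = np.array([0, 2, 1, 7, 5, 8, 22, 19, 25, 6, 1])
print(viterbi_poisson(counts, np.log(A), np.log(np.full(3, 1/3)), means))
```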

  11. Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes

    PubMed Central

    Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide

    2017-01-01

    Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889

  12. A Space Acquisition Leading Indicator Based on System Interoperation Maturity

    DTIC Science & Technology

    2010-12-01

    ...contamination in delivered hardware, improper use of soldering materials, improper installation of thermal blankets, and missing test procedure documentation... the first GEO integrated payload and spacecraft successfully completed thermal vacuum (TVAC) testing in November 2009.

  13. 18 CFR 284.403 - Code of conduct for persons holding blanket marketing certificates.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Code of conduct for persons holding blanket marketing certificates (18 CFR 284.403, Conservation of Power and Water Resources)... information upon which it billed the prices it charged for the natural gas sold pursuant to its market based...

  14. A semi-Markov model for mitosis segmentation in time-lapse phase contrast microscopy image sequences of stem cell populations.

    PubMed

    Liu, An-An; Li, Kang; Kanade, Takeo

    2012-02-01

    We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained optimization based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences in the input image sequence that contains mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier learned from human-annotated mitotic and nonmitotic sequences to classify each candidate subsequence as a mitosis or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually-segmented mitotic sequences is utilized to reinforce the mitosis classification results, and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh as well as several other competing methods in very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation for the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal location error of 0.73 ± 1.29 frames was achieved for locating daughter cell birth events.

  15. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
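
    As a taste of what the module covers, a random-walk Metropolis sampler for the ability parameter of a simple Rasch item response model; item difficulties, responses, prior, and proposal scale are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Rasch model: P(correct) = logistic(theta - b).  Hypothetical item
# difficulties and one examinee's item responses.
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([1, 1, 1, 0, 0])

def log_post(theta):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    # Bernoulli log-likelihood plus a standard-normal prior on ability.
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) - 0.5 * theta**2

theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.8)          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                              # Metropolis accept step
    chain.append(theta)
post = np.array(chain[5_000:])                    # discard burn-in draws
print(post.mean(), post.std())                    # posterior mean and SD
```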

  16. Neutronics Analysis of Water-Cooled Ceramic Breeder Blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Zhu, Qingjun; Li, Jia; Liu, Songlin

    2016-07-01

    In order to investigate the nuclear response of the water-cooled ceramic breeder (WCCB) blanket models for CFETR, a detailed 3D neutronics model of a 22.5° torus sector was developed based on the integrated geometry of CFETR, including heterogeneous WCCB blanket models, shield, divertor, vacuum vessel, toroidal and poloidal magnets, and ports. Using the Monte Carlo N-Particle Transport Code MCNP5 and the IAEA Fusion Evaluated Nuclear Data Library FENDL2.1, neutronics analyses were performed. The neutron wall loading, tritium breeding ratio (TBR), nuclear heating, neutron-induced atomic displacement damage, and gas production were determined. The results indicate that achieving a global TBR of no less than 1.2 will be a major challenge for the water-cooled ceramic breeder blanket for CFETR. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2013GB108004, 2014GB122000, and 2014GB119000), and National Natural Science Foundation of China (No. 11175207)

  17. Preliminary testing for the Markov property of the fifteen chromatin states of the Broad Histone Track.

    PubMed

    Lee, Kyung-Eun; Park, Hyun-Seok

    2015-01-01

    Epigenetic computational analyses based on Markov chains can integrate dependencies between regions in the genome that are directly adjacent. In this paper, the BED files of fifteen chromatin states of the Broad Histone Track of the ENCODE project are parsed, and comparative nucleotide frequencies of regional chromatin blocks are thoroughly analyzed to detect the Markov property in them. We perform various tests to examine the Markov property embedded in a frequency domain by checking for the presence of the Markov property in the various chromatin states. We apply these tests to each region of the fifteen chromatin states. The results of our simulation indicate that some of the chromatin states possess a stronger Markov property than others. We discuss the significance of our findings in statistical models of nucleotide sequences that are necessary for the computational analysis of functional units in noncoding DNA.

  18. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary aim of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is proposed. The suggested methodology contains four stages: (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic algorithm based codebook generation in vector quantization. The initial populations for the genetic algorithm are created by selecting random code vectors from the training set, and IP-HMM performs the recognition; novelty is introduced through the crossover genetic operation. The proposed speech recognition technique achieves 97.14% accuracy.

  19. Study of Automated Module Fabrication for Lightweight Solar Blanket Utilization

    NASA Technical Reports Server (NTRS)

    Gibson, C. E.

    1979-01-01

    Cost-effective automated techniques, based on existing in-house capability, for fabricating lightweight solar array blankets are described. As a measure of the considered automation, the production of a 50 kilowatt solar array blanket, exclusive of support and deployment structure, within an eight-month fabrication period was used. Solar cells considered for this blanket were 2 x 4 x 0.02 cm wrap-around cells and 2 x 2 x 0.005 cm and 3 x 3 x 0.005 cm standard bar contact thin cells, all with welded contacts. Existing fabrication processes are described, the rationale for each process is discussed, and the potential for further automation is assessed.

  20. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  1. A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Gossman, W. E.

    1986-01-01

    A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure-sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.

  2. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929

  3. Predicting human immunodeficiency virus inhibitors using multi-dimensional Bayesian network classifiers.

    PubMed

    Borchani, Hanen; Bielza, Concha; Toro, Carlos; Larrañaga, Pedro

    2013-03-01

    Our aim is to use multi-dimensional Bayesian network classifiers in order to predict the human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors given an input set of respective resistance mutations that an HIV patient carries. Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models especially designed to solve multi-dimensional classification problems, where each input instance in the data set has to be assigned simultaneously to multiple output class variables that are not necessarily binary. In this paper, we introduce a new method, named MB-MBC, for learning MBCs from data by determining the Markov blanket around each class variable using the HITON algorithm. Our method is applied to both reverse transcriptase and protease data sets obtained from the Stanford HIV-1 database. Regarding the prediction of antiretroviral combination therapies, the experimental study shows promising results in terms of classification accuracy compared with state-of-the-art MBC learning algorithms. For reverse transcriptase inhibitors, we get 71% and 11% in mean and global accuracy, respectively; while for protease inhibitors, we get more than 84% and 31% in mean and global accuracy, respectively. In addition, the analysis of MBC graphical structures lets us gain insight into both known and novel interactions between reverse transcriptase and protease inhibitors and their respective resistance mutations. MB-MBC algorithm is a valuable tool to analyze the HIV-1 reverse transcriptase and protease inhibitors prediction problem and to discover interactions within and between these two classes of inhibitors.

  4. Radiative-conductive inverse problem for lumped parameter systems

    NASA Astrophysics Data System (ADS)

    Alifanov, O. M.; Nenarokomov, A. V.; Gonzalez, V. M.

    2008-11-01

    The purpose of this paper is to introduce an iterative regularization method for the study of the radiative and thermal properties of materials, with applications in the design of thermal control systems (TCS) of spacecraft. In this paper the radiative and thermal properties (emissivity and thermal conductance) of a multilayered thermal-insulating blanket (MLI), a screen-vacuum thermal insulation used in the TCS of prospective spacecraft, are estimated. The properties of the materials under study are determined by processing temperature and heat flux measurement data based on the solution of an inverse heat transfer problem (IHTP). Physical and mathematical models of the heat transfer processes in a specimen of the multilayered thermal-insulating blanket located in the experimental facility are given, and a mathematical formulation of the inverse heat conduction problem is presented. Practical testing was performed on a specimen of the real MLI.

  5. Heating performance of an IC in-blanket ring array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosia, G., E-mail: gbosia@to.infn.it; Ragona, R.

    2015-12-10

    An important factor limiting the use of ICRF as a candidate heating method in a commercial reactor is the evanescence of the fast wave in vacuum and in most of the SOL layer, which imposes proximity of the launching structure to the plasma boundary and causes, at the highest power levels, high RF standing and DC rectified voltages at the plasma periphery, with frequent voltage breakdowns and enhanced local wall loading. In a previous work [1] the concept of an Ion Cyclotron Heating & Current Drive array (and, using a different waveguide technology, a Lower Hybrid array) based on the use of a periodic ring structure, integrated in the reactor blanket first wall and operating at high input power and low power density, was introduced. Based on the above concept, the heating performance of such an array operating on a commercial fusion reactor is estimated.

  6. Analyses of Hubble Space Telescope Aluminized-Teflon Insulation Retrieved After 19 Years of Space Exposure

    NASA Technical Reports Server (NTRS)

    deGroh, Kim K.; Waters, Deborah L.; Mohammed, Jelila S.; Perry, Bruce A.; Banks, Bruce A.

    2012-01-01

    Since its launch in April 1990, the Hubble Space Telescope (HST) has made many important observations from its vantage point in low Earth orbit (LEO). However, as seen during five servicing missions, the outer layer of multilayer insulation (MLI) has become successively more embrittled and has cracked in many areas. In May 2009, during the 5th servicing mission (called SM4), two MLI blankets were replaced with new insulation pieces and the space-exposed MLI blankets were retrieved for degradation analyses by teams at NASA Glenn Research Center (GRC) and NASA Goddard Space Flight Center (GSFC). The MLI blankets were from Equipment Bay 8, which received direct sunlight, and Equipment Bay 5, which received grazing sunlight. Each blanket contained a range of unique regions based on environmental exposure and/or physical appearance. The retrieved MLI blankets' aluminized-Teflon (DuPont) fluorinated ethylene propylene (Al-FEP) outer layers have been analyzed for changes in optical, physical, and mechanical properties, along with space-induced chemical and morphological changes. When compared to pristine material, the analyses have shown how the Al-FEP was severely affected by the space environment. This paper reviews tensile properties, solar absorptance, thermal emittance, x-ray photoelectron spectroscopy (XPS) data and atomic oxygen erosion values of the retrieved HST blankets after 19 years of space exposure.

  7. On the utility of the multi-level algorithm for the solution of nearly completely decomposable Markov chains

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Horton, Graham

    1994-01-01

    Recently the Multi-Level algorithm was introduced as a general purpose solver for the solution of steady state Markov chains. In this paper, we consider the performance of the Multi-Level algorithm for solving Nearly Completely Decomposable (NCD) Markov chains, for which special-purpose iterative aggregation/disaggregation algorithms such as the Koury-McAllister-Stewart (KMS) method have been developed that can exploit the decomposability of the Markov chain. We present experimental results indicating that the general-purpose Multi-Level algorithm is competitive, and can be significantly faster than the special-purpose KMS algorithm when Gauss-Seidel and Gaussian Elimination are used for solving the individual blocks.

  8. Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to electromagnetically sensitive spacecraft. This study employs the multilevel fast multipole method (MLFMM) from a commercial electromagnetic tool, FEKO, to model the fairing electromagnetic environment in the presence of an internal transmitter with improved accuracy over industry-applied techniques. This fairing model includes material properties representative of acoustic blanketing commonly used in vehicles. Equivalent surface material models within FEKO were successfully applied to simulate the test case. Finally, a simplified model is presented using Nicholson-Ross-Weir derived blanket material properties. These properties are implemented with the coated metal option to reduce the model to one layer within the accuracy of the original three-layer simulation.

  9. METHOD AND APPARATUS FOR IMPROVING PERFORMANCE OF A FAST REACTOR

    DOEpatents

    Koch, L.J.

    1959-01-20

    A specific arrangement of the fertile material and fissionable material in the active portion of a fast reactor to achieve improvement in performance and to effectively lower the operating temperatures in the center of the reactor is described. According to this invention a group of fuel elements containing fissionable material are assembled to form a hollow fuel core. Elements containing a fertile material, such as depleted uranium, are inserted into the interior of the fuel core to form a central blanket. Additional elements of fertile material are arranged about the fuel core to form outer blankets which in turn are surrounded by a reflector. This arrangement of fuel core and blankets results in substantial flattening of the flux pattern.

  10. Laser or charged-particle-beam fusion reactor with direct electric generation by magnetic flux compression

    DOEpatents

    Lasche, G.P.

    1983-09-29

    The invention is a laser or particle-beam-driven fusion reactor system which takes maximum advantage of both the very short pulsed nature of the energy release of inertial confinement fusion (ICF) and the very small volumes within which the thermonuclear burn takes place. The pulsed nature of ICF permits dynamic direct energy conversion schemes such as magnetohydrodynamic (MHD) generation and magnetic flux compression; the small volumes permit very compact blanket geometries. By fully exploiting these characteristics of ICF, it is possible to design a fusion reactor with exceptionally high power density, high net electric efficiency, and low neutron-induced radioactivity. The invention includes a compact blanket design and method and apparatus for obtaining energy utilizing the compact blanket.

  11. SAR Image Change Detection Based on Fuzzy Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Huang, G.; Zhao, Z.

    2018-04-01

    Most existing SAR image change detection algorithms consider only single-pixel information from the different images and ignore the spatial dependencies among image pixels, so the change detection results are susceptible to image noise and the detection effect is not ideal. Markov random fields (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at their junctions, making it difficult to clearly distinguish the labels of pixels near the boundaries of the decision regions. In the traditional MRF method, each pixel is given a hard label during iteration; MRF thus makes hard decisions during the process, causing a loss of information. This paper applies a combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.

  12. Markov Chain Monte Carlo: an introduction for epidemiologists

    PubMed Central

    Hamra, Ghassan; MacLehose, Richard; Richardson, David

    2013-01-01

    Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with the interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit. PMID:23569196
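
    A minimal sketch of the point about MCMC matching likelihood-based results: with a flat prior, a random-walk Metropolis sampler for a binomial risk recovers the MLE (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# 40 cases among 200 subjects: the MLE of the risk is simply 40/200.
events, n = 40, 200

def log_post(p):
    # Binomial log-likelihood with a flat prior on (0, 1); under a flat
    # prior the posterior mode coincides with the MLE.
    if not 0.0 < p < 1.0:
        return -np.inf
    return events * np.log(p) + (n - events) * np.log(1.0 - p)

p, draws = 0.5, []
for _ in range(50_000):
    prop = p + rng.normal(scale=0.05)          # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(p):
        p = prop
    draws.append(p)
sample = np.array(draws[10_000:])              # discard burn-in draws
print("MLE:", events / n)
print("MCMC posterior mean:", sample.mean(),
      "95% interval:", np.percentile(sample, [2.5, 97.5]))
```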

  13. Passive acoustic leak detection for sodium cooled fast reactors using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riber Marklund, A.; Kishore, S.; Prakash, V.

    2015-07-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models and c) has an output distribution that simplifies false alarm rate control.

  14. Numerical simulations of piecewise deterministic Markov processes with an application to the stochastic Hodgkin-Huxley model.

    PubMed

    Ding, Shaojie; Qian, Min; Qian, Hong; Zhang, Xuejuan

    2016-12-28

    The stochastic Hodgkin-Huxley model is one of the best-known examples of piecewise deterministic Markov processes (PDMPs), in which the electrical potential across a cell membrane, V(t), is coupled with a mesoscopic Markov jump process representing the stochastic opening and closing of ion channels embedded in the membrane. The rates of the channel kinetics, in turn, are voltage-dependent. Due to this interdependence, an accurate and efficient sampling of the time evolution of the hybrid stochastic systems has been challenging. The current exact simulation methods require solving a voltage-dependent hitting time problem for multiple path-dependent intensity functions with random thresholds. This paper proposes a simulation algorithm that approximates an alternative representation of the exact solution by fitting the log-survival function of the inter-jump dwell time, H(t), with a piecewise linear one. The latter uses interpolation points that are chosen according to the time evolution of the H(t), as the numerical solution to the coupled ordinary differential equations of V(t) and H(t). This computational method can be applied to all PDMPs. Pathwise convergence of the approximated sample trajectories to the exact solution is proven, and error estimates are provided. Comparison with a previous algorithm that is based on piecewise constant approximation is also presented.
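
    Not the paper's log-survival interpolation algorithm, but a crude fixed-step scheme conveys what a PDMP trajectory looks like: deterministic flow between jumps, with a state- and voltage-dependent jump rate. All functions and constants below are hypothetical:

```python
import math
import random

random.seed(3)

# Toy PDMP: between jumps, the voltage V relaxes deterministically toward
# a target set by the channel state; the channel flips at a
# voltage-dependent rate.
def rate(v, state):
    """Hypothetical voltage-dependent jump intensity."""
    return 0.5 + 0.4 * math.tanh(v) if state == 1 else 0.3

def simulate(T=10.0, dt=1e-3):
    v, state, t = -1.0, 0, 0.0
    while t < T:
        target = 1.0 if state == 1 else -1.0
        v += dt * (target - v)                      # deterministic flow
        if random.random() < rate(v, state) * dt:   # jump w.p. ~ rate*dt
            state = 1 - state                       # channel flips
        t += dt
    return v, state

print(simulate())
```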

  15. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation based methods and analytical approximations for computing the fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase advanced sleep non-linear mixed effect model was investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of response being awake or asleep over the night and summed to derive the total FIM (FIM(total)). The reference designs were placebo, 0.1, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs were more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.

  16. Non-LTE line-blanketed model atmospheres of hot stars. 1: Hybrid complete linearization/accelerated lambda iteration method

    NASA Technical Reports Server (NTRS)

    Hubeny, I.; Lanz, T.

    1995-01-01

    A new numerical method for computing non-local thermodynamic equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.

  17. Comparison of statistical algorithms for detecting homogeneous river reaches along a longitudinal continuum

    NASA Astrophysics Data System (ADS)

    Leviandier, Thierry; Alber, A.; Le Ber, F.; Piégay, H.

    2012-02-01

    Seven methods designed to delineate homogeneous river segments, belonging to four families (tests of homogeneity, contrast enhancing, spatially constrained classification, and hidden Markov models), are compared, first on their principles, then on a case study and on theoretical templates. These templates contain patterns found in the case study but not considered in the standard assumptions of statistical methods, such as gradients and curvilinear structures. The influence of data resolution, noise and weak satisfaction of the assumptions underlying the methods is investigated. The control of the number of reaches obtained, needed to achieve meaningful comparisons, is discussed. No method is found that outperforms all the others on all trials. However, the methods with sequential algorithms (keeping at order n + 1 all breakpoints found at order n) fail more often than those running complete optimisation at any order. The Hubert-Kehagias method and hidden Markov models are the most successful at identifying subpatterns encapsulated within the templates. Ergodic hidden Markov models are, moreover, liable to exhibit transition areas.

  18. Diagonal couplings of quantum Markov chains

    NASA Astrophysics Data System (ADS)

    Kümmerer, Burkhard; Schwieger, Kay

    2016-05-01

    In this paper we extend the coupling method from classical probability theory to quantum Markov chains on atomic von Neumann algebras. In particular, we establish a coupling inequality, which allows us to estimate convergence rates by analyzing couplings. For a given tensor dilation we construct a self-coupling of a Markov operator. It turns out that the coupling is a dual version of the extended dual transition operator studied by Gohm et al. We deduce that this coupling is successful if and only if the dilation is asymptotically complete.

  19. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
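
    Both quantities are easy to compute for a small chain, which makes the link concrete: the KSE follows from the stationary distribution, and the spectral gap (whose inverse sets the relaxation, i.e. mixing, time scale) follows from the eigenvalues. A sketch:

```python
import numpy as np

def kse(P):
    """Kolmogorov-Sinai entropy of an ergodic chain with transition
    matrix P: h = -sum_i pi_i sum_j P_ij log P_ij."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()                       # stationary distribution
    logP = np.where(P > 0, np.log(P), 0.0)   # convention: 0 * log 0 = 0
    return -np.sum(pi[:, None] * P * logP)

def spectral_gap(P):
    """1 - |lambda_2|; a larger gap means faster mixing."""
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - mags[1]

P_fast = np.array([[0.5, 0.5], [0.5, 0.5]])   # mixes in a single step
P_slow = np.array([[0.9, 0.1], [0.1, 0.9]])   # slow, sticky chain
for P in (P_fast, P_slow):
    print(f"KSE = {kse(P):.3f}, spectral gap = {spectral_gap(P):.3f}")
```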

  20. A Gibbs sampler for Bayesian analysis of site-occupancy data

    USGS Publications Warehouse

    Dorazio, Robert M.; Rodriguez, Daniel Taylor

    2012-01-01

    1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
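
    The probit-regression Gibbs step at the heart of such samplers is the Albert and Chib (1993) data-augmentation scheme. A minimal sketch for plain probit regression on simulated detection data, with the occupancy-specific details omitted:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(11)

# Simulated detection/non-detection data with one covariate.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

V = np.linalg.inv(X.T @ X)        # posterior covariance under a flat prior
beta = np.zeros(2)
draws = []
for it in range(2_000):
    # Step 1: latent variables z_i ~ N(x_i' beta, 1), truncated by y_i
    # (positive when y_i = 1, negative when y_i = 0).
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Step 2: coefficients beta | z ~ N(V X'z, V).
    beta = rng.multivariate_normal(V @ X.T @ z, V)
    draws.append(beta)

print(np.mean(draws[500:], axis=0))  # should be near beta_true
```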

  1. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs the Bayesian estimation method in the test cumulative distribution function estimation. It was reported in a previous study conducted on the Aurora-4 task that the proposed approach provided substantial performance gains in speech recognition systems based on the acoustic modeling of the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM), the current mainstream speech recognition approach where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
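
    For contrast with the proposed Bayesian estimate, a minimal sketch of conventional histogram equalization, mapping test features through the reference quantile function; the quantile count and data below are arbitrary:

```python
import numpy as np

def histogram_equalize(test_feat, ref_feat, n_quantiles=100):
    """Map test features so their empirical CDF matches a reference
    (e.g. clean training) distribution: x -> F_ref^{-1}(F_test(x))."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    test_q = np.quantile(test_feat, qs)   # empirical quantiles of test data
    ref_q = np.quantile(ref_feat, qs)     # target (reference) quantiles
    return np.interp(test_feat, test_q, ref_q)

# Noisy test features: shifted and scaled relative to the reference.
ref = np.random.default_rng(0).normal(0.0, 1.0, 5_000)
test = np.random.default_rng(1).normal(2.0, 3.0, 400)
eq = histogram_equalize(test, ref)
print(eq.mean(), eq.std())   # approximately 0 and 1 after equalization
```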

  2. Probability distributions for Markov chain based quantum walks

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.

    2018-01-01

    We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.

  3. Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection

    DTIC Science & Technology

    2010-05-12

    for matching biological spectra across a database of hyperspectral pathology slides acquired with different instruments in different conditions, as... generalizing wavelets and similar scaling mechanisms. To be specific, let the bi-Markov... remarkably well. Conventional nearest neighbor search, compared with a diffusion search. The data is a pathology slide; each pixel is a digital

  4. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on its initial conditions, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this choice is complex and influences the whole analysis, a Multi-Response Permutation Procedure (MRPP; Mielke et al., 1981) was added; it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the performance of GAMM, applied as a break-detection method in the field of climate time series homogenization, is shown; a minimal sketch of the segmentation step appears below. 1. G. Celeux and J.B. Durand, Comput Stat 2008. 2. A. Kehagias, Stoch Envir Res 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev 1981.
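
    The segmentation step can be pictured with a minimal left-to-right Gaussian HMM decoded by the Viterbi algorithm. The GA-driven Baum-Welch estimation, cross-validation and MRPP test of the paper are omitted; parameters are assumed known here, and all names and data are hypothetical.

```python
import numpy as np

def viterbi_left_to_right(x, means, sds, stay=0.95):
    """Most likely state path for a left-to-right Gaussian HMM (states may
    only persist or advance by one), the structure used to cut a series
    into ordered homogeneous segments."""
    means, sds = np.asarray(means, float), np.asarray(sds, float)
    K, T = len(means), len(x)
    logem = (-0.5 * ((x[None, :] - means[:, None]) / sds[:, None]) ** 2
             - np.log(sds)[:, None])                  # log emission densities
    logA = np.full((K, K), -np.inf)                   # left-to-right transitions
    for k in range(K):
        logA[k, k] = np.log(stay)
        if k + 1 < K:
            logA[k, k + 1] = np.log(1.0 - stay)
    delta = np.full((T, K), -np.inf)
    delta[0, 0] = logem[0, 0]                         # must start in state 0
    psi = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        score = delta[t - 1][:, None] + logA
        psi[t] = score.argmax(axis=0)
        delta[t] = score.max(axis=0) + logem[:, t]
    path = [K - 1]                                    # must end in the last state
    for t in range(T - 1, 0, -1):
        path.append(psi[t, path[-1]])
    return np.asarray(path[::-1])

# Toy series with two change points; parameters assumed known here.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 80), rng.normal(1, 1, 120)])
states = viterbi_left_to_right(x, means=[0.0, 3.0, 1.0], sds=[1.0, 1.0, 1.0])
print("estimated change points:", np.where(np.diff(states) != 0)[0])
```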

  5. SEAL Studies of Variant Blanket Concepts and Materials

    NASA Astrophysics Data System (ADS)

    Cook, I.; Taylor, N. P.; Forty, C. B. A.; Han, W. E.

    1997-09-01

    Within the European SEAL (Safety and Environmental Assessment of Fusion Power, Long-term) program, safety and environmental assessments have been performed which extend the results of the earlier SEAFP (Safety and Environmental Assessment of Fusion Power) program to a wider range of blanket designs and material choices. The four blanket designs analysed were those which had been developed within the Blanket program of the European Fusion Programme. All four are based on martensitic steel as structural material, and otherwise may be summarized as: water-cooled lithium-lead; dual-cooled lithium-lead; helium-cooled lithium silicate (BOT geometry); helium-cooled lithium aluminate (or zirconate) (BIT geometry). The results reveal that all the blankets show the favorable S&E characteristics of fusion, though there are interesting and significant differences between them. The key results are described. Assessments have also been performed of a wider range of materials than was considered in SEAFP. These were: an alternative vanadium alloy, an alternative low-activation martensitic steel, titanium-aluminum intermetallic, and SiC composite. Assessed impurities were included in the compositions, and these had very important effects upon some of the results. Key results impacting upon accident characteristics, recycling, and waste management are described.

  6. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  7. Image segmentation using hidden Markov Gauss mixture models.

    PubMed

    Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M

    2007-07-01

    Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.

  8. Surgical gesture segmentation and recognition.

    PubMed

    Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René

    2013-01-01

    Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.

  9. [Prediction method of rural landscape pattern evolution based on life cycle: a case study of Jinjing Town, Hunan Province, China].

    PubMed

    Ji, Xiang; Liu, Li-Ming; Li, Hong-Qing

    2014-11-01

    Taking Jinjing Town in the Dongting Lake area as a case, this paper analyzed the evolution of rural landscape patterns by means of life cycle theory, simulated the evolution cycle curve, and calculated its evolution period; then, combined with a CA-Markov model, a complete prediction model was built based on the rules of rural landscape change. The results showed that the rural settlement and paddy landscapes of Jinjing Town would change most by 2020, with the rural settlement landscape increasing to 1194.01 hm2 and the paddy landscape greatly decreasing to 3090.24 hm2. The quantitative and spatial prediction accuracies of the model were up to 99.3% and 96.4%, respectively, which is more precise than the single CA-Markov model. The prediction model of rural landscape pattern change proposed in this paper should be helpful for rural landscape planning in the future.

  10. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridge between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of the tested matrices, typically 80% to 90%, and for those an estimate of the (non-homogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, a discussion of possible applications of our framework to problems in different fields is briefly addressed.
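
    The classical time-homogeneous version of the embedding test can be sketched as follows: P embeds a continuous-time chain observed at unit time intervals only if its principal matrix logarithm is a valid generator. The paper's necessary-and-sufficient time-inhomogeneous conditions go beyond this; the sketch below is for orientation only.

```python
import numpy as np
from scipy.linalg import expm, logm

def generator_candidate(P, tol=1e-10):
    """Principal matrix logarithm of a transition matrix. P embeds a
    time-homogeneous continuous chain (observed at t = 1) only if
    Q = logm(P) has non-negative off-diagonals and zero row sums."""
    Q = np.real(logm(P))
    off = Q - np.diag(np.diag(Q))
    ok = bool(np.all(off >= -tol) and np.allclose(Q.sum(axis=1), 0.0, atol=tol))
    return Q, ok

P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
Q, ok = generator_candidate(P)
print("embeddable:", ok, "| expm(Q) recovers P:", np.allclose(expm(Q), P))
```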

  11. SMURFLite: combining simplified Markov random fields with simulated evolution improves remote homology detection for beta-structural proteins into the twilight zone.

    PubMed

    Daniels, Noah M; Hosur, Raghavendra; Berger, Bonnie; Cowen, Lenore J

    2012-05-01

    One of the most successful methods to date for recognizing protein sequences that are evolutionarily related has been profile hidden Markov models (HMMs). However, these models do not capture pairwise statistical preferences of residues that are hydrogen bonded in beta sheets. These dependencies have been partially captured in the HMM setting by simulated evolution in the training phase and can be fully captured by Markov random fields (MRFs). However, the MRFs can be computationally prohibitive when beta strands are interleaved in complex topologies. We introduce SMURFLite, a method that combines both simplified MRFs and simulated evolution to substantially improve remote homology detection for beta structures. Unlike previous MRF-based methods, SMURFLite is computationally feasible on any beta-structural motif. We test SMURFLite on all propeller and barrel folds in the mainly-beta class of the SCOP hierarchy in stringent cross-validation experiments. We show a mean 26% (median 16%) improvement in area under curve (AUC) for beta-structural motif recognition as compared with HMMER (a well-known HMM method) and a mean 33% (median 19%) improvement as compared with RAPTOR (a well-known threading method) and even a mean 18% (median 10%) improvement in AUC over HHpred (a profile-profile HMM method), despite HHpred's use of extensive additional training data. We demonstrate SMURFLite's ability to scale to whole genomes by running a SMURFLite library of 207 beta-structural SCOP superfamilies against the entire genome of Thermotoga maritima, and make over 100 new fold predictions. Availability and implementation: A webserver that runs SMURFLite is available at: http://smurf.cs.tufts.edu/smurflite/

  12. Combined state and parameter identification of nonlinear structural dynamical systems based on Rao-Blackwellization and Markov chain Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Abhinav, S.; Manohar, C. S.

    2018-03-01

    The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers in situations involving large amounts of dynamic measurement data and/or low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: first, in the state estimation problem within the particle filtering step, and second, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.

  13. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles, and the Markov clustering algorithms, are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods calls for an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity to enhance sensitivity in detecting homology independent of “recent” paralogs. The spectral clustering approach with the new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore, the proposed spectral clustering framework over combined local alignment kernels with the modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439

  14. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    PubMed

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles, and the Markov clustering algorithms, are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods calls for an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity to enhance sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with the new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore, the proposed spectral clustering framework over combined local alignment kernels with the modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. sarkar@labri.fr.

  15. Clinical considerations in the use of forced-air warming blankets during orthognathic surgery to avoid postanesthetic shivering

    PubMed Central

    Park, Fiona Daye; Park, Sookyung; Chi, Seong-In; Kim, Hyun Jeong; Kim, Hye-Jung; Han, Jin-Hee; Han, Hee-Jeong; Lee, Eun-Hee

    2015-01-01

    Background During head and neck surgery including orthognathic surgery, mild intraoperative hypothermia occurs frequently. Hypothermia is associated with postanesthetic shivering, which may increase the risk of other postoperative complications. To improve intraoperative thermoregulation, devices such as forced-air warming blankets can be applied. This study aimed to evaluate the effect of supplemental forced-air warming blankets in preventing postanesthetic shivering. Methods This retrospective study included 113 patients who underwent orthognathic surgery between March and September 2015. According to the active warming method utilized during surgery, patients were divided into two groups: Group W (n = 55), circulating-water mattress; and Group F (n = 58), circulating-water mattress and forced-air warming blanket. Surgical notes and anesthesia and recovery room records were evaluated. Results Initial axillary temperatures did not significantly differ between groups (Group W = 35.9 ± 0.7℃, Group F = 35.8 ± 0.6℃). However, at the end of surgery, the temperatures in Group W were significantly lower than those in Group F (35.2 ± 0.5℃ and 36.2 ± 0.5℃, respectively, P = 0.04). The average body temperatures in Groups W and F were, respectively, 35.9 ± 0.5℃ and 36.2 ± 0.5℃ (P = 0.0001). In Group W, 24 patients (43.6%) experienced postanesthetic shivering, while in Group F, only 12 (20.7%) patients required treatment for postanesthetic shivering (P = 0.009, odds ratio = 0.333, 95% confidence interval: 0.147–0.772). Conclusions Additional use of forced-air warming blankets in orthognathic surgery was superior in maintaining normothermia and reduced the incidence of postanesthetic shivering. PMID:28879279

  16. Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability.

    PubMed

    Lihe Zhang; Jianwu Ai; Bowen Jiang; Huchuan Lu; Xiukui Li

    2018-02-01

    In this paper, we propose a bottom-up saliency model based on an absorbing Markov chain (AMC). First, a sparsely connected graph is constructed to capture the local context information of each node. All image boundary nodes and the other nodes are, respectively, treated as the absorbing nodes and transient nodes in the absorbing Markov chain. Then, the expected number of steps taken from each transient node before absorption (the absorbed time) can be used to represent the saliency value of this node. The absorbed time depends on the weights on the path and their spatial coordinates, which are completely encoded in the transition probability matrix. Considering the importance of this matrix, we adopt different hierarchies of deep features extracted from fully convolutional networks and learn a transition probability matrix, which is called the learnt transition probability matrix. Although performance is significantly improved, salient objects are still not uniformly highlighted. To solve this problem, an angular embedding technique is investigated to refine the saliency results. Based on pairwise local orderings, which are produced by the saliency maps of AMC and boundary maps, we rearrange the global orderings (saliency values) of all nodes. Extensive experiments demonstrate that the proposed algorithm outperforms the state-of-the-art methods on six publicly available benchmark data sets.
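
    The absorbed-time computation at the core of AMC saliency reduces to the fundamental matrix of an absorbing chain, as in the following sketch (toy numbers; the paper learns the transition matrix from deep features).

```python
import numpy as np

def absorbed_time(Q):
    """Expected number of steps before absorption from each transient node:
    N = (I - Q)^-1 is the fundamental matrix, and N @ 1 the absorbed time."""
    N = np.linalg.inv(np.eye(len(Q)) - Q)
    return N @ np.ones(len(Q))

# Toy transient-to-transient block Q for 4 (superpixel) nodes; each row sums
# to less than 1, the deficit being one-step absorption at the boundary.
Q = np.array([[0.0, 0.5, 0.2, 0.1],
              [0.4, 0.0, 0.3, 0.1],
              [0.2, 0.3, 0.0, 0.3],
              [0.1, 0.1, 0.3, 0.0]])
print(absorbed_time(Q))   # larger = farther from the boundary = more salient
```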

  17. Colonoscopy video quality assessment using hidden Markov random fields

    NASA Astrophysics Data System (ADS)

    Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.

  18. Application of Markov Models for Analysis of Development of Psychological Characteristics

    ERIC Educational Resources Information Center

    Kuravsky, Lev S.; Malykh, Sergey B.

    2004-01-01

    A technique to study the combined influence of environmental and genetic factors on the basis of changes in phenotype distributions is presented. Histograms are used as the basic characteristics analyzed. A continuous-time, discrete-state Markov process with piece-wise constant interstate transition rates is associated with the evolution of each histogram.…

  19. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the... of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  20. Analyses of Hubble Space Telescope Aluminized-Teflon Multilayer Insulation Blankets Retrieved After 19 Years of Space Exposure

    NASA Technical Reports Server (NTRS)

    de Groh, Kim K.; Perry, Bruce A.; Mohammed, Jelila S.; Banks, Bruce

    2015-01-01

    Since its launch in April 1990, the Hubble Space Telescope (HST) has made many important observations from its vantage point in low Earth orbit (LEO). However, as seen during five servicing missions, the outer layer of multilayer insulation (MLI) has become increasingly embrittled and has cracked in many areas. In May 2009, during the 5th servicing mission (called SM4), two MLI blankets were replaced with new insulation and the space-exposed MLI blankets were retrieved for degradation analyses by teams at NASA Glenn Research Center (GRC) and NASA Goddard Space Flight Center (GSFC). The retrieved MLI blankets were from Equipment Bay 8, which received direct sunlight, and Equipment Bay 5, which received grazing sunlight. Each blanket was divided into several regions based on environmental exposure and/or physical appearance. The aluminized-Teflon (DuPont, Wilmington, DE) fluorinated ethylene propylene (Al-FEP) outer layers of the retrieved MLI blankets have been analyzed for changes in optical, physical, and mechanical properties, along with chemical and morphological changes. Pristine and as-retrieved samples (materials) were heat treated to help understand degradation mechanisms. When compared to pristine material, the analyses have shown how the Al-FEP was severely affected by the space environment. Most notably, the Al-FEP was highly embrittled, fracturing like glass at strains of 1 to 8 percent. Across all measured properties, more significant degradation was observed for Bay 8 material as compared to Bay 5 material. This paper reviews the tensile and bend-test properties, density, thickness, solar absorptance, thermal emittance, x-ray photoelectron spectroscopy (XPS) and energy dispersive spectroscopy (EDS) elemental composition measurements, surface and crack morphologies, and atomic oxygen erosion yields of the Al-FEP outer layer of the retrieved HST blankets after 19 years of space exposure.

  1. Neutronics Comparison Analysis of the Water Cooled Ceramics Breeding Blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Li, Jia; Zhang, Xiaokang; Gao, Fangfang; Pu, Yong

    2016-02-01

    China Fusion Engineering Test Reactor (CFETR) is an ITER-like fusion engineering test reactor that is intended to fill the scientific and technical gaps between ITER and DEMO. One of the main missions of CFETR is to achieve a tritium breeding ratio of no less than 1.2 to ensure tritium self-sufficiency. A concept design for a water cooled ceramics breeding blanket (WCCB) is presented based on a scheme with the breeder and the multiplier located in separate panels for CFETR. Based on this concept, a one-dimensional (1D) radial-build breeding blanket was first designed, and then several three-dimensional models were developed with various neutron source definitions and breeding blanket module arrangements based on the 1D radial build. A set of nuclear analyses has been carried out to compare the differences in neutronics characteristics given by the different calculation models, addressing neutron wall loading (NWL), tritium breeding ratio (TBR), fast neutron flux on the inboard side and nuclear heating deposition on the main in-vessel components. The impact of the differences in modeling on the nuclear performance has been analyzed and summarized regarding the WCCB concept design. supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy (Nos. 2013GB108004, 2014GB122000, and 2014GB119000), and National Natural Science Foundation of China (No. 11175207)

  2. Temperature scaling method for Markov chains.

    PubMed

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
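
    The underlying idea of reusing one chain at several temperatures can be illustrated with standard Boltzmann reweighting of a sampled energy trace. This is a generic sketch under our own assumptions (units, names, data), not the TeS procedure itself.

```python
import numpy as np

K_B = 0.0019872041   # kcal/(mol K); assumes energies are in kcal/mol

def reweighted_average(energies, observable, T_sim, T_new):
    """Estimate an ensemble average at T_new from a chain sampled at T_sim
    by Boltzmann reweighting of the sampled energies."""
    dbeta = 1.0 / (K_B * T_new) - 1.0 / (K_B * T_sim)
    w = np.exp(-dbeta * (energies - energies.min()))   # shifted for stability
    return float(np.sum(w * observable) / np.sum(w))

rng = np.random.default_rng(3)
E = rng.normal(5.0, 1.0, 10_000)    # hypothetical energies sampled at 300 K
print(reweighted_average(E, E, T_sim=300.0, T_new=280.0))   # <E> at 280 K
```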

  3. Modelling Risk to US Military Populations from Stopping Blanket Mandatory Polio Vaccination.

    PubMed

    Burgess, Colleen; Burgess, Andrew; McMullen, Kellie

    2017-01-01

    Transmission of polio poses a threat to military forces when deploying to regions where such viruses are endemic. US-born soldiers generally enter service with immunity resulting from childhood immunization against polio; moreover, new recruits are routinely vaccinated with inactivated poliovirus vaccine (IPV), supplemented based upon deployment circumstances. Given residual protection from childhood vaccination, risk-based vaccination may sufficiently protect troops from polio transmission. This analysis employed a mathematical system for polio transmission within military populations interacting with locals in a polio-endemic region to evaluate changes in vaccination policy. Removal of blanket immunization had no effect on simulated polio incidence among deployed military populations when risk-based immunization was employed; however, when these individuals reintegrated with their base populations, risk of transmission to nondeployed personnel increased by 19%. In the absence of both blanket- and risk-based immunization, transmission to nondeployed populations increased by 25%. The overall number of new infections among nondeployed populations was negligible for both scenarios due to high childhood immunization rates, partial protection against transmission conferred by IPV, and low global disease incidence levels. Risk-based immunization driven by deployment to polio-endemic regions is sufficient to prevent transmission among both deployed and nondeployed US military populations.

  4. Hidden Markov Model-Based CNV Detection Algorithms for Illumina Genotyping Microarrays.

    PubMed

    Seiser, Eric L; Innocenti, Federico

    2014-01-01

    Somatic alterations in DNA copy number have been well studied in numerous malignancies, yet the role of germline DNA copy number variation in cancer is still emerging. Genotyping microarrays generate allele-specific signal intensities to determine genotype, but may also be used to infer DNA copy number using additional computational approaches. Numerous tools have been developed to analyze Illumina genotype microarray data for copy number variant (CNV) discovery, although commonly utilized algorithms freely available to the public employ approaches based upon the use of hidden Markov models (HMMs). QuantiSNP, PennCNV, and GenoCN utilize HMMs with six copy number states but vary in how transition and emission probabilities are calculated. Performance of these CNV detection algorithms has been shown to be variable between both genotyping platforms and data sets, although HMM approaches generally outperform other current methods. Low sensitivity is prevalent with HMM-based algorithms, suggesting the need for continued improvement in CNV detection methodologies.

  5. Modeling haplotype block variation using Markov chains.

    PubMed

    Greenspan, G; Geiger, D

    2006-04-01

    Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity.

  6. Markov chains for testing redundant software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1988-01-01

    A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
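
    The estimation step can be sketched as follows: count observed transitions between error states and attach confidence intervals to each estimated probability. Crude Wald intervals are used here for brevity; the paper's reliability model built on top of these estimates is omitted, and all names and data are hypothetical.

```python
import numpy as np

def transition_estimates(seq, n_states, z=1.96):
    """Transition-probability estimates from an observed state sequence,
    with crude Wald confidence intervals per entry."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    n = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    p = counts / n
    half = z * np.sqrt(p * (1.0 - p) / n)
    return p, np.clip(p - half, 0, 1), np.clip(p + half, 0, 1)

# Toy error-state sequence observed during simulated process-control testing.
rng = np.random.default_rng(4)
true_P = np.array([[0.95, 0.05], [0.30, 0.70]])
seq = [0]
for _ in range(5000):
    seq.append(rng.choice(2, p=true_P[seq[-1]]))
p, lo, hi = transition_estimates(np.asarray(seq), 2)
print(np.round(p, 3), np.round(lo, 3), np.round(hi, 3), sep="\n")
```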

  7. Bayesian analysis of non-homogeneous Markov chains: application to mental health data.

    PubMed

    Sung, Minje; Soyer, Refik; Nhan, Nguyen

    2007-07-10

    In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analyses of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study.

  8. Modeling Haplotype Block Variation Using Markov Chains

    PubMed Central

    Greenspan, G.; Geiger, D.

    2006-01-01

    Models of background variation in genomic regions form the basis of linkage disequilibrium mapping methods. In this work we analyze a background model that groups SNPs into haplotype blocks and represents the dependencies between blocks by a Markov chain. We develop an error measure to compare the performance of this model against the common model that assumes that blocks are independent. By examining data from the International Haplotype Mapping project, we show how the Markov model over haplotype blocks is most accurate when representing blocks in strong linkage disequilibrium. This contrasts with the independent model, which is rendered less accurate by linkage disequilibrium. We provide a theoretical explanation for this surprising property of the Markov model and relate its behavior to allele diversity. PMID:16361244

  9. 75 FR 50991 - Antidumping Duty Order: Certain Woven Electric Blankets From the People's Republic of China

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ...: Certain Woven Electric Blankets From the People's Republic of China AGENCY: Import Administration... electric blankets (``woven electric blankets'') from the People's Republic of China (``PRC''). FOR FURTHER... Certain Woven Electric Blankets From the People's Republic of China: Final Determination of Sales at Less...

  10. Techniques for modeling the reliability of fault-tolerant systems with the Markov state-space approach

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1995-01-01

    This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
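
    A minimal example of the state-space approach: a hypothetical four-state fault-tolerant configuration whose generator matrix is exponentiated to give state-occupancy probabilities at mission time. The rates are invented for illustration; the SURE, ASSIST, STEM, and PAWS tools named above solve far richer models.

```python
import numpy as np
from scipy.linalg import expm

# Minimal state-space reliability model in the spirit of the tutorial
# (states and rates are hypothetical, not taken from the paper):
#   0: three good units           1: fault present, reconfiguring
#   2: two good units (degraded)  3: system failure (absorbing)
lam, mu = 1e-4, 3.6e3            # per-hour unit failure and recovery rates
Q = np.array([
    [-3 * lam,         3 * lam,   0.0,      0.0    ],
    [ 0.0,   -(mu + 2 * lam),     mu,       2 * lam],
    [ 0.0,             0.0,      -2 * lam,  2 * lam],
    [ 0.0,             0.0,       0.0,      0.0    ]])

t = 10.0                          # mission time, hours
P = expm(Q * t)                   # state-occupancy probabilities at time t
print("P(system failure by t) =", P[0, 3])
```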

  11. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating and predicting water resources.

  12. Comparison of Methods of Detection of Exceptional Sequences in Prokaryotic Genomes.

    PubMed

    Rusinov, I S; Ershova, A S; Karyagina, A S; Spirin, S A; Alexeevski, A V

    2018-02-01

    Many proteins need to recognize specific DNA sequences to function. The number of recognition sites and their distribution along the DNA might be of biological importance. For example, the number of restriction sites is often reduced in prokaryotic and phage genomes to decrease the probability of DNA cleavage by restriction endonucleases. We call a sequence exceptional if its frequency in a genome significantly differs from that predicted by some mathematical model. An exceptional sequence can be either under- or over-represented, depending on its frequency in comparison with the predicted one. Exceptional sequences can be considered biologically meaningful, for example, as targets of DNA-binding proteins or as parts of abundant repetitive elements. Several methods are used to predict the frequency of a short sequence in a genome based on the actual frequencies of certain of its subsequences. The most popular are methods based on Markov chain models. However, no rigorous comparison of the methods has previously been performed. We compared three methods for the prediction of short sequence frequencies: the maximum-order Markov chain model-based method, the method that uses the geometric mean of extended Markovian estimates, and the method that utilizes the frequencies of all subsequences, including discontiguous ones. We applied them to restriction sites in the complete genomes of 2500 prokaryotic species and demonstrated that the results depend greatly on the method used: lists of the 5% most under-represented sites differed by up to 50%. The method designed by Burge and coauthors in 1992, which utilizes all subsequences of the sequence, showed higher precision than the other two methods both on prokaryotic genomes and on randomly generated sequences after computational imitation of selective pressure. We propose this method as the first choice for the detection of exceptional sequences in prokaryotic genomes.
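
    The maximum-order Markov chain estimate mentioned above has a closed form: the expected count of a k-mer equals the product of the counts of its two (k-1)-mers divided by the count of the shared (k-2)-mer. A small sketch with naive single-strand counting and hypothetical names:

```python
import random

def count(genome, s):
    """Number of (overlapping) occurrences of s in genome."""
    return sum(1 for i in range(len(genome) - len(s) + 1) if genome[i:i + len(s)] == s)

def expected_count(genome, word):
    """Maximum-order Markov estimate of a word's frequency:
    E[N(w)] = N(prefix) * N(suffix) / N(overlap)."""
    n_mid = count(genome, word[1:-1])
    return count(genome, word[:-1]) * count(genome, word[1:]) / n_mid if n_mid else 0.0

# Toy check on a random "genome": the observed/expected contrast should be near 1,
# while a real genome with a contrast well below 1 has an under-represented site.
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(200_000))
site = "GAATTC"   # EcoRI recognition site, as an example
print(count(genome, site) / expected_count(genome, site))
```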

  13. Can discrete event simulation be of use in modelling major depression?

    PubMed Central

    Le Lay, Agathe; Despiegel, Nicolas; François, Clément; Duru, Gérard

    2006-01-01

    Background Depression is among the major contributors to worldwide disease burden and adequate modelling requires a framework designed to depict real world disease progression as well as its economic implications as closely as possible. Objectives In light of the specific characteristics associated with depression (multiple episodes at varying intervals, impact of disease history on course of illness, sociodemographic factors), our aim was to clarify to what extent "Discrete Event Simulation" (DES) models provide methodological benefits in depicting disease evolution. Methods We conducted a comprehensive review of published Markov models in depression and identified potential limits to their methodology. A model based on DES principles was developed to investigate the benefits and drawbacks of this simulation method compared with Markov modelling techniques. Results The major drawback to Markov models is that they may not be suitable to tracking patients' disease history properly, unless the analyst defines multiple health states, which may lead to intractable situations. They are also too rigid to take into consideration multiple patient-specific sociodemographic characteristics in a single model. To do so would also require defining multiple health states which would render the analysis entirely too complex. We show that DES resolves these weaknesses and that its flexibility allows patients with differing attributes to move from one event to another in sequential order while simultaneously taking into account important risk factors such as age, gender, disease history and patients' attitude towards treatment, together with any disease-related events (adverse events, suicide attempts, etc.). Conclusion DES modelling appears to be an accurate, flexible and comprehensive means of depicting disease progression compared with conventional simulation methodologies. Its use in analysing recurrent and chronic diseases appears particularly useful compared with Markov processes. PMID:17147790

  14. Cryogenic Testing of Different Seam Concepts for Multilayer Insulation Systems

    NASA Technical Reports Server (NTRS)

    Johnson, Wesley L.; Fesmire, J. E.

    2009-01-01

    Recent testing in a cylindrical, comparative cryostat at the Cryogenics Test Laboratory has focused on various seam concepts for multilayer insulation systems. Three main types of seams were investigated: straight overlap, fold-over, and roll wrapped. Each blanket comprised 40 layer pairs of reflector and spacer materials. The total thickness was approximately 12.5 mm, giving an average layer density of 32 layers per centimeter. The blankets were tested at high vacuum, soft vacuum, and no vacuum using liquid nitrogen to maintain the cold boundary temperature at 77 K. Test results show that the three seam concepts are close in thermal performance; however, the fold-over method provides the lowest heat flux. For the first series of tests, each seam was located 120 degrees around the circumference of the cryostat from the previous seam. This technique appears to have lessened the degradation of the blanket due to the seams. In a follow-on test, a 20 layer blanket was tested in a roll wrapped configuration and then cut down the side of the cylinder, taped together, and re-tested. This test result shows the thermal performance impact of having the seams all in one location versus having the seams clocked around the vessel. This experimental investigation indicates that the method of joining the seams in multilayer insulation systems is not as critical as the quality of the installation process.

  15. Design, optimization, and analysis of a self-deploying PV tent array

    NASA Astrophysics Data System (ADS)

    Collozza, Anthony J.

    1991-06-01

    A tent-shaped PV array was designed and optimized for maximum specific power. In order to minimize output power variation, a tent angle of 60 deg was chosen. Based on the chosen tent angle, an array structure was designed. The design considerations were minimal deployment time, high reliability, and small stowage volume. To meet these considerations, the array was designed to be self-deployable, to form a compact storage configuration, and to use a passive pressurized-gas deployment mechanism. Each structural component of the design was analyzed to determine the size necessary to withstand the various forces to which it would be subjected. Through this analysis the component weights were determined. An optimization was performed to determine the array dimensions and blanket geometry which produce the maximum specific power for a given PV blanket. This optimization was performed for both lunar and Martian environmental conditions. Other factors such as PV blanket type, structural material, and wind velocity (for the Mars array) were varied to determine what influence they had on the design point. The performance specifications for the array at both locations and with each type of PV blanket were determined. These specifications were calculated using an aramid fiber composite as the structural material. The four PV blanket types considered were silicon, GaAs/Ge, GaAs CLEFT, and amorphous silicon. The specifications used for each blanket represented either present day or near term technology. For both the Moon and Mars, the amorphous silicon arrays produced the highest specific power.

  16. Free energies from dynamic weighted histogram analysis using unbiased Markov state model.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2015-01-13

    The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
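
    For a single unbiased trajectory the DHAM output reduces to familiar Markov-state-model steps: estimate a lag-1 transition matrix on a discretized coordinate, take its stationary distribution pi, and set F = -kT ln pi. The joint unbiasing of multiple umbrella windows, which is the method's actual contribution, is omitted from this sketch, and all names and data are hypothetical.

```python
import numpy as np

def markov_free_energy(bins_traj, n_bins, kT=1.0):
    """Free energy from a discretised trajectory: lag-1 Markov transition
    matrix -> stationary distribution pi -> F = -kT ln(pi)."""
    C = np.zeros((n_bins, n_bins))
    for a, b in zip(bins_traj[:-1], bins_traj[1:]):
        C[a, b] += 1.0
    T = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)
    w, v = np.linalg.eig(T.T)
    pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    pi /= pi.sum()
    return -kT * np.log(np.maximum(pi, 1e-12))

# Toy data: overdamped dynamics in the double well U(x) = x^4 - 2 x^2 at kT = 1.
rng = np.random.default_rng(5)
x, xs, dt = 0.0, [], 0.01
for _ in range(200_000):
    x += -(4 * x**3 - 4 * x) * dt + np.sqrt(2 * dt) * rng.normal()
    xs.append(x)
bins = np.clip(((np.array(xs) + 2.0) / 4.0 * 30).astype(int), 0, 29)
F = markov_free_energy(bins, 30)
print(np.round(F - F.min(), 2))   # two wells separated by a barrier
```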

  17. Parsing Social Network Survey Data from Hidden Populations Using Stochastic Context-Free Grammars

    PubMed Central

    Poon, Art F. Y.; Brouwer, Kimberly C.; Strathdee, Steffanie A.; Firestone-Cruz, Michelle; Lozada, Remedios M.; Kosakovsky Pond, Sergei L.; Heckathorn, Douglas D.; Frost, Simon D. W.

    2009-01-01

    Background Human populations are structured by social networks, in which individuals tend to form relationships based on shared attributes. Certain attributes that are ambiguous, stigmatized or illegal can create a 'hidden' population, so-called because its members are difficult to identify. Many hidden populations are also at an elevated risk of exposure to infectious diseases. Consequently, public health agencies are presently adopting modern survey techniques that traverse social networks in hidden populations by soliciting individuals to recruit their peers, e.g., respondent-driven sampling (RDS). The concomitant accumulation of network-based epidemiological data, however, is rapidly outpacing the development of computational methods for analysis. Moreover, current analytical models rely on unrealistic assumptions, e.g., that the traversal of social networks can be modeled by a Markov chain rather than a branching process. Methodology/Principal Findings Here, we develop a new methodology based on stochastic context-free grammars (SCFGs), which are well-suited to modeling the tree-like structure of the RDS recruitment process. We apply this methodology to an RDS case study of injection drug users (IDUs) in Tijuana, México, a hidden population at high risk of blood-borne and sexually-transmitted infections (i.e., HIV, hepatitis C virus, syphilis). Survey data were encoded as text strings that were parsed using our custom implementation of the inside-outside algorithm in a publicly-available software package (HyPhy), which uses either expectation maximization or direct optimization methods and permits constraints on model parameters for hypothesis testing. We identified significant latent variability in the recruitment process that violates assumptions of Markov chain-based methods for RDS analysis: firstly, IDUs tended to emulate the recruitment behavior of their own recruiter; and secondly, the recruitment of like peers (homophily) was dependent on the number of recruits. Conclusions SCFGs provide a rich probabilistic language that can articulate complex latent structure in survey data derived from the traversal of social networks. Such structure that has no representation in Markov chain-based models can interfere with the estimation of the composition of hidden populations if left unaccounted for, raising critical implications for the prevention and control of infectious disease epidemics. PMID:19738904

  18. Thermochemical hydrogen production based on magnetic fusion

    NASA Astrophysics Data System (ADS)

    Krikorian, O. H.; Brown, L. C.

    Preliminary results of a DoE study to define the configuration and production costs for a Tandem Mirror Reactor (TMR) heat source H2 fuel production plant are presented. The TMR uses the D-T reaction to produce thermal energy and dc electrical current, with an Li blanket employed to breed more H-3 for fuel. Various blanket designs are being considered, and the coupling of two of them (a heat pipe blanket to a Joule-boosted decomposer, and a two-temperature-zone blanket to a fluidized-bed decomposer) is discussed. The thermal energy would be used in an H2SO4 thermochemical cycle to produce the H2. The Joule-boosted decomposer, involving the use of electrically heated commercial SiC furnace elements to transfer process heat to the thermochemical H2 cycle, is found to yield H2 fuel at a cost of $12-14/GJ, which is the projected cost of fossil fuels in 30-40 yr, when the TMR H2 production facility would be operable.

  19. Thermal Hydraulic Design and Analysis of a Water-Cooled Ceramic Breeder Blanket with Superheated Steam for CFETR

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaoman; Ma, Xuebin; Jiang, Kecheng; Chen, Lei; Huang, Kai; Liu, Songlin

    2015-09-01

    The water-cooled ceramic breeder blanket (WCCB) is one of the blanket candidates for the China Fusion Engineering Test Reactor (CFETR). In order to improve power generation efficiency and the tritium breeding ratio, a WCCB with superheated steam is under development. The thermal-hydraulic design is the key to achieving safe heat removal and efficient power generation under normal and partial-load operating conditions. In this paper, the coolant flow scheme was designed and a self-developed analytical program was written, based on a theoretical heat transfer model and empirical correlations. Employing this program, the design and analysis of the related thermal-hydraulic parameters were performed under different fusion power conditions. The results indicated that the superheated-steam water-cooled blanket is feasible. supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy of China (Nos. 2013GB108004, 2014GB122000 and 2014GB119000), and National Natural Science Foundation of China (No. 11175207)

  20. 75 FR 46911 - Certain Woven Electric Blankets from the People's Republic of China: Amended Final Determination...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-04

    ... Blankets from the People's Republic of China: Amended Final Determination of Sales at Less Than Fair Value... than fair value (``LTFV'') in the antidumping investigation of certain woven electric blankets (``woven electric blankets'') from the People's Republic of China (``PRC''). See Certain Woven Electric Blankets...

  1. Analysing grouping of nucleotides in DNA sequences using lumped processes constructed from Markov chains.

    PubMed

    Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude

    2006-03-01

    The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge about the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods relying either on hypothesis testing or on penalized log-likelihood criteria are presented, as well as extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
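
    The classical notion of lumpability referred to above can be checked directly: a partition yields a Markov lumped chain if and only if, for every destination block, the row sums into that block are constant within each source block. A small sketch (the paper's probabilistic one-to-many lumping is a relaxation of this classical test):

```python
import numpy as np

def is_strongly_lumpable(P, partition):
    """Classical (strong) lumpability test: the lumped chain is Markov iff,
    for every destination block, the row sums P[i, block] are identical
    for all states i within each source block."""
    for dest in partition:
        block_mass = P[:, dest].sum(axis=1)
        for src in partition:
            if not np.allclose(block_mass[src], block_mass[src][0]):
                return False
    return True

# 4-state chain over {A, C, G, T}; lump purines {A, G} vs pyrimidines {C, T}.
P = np.array([[0.30, 0.20, 0.30, 0.20],
              [0.20, 0.30, 0.20, 0.30],
              [0.10, 0.25, 0.50, 0.15],
              [0.35, 0.30, 0.05, 0.30]])
print(is_strongly_lumpable(P, [[0, 2], [1, 3]]))   # True for this matrix
```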

  2. Network Security Risk Assessment System Based on Attack Graph and Markov Chain

    NASA Astrophysics Data System (ADS)

    Sun, Fuxiong; Pi, Juntao; Lv, Jin; Cao, Tian

    2017-10-01

    Network security risk assessment technology can discover network problems and related vulnerabilities in advance, and it has become an important means of addressing network security. Based on attack graphs and Markov chains, this paper proposes a Network Security Risk Assessment Model (NSRAM). Starting from network penetration tests, NSRAM generates the attack graph by a breadth-first traversal algorithm. Combined with the international CVSS standard, the attack probabilities of atomic nodes are counted, and the attack transition probabilities between them are then calculated by a Markov chain. NSRAM selects the optimal attack path after comprehensive measurement to assess the network security risk. The simulation results show that NSRAM reflects the actual network security situation objectively.
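
    One plausible reading of the transition-probability step is sketched below: CVSS-derived atomic probabilities on the edges of an attack graph are normalized into a Markov chain, and candidate attack paths are scored by their chain probability. The graph, probabilities and scoring are illustrative assumptions, not the paper's exact NSRAM procedure.

      import numpy as np

      # Hypothetical attack graph: node -> list of (successor, atomic attack
      # probability derived from a CVSS-like score). Values are illustrative.
      graph = {
          'entry': [('web', 0.8), ('mail', 0.4)],
          'web':   [('db', 0.6)],
          'mail':  [('db', 0.3)],
          'db':    [],
      }

      def transition_probs(graph):
          # Markov-chain step: normalize the atomic probabilities over each
          # node's outgoing edges so they form a probability distribution.
          T = {}
          for node, edges in graph.items():
              total = sum(p for _, p in edges)
              T[node] = {succ: p / total for succ, p in edges} if total else {}
          return T

      def path_probability(T, path):
          prob = 1.0
          for a, b in zip(path, path[1:]):
              prob *= T[a].get(b, 0.0)
          return prob

      T = transition_probs(graph)
      print(path_probability(T, ['entry', 'web', 'db']))  # score of one path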

  3. Conceptual design of fast-ignition laser fusion reactor FALCON-D

    NASA Astrophysics Data System (ADS)

    Goto, T.; Someya, Y.; Ogawa, Y.; Hiwatari, R.; Asaoka, Y.; Okano, K.; Sunahara, A.; Johzaki, T.

    2009-07-01

    A new conceptual design of the laser fusion power plant FALCON-D (Fast-ignition Advanced Laser fusion reactor CONcept with a Dry wall chamber) has been proposed. The fast-ignition method can achieve a fusion gain sufficient for commercial operation (~100) with a fusion yield about 10 times smaller than that of the conventional central-ignition method. FALCON-D makes full use of this property and aims at a design with a compact dry wall chamber (5-6 m radius). 1D/2D simulations by hydrodynamic codes showed the possibility of achieving sufficient gain with a laser energy of 400 kJ, i.e. a 40 MJ target yield. The design feasibility of the compact dry wall chamber and the solid breeder blanket system was shown through thermomechanical analysis of the dry wall and neutronics analysis of the blanket system. A moderate electric output (~400 MWe) can be achieved with a high-repetition (30 Hz) laser. This dry wall reactor concept not only reduces several difficulties associated with a liquid wall system but also enables a simple cask maintenance method for the replacement of the blanket system, which can shorten the maintenance period. The basic idea of the maintenance method for the final optics system has also been proposed. Some critical R&D issues required for this design are also discussed.

  4. Mori-Zwanzig theory for dissipative forces in coarse-grained dynamics in the Markov limit

    NASA Astrophysics Data System (ADS)

    Izvekov, Sergei

    2017-01-01

    We derive alternative Markov approximations for the projected (stochastic) force and memory function in the coarse-grained (CG) generalized Langevin equation, which describes the time evolution of the center-of-mass coordinates of clusters of particles in the microscopic ensemble. This is done with the aid of the Mori-Zwanzig projection operator method based on the recently introduced projection operator [S. Izvekov, J. Chem. Phys. 138, 134106 (2013), 10.1063/1.4795091]. The derivation exploits the "generalized additive fluctuating force" representation to which the projected force reduces in the adopted projection operator formalism. For the projected force, we present a first-order time expansion which correctly extends the static fluctuating force ansatz with the terms necessary to maintain the required orthogonality of the projected dynamics in the Markov limit to the space of CG phase variables. The approximant of the memory function correctly accounts for the momentum dependence in the lowest (second) order and indicates that such a dependence may be important in the CG dynamics approaching the Markov limit. In the case of CG dynamics with a weak dependence of the memory effects on the particle momenta, the expression for the memory function presented in this work is applicable to non-Markov systems. The approximations are formulated in a propagator-free form allowing their efficient evaluation from the microscopic data sampled by standard molecular dynamics simulations. A numerical application is presented for a molecular liquid (nitromethane). With our formalism we do not observe the "plateau-value problem" if the friction tensors for dissipative particle dynamics (DPD) are computed using the Green-Kubo relation. Our formalism provides a consistent bottom-up route for hierarchical parametrization of DPD models from atomistic simulations.

  5. Structure-based Markov random field model for representing evolutionary constraints on functional sites.

    PubMed

    Jeong, Chan-Seok; Kim, Dongsup

    2016-02-24

    Elucidating the cooperative mechanism of interconnected residues is an important component of understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), and these have led to significant improvements in coevolution analysis; thus far, however, the performance of these models has mainly been assessed with a focus on protein structure. In this study, we built an MRF model whose graphical topology is determined by residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weights of the MRF model. This structure-based MRF method was evaluated on three data sets, which annotate catalytic sites, allosteric sites, and comprehensively determined functional sites, respectively. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weights can represent positional coevolution information more accurately than the edge weights. Lastly, we demonstrate that the structure-based MRF model can be reliably built from only a few aligned sequences in linear time. The results show that adopting a structure-based architecture can be an acceptable approximation for coevolution modeling with efficient computational complexity.

  6. An investigation into the feasibility of thorium fuels utilization in seed-blanket configurations for TRIGA PUSPATI Reactor (RTP)

    NASA Astrophysics Data System (ADS)

    Damahuri, Abdul Hannan Bin; Mohamed, Hassan; Aziz Mohamed, Abdul; Idris, Faridah

    2018-01-01

    Thorium is one of the elements that need to be explored for nuclear fuel research and development. One popular core configuration for thorium fuel is the seed-blanket configuration, also known as the Radkowsky Thorium Fuel concept. The seed, placed in the interior of the core, acts as a supplier of neutrons. The blanket, on the other hand, is the consumer of neutrons and is located at the outermost part of the core. In this work, a neutronic analysis of a seed-blanket configuration for the TRIGA PUSPATI Reactor (RTP) is carried out using the Monte Carlo method. The reactor, which has been operated since 1982, uses uranium zirconium hydride (U-ZrH1.6) fuel at uranium weight fractions of 8.5, 12 and 20 wt.%. The pool-type reactor is the only research reactor located in Malaysia. In the analyzed core design, uranium zirconium hydride located at the centre of the core acts as the seed supplying neutrons, and the thorium oxide blanket situated outside the seed region receives neutrons that transmute 232Th into 233U. The neutron multiplication factor (criticality) of each configuration is estimated. Results show that the highest initial criticality achieved is 1.30153.

  7. Free-vibration characteristics of a large split-blanket solar array in a 1-g field

    NASA Technical Reports Server (NTRS)

    Shaker, F. J.

    1976-01-01

    Two methods for studying the free vibration characteristics of a large split blanket solar array in both a 0-g and a 1-g cantilevered configuration are presented. The 0-g configuration corresponds to an in-orbit configuration of the array; the 1-g configuration is a typical ground test configuration. The first method applies the equations of continuum mechanics to determine the mode shapes and frequencies of the array; the second method uses the Rayleigh-Ritz approach. In the Rayleigh-Ritz method the array displacements are represented by string modes and cantilevered beam modes. The results of this investigation are summarized by a series of graphs illustrating the effects of various array parameters on the mode shapes and frequencies of the system. The results of the two methods are also compared in tabular form.

  8. Effect of processing parameters and pore structure of nanostructured silica aerogel on the physical properties of aerogel blankets

    NASA Astrophysics Data System (ADS)

    Latifi, Fatemeh; Talebi, Zahra; Khalili, Haleh; Zarrebini, Mohammad

    2018-05-01

    This work investigates the influence of processing parameters and aerogel pore structure on the physical properties and hydrophobicity of aerogel blankets. Aerogel blankets were produced by in situ synthesis of nanostructured silica aerogel on a polyester nonwoven substrate. Nitrogen adsorption-desorption analysis, contact angle tests and FE-SEM images were used to characterize both the aerogel particles and the blankets. The results showed that the weight and thickness of the blanket were reduced when a low amount of catalyst was used. A decrease in the aerogel pore size from 22 to 11 nm increased the weight and thickness of the blankets. Xerogel particles with high density and a pore size of 5 nm reduced the blanket weight. The blanket weight and thickness also increased with increasing sol volume. It was found that the hydrophobicity of aerogel blankets is not influenced by the sol volume or the pore structure of the silica aerogel.

  9. Fusion reactor blanket/shield design study

    NASA Astrophysics Data System (ADS)

    Smith, D. L.; Clemmer, R. G.; Harkness, S. D.; Jung, J.; Krazinski, J. L.; Mattas, R. F.; Stevens, H. C.; Youngdahl, C. K.; Trachsel, C.; Bowers, D.

    1979-07-01

    A joint study of Tokamak reactor first wall/blanket/shield technology was conducted to identify key technological limitations for various tritium breeding blanket design concepts, to establish a basis for assessing and comparing the design features of each concept, and to develop optimized blanket designs. The approach involved a review of previously proposed blanket designs, analysis of the critical technological problems and design features associated with each blanket concept, and a detailed evaluation of the most tractable design concepts. Tritium breeding blanket concepts were evaluated according to the proposed coolant. The effort concentrated on the evaluation of lithium- and water-cooled blanket designs and of helium- and molten-salt-cooled designs. Generalized nuclear analysis of the tritium breeding performance, an analysis of tritium breeding requirements, and a first wall stress analysis were conducted as part of the study. The impact of coolant selection on the mechanical design of a Tokamak reactor was evaluated. Reference blanket designs utilizing the four candidate coolants are presented.

  10. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts.

    PubMed

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.
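
    As a concrete illustration of removing a fast state, the sketch below eliminates a state with a large leaving rate from a toy continuous-time generator: flow into the fast state is redistributed to its successors in proportion to its outgoing rates, which is the superposition over fast transition paths described in the abstract. The rate matrix is hypothetical.

      import numpy as np

      # Toy 3-state rate matrix Q (rows sum to 0); state 2 is "fast"
      # (large leaving rate). Values are hypothetical.
      Q = np.array([[-1.0,  0.5,    0.5],
                    [ 0.2, -0.7,    0.5],
                    [50.0, 50.0, -100.0]])

      def eliminate_state(Q, k):
          # Remove fast state k: a jump into k is immediately redistributed to
          # k's successors in proportion to k's outgoing rates.
          n = Q.shape[0]
          keep = [i for i in range(n) if i != k]
          out = -Q[k, k]
          branch = Q[k, keep] / out          # branching probabilities from k
          R = Q[np.ix_(keep, keep)] + np.outer(Q[keep, k], branch)
          np.fill_diagonal(R, 0.0)
          np.fill_diagonal(R, -R.sum(axis=1))  # restore generator property
          return R

      print(eliminate_state(Q, 2))  # effective 2-state generator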

  11. Simplification of Markov chains with infinite state space and the mathematical theory of random gene expression bursts

    NASA Astrophysics Data System (ADS)

    Jia, Chen

    2017-09-01

    Here we develop an effective approach to simplify two-time-scale Markov chains with infinite state spaces by removal of states with fast leaving rates, which improves the simplification method of finite Markov chains. We introduce the concept of fast transition paths and show that the effective transitions of the reduced chain can be represented as the superposition of the direct transitions and the indirect transitions via all the fast transition paths. Furthermore, we apply our simplification approach to the standard Markov model of single-cell stochastic gene expression and provide a mathematical theory of random gene expression bursts. We give the precise mathematical conditions for the bursting kinetics of both mRNAs and proteins. It turns out that random bursts exactly correspond to the fast transition paths of the Markov model. This helps us gain a better understanding of the physics behind the bursting kinetics as an emergent behavior from the fundamental multiscale biochemical reaction kinetics of stochastic gene expression.

  12. Finding exact constants in a Markov model of Zipf's law generation

    NASA Astrophysics Data System (ADS)

    Bochkarev, V. V.; Lerner, E. Yu.; Nikiforov, A. A.; Pismenskiy, A. A.

    2017-12-01

    According to the classical Zipf's law, word frequency is a power function of word rank with exponent -1. The objective of this work is to find the multiplicative constant in a Markov model of word generation. The case of independent letters was previously investigated with full mathematical rigour in [Bochkarev V V and Lerner E Yu 2017 International Journal of Mathematics and Mathematical Sciences Article ID 914374]. Unfortunately, the methods used in that paper cannot be generalized to the case of Markov chains. The search for the correct formulation of the Markov generalization of these results was performed using experiments with different ergodic transition probability matrices P. A combinatorial technique made it possible to take into account all words with probability greater than e^{-300} in the case of 2 by 2 matrices. It was experimentally shown that the required constant in the limit is equal to the reciprocal of the conditional entropy of the rows of P, with weights given by the elements of the vector π of the stationary distribution of the Markov chain.
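
    Written out, the abstract's limiting constant admits a compact statement. The reading below, in which the weighted conditional row entropy is the chain's entropy rate, is an interpretation of the abstract rather than a quotation of the paper:

      % Entropy rate of an ergodic Markov chain with transition matrix P
      % and stationary distribution \pi:
      H = -\sum_{i} \pi_i \sum_{j} P_{ij} \ln P_{ij},
      % so the Zipf-type frequency-rank relation takes the form
      f(r) \approx C \, r^{-1}, \qquad C = \frac{1}{H}.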

  13. PEP solar array definition study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The conceptual design of a large, flexible, lightweight solar array is presented focusing on a solar array overview assessment, solar array blanket definition, structural-mechanical systems definition, and launch/reentry blanket protection features. The overview assessment includes a requirements and constraints review, the thermal environment assessment on the design selection, an evaluation of blanket integration sequence, a conceptual blanket/harness design, and a hot spot analysis considering the effects of shadowing and cell failures on overall array reliability. The solar array blanket definition includes the substrate design, hinge designs and blanket/harness flexibility assessment. The structural/mechanical systems definition includes an overall loads and deflection assessment, a frequency analysis of the deployed assembly, a components weights estimate, design of the blanket housing and tensioning mechanism. The launch/reentry blanket protection task includes assessment of solar cell/cover glass cushioning concepts during ascent and reentry flight condition.

  14. Lightweight solar array blanket tooling, laser welding and cover process technology

    NASA Technical Reports Server (NTRS)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  15. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  16. Geometrically Constructed Markov Chain Monte Carlo Study of Quantum Spin-phonon Complex Systems

    NASA Astrophysics Data System (ADS)

    Suwa, Hidemaro

    2013-03-01

    We have developed novel Monte Carlo methods for precise calculations of quantum spin-boson models and investigated the critical phenomena of spin-Peierls systems. Three significant methods are presented. The first is a new optimization algorithm for the Markov chain transition kernel based on geometric weight allocation. This algorithm, for the first time, generally satisfies total balance without imposing detailed balance and always minimizes the average rejection rate, outperforming the Metropolis algorithm. The second is the extension of the worm (directed-loop) algorithm to non-conserved particles, which cannot be treated efficiently by conventional methods. The third is the combination with level spectroscopy. Proposing a new gap estimator, we succeed in eliminating the systematic error of the conventional moment method. We then elucidated the phase diagram and the universality class of the one-dimensional XXZ spin-Peierls system. The criticality is fully consistent with the J1-J2 model, an effective model in the antiadiabatic limit. Through this research, we have succeeded in investigating the critical phenomena of an effectively frustrated quantum spin system by the quantum Monte Carlo method without the negative sign problem. (JSPS Postdoctoral Fellow for Research Abroad)

  17. Using Markov state models to study self-assembly

    NASA Astrophysics Data System (ADS)

    Perkett, Matthew R.; Hagan, Michael F.

    2014-06-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude.
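
    The core MSM construction step, counting transitions between discrete states at a fixed lag time and row-normalizing, can be sketched in a few lines; the state assignments here are hypothetical placeholders for the graph- and Gaussian-based signatures the paper uses.

      import numpy as np

      def estimate_msm(dtrajs, n_states, lag):
          # Count transitions at a fixed lag time in discretized trajectories,
          # then row-normalize to obtain the MSM transition matrix.
          C = np.zeros((n_states, n_states))
          for traj in dtrajs:
              for t in range(len(traj) - lag):
                  C[traj[t], traj[t + lag]] += 1.0
          return C / C.sum(axis=1, keepdims=True)

      # Hypothetical discrete assembly trajectories (states = structure classes).
      dtrajs = [[0, 0, 1, 1, 2, 2, 2], [0, 1, 1, 2, 2, 2, 2]]
      print(estimate_msm(dtrajs, n_states=3, lag=1))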

  18. Short-term droughts forecast using Markov chain model in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2017-07-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method to forecast drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. A model such as this is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six homogeneous clusters with similar drought characteristics based on SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and to predict droughts up to 3 months ahead. The drought severity classes defined using the SPI were computed at a 12-month time scale. The drought probabilities and predictions were computed for six clusters that exhibit similar drought characteristics in Victoria, Australia. Overall, the predicted drought severity class was quite similar for all clusters, with non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near-normal class had a probability of occurrence varying from 27 to 38 %. For the moderate and severe classes, the probabilities ranged from 2 to 13 % and from 3 to 1 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2- and 3-month-ahead predictions should be used with caution until the models are developed further.
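
    The non-homogeneous element, a separate transition matrix for each calendar month, can be estimated from a classified SPI series as sketched below; the class labels and data are hypothetical.

      import numpy as np

      def monthly_transition_matrices(classes, months, n_classes=5):
          # Non-homogeneous Markov chain: one transition matrix per calendar
          # month, estimated from a sequence of SPI drought classes.
          C = np.zeros((12, n_classes, n_classes))
          for t in range(len(classes) - 1):
              C[months[t], classes[t], classes[t + 1]] += 1.0
          with np.errstate(invalid='ignore'):
              P = C / C.sum(axis=2, keepdims=True)
          return np.nan_to_num(P)   # months/classes never seen stay zero

      # Hypothetical monthly SPI classes (0 = severe drought ... 4 = very wet).
      classes = [2, 2, 3, 1, 1, 2, 4, 3, 2, 2, 1, 0, 1, 2]
      months  = [m % 12 for m in range(len(classes))]
      P = monthly_transition_matrices(classes, months)
      print(P[0])   # January transition matrix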

  19. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well-understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
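
    A minimal version of the evidence calculation for order selection is sketched below, assuming a symmetric Dirichlet prior on each conditional distribution (the Dirichlet-multinomial marginal likelihood); the prior choice and the data are illustrative, not the paper's exact treatment.

      import numpy as np
      from scipy.special import gammaln

      def log_evidence(seq, order, alphabet_size, alpha=1.0):
          # Bayesian evidence of a k-th order Markov chain with a symmetric
          # Dirichlet(alpha) prior on each conditional distribution.
          counts = {}
          for t in range(order, len(seq)):
              ctx = tuple(seq[t - order:t])
              counts.setdefault(ctx, np.zeros(alphabet_size))[seq[t]] += 1
          logev = 0.0
          for c in counts.values():
              logev += gammaln(alphabet_size * alpha) \
                       - gammaln(c.sum() + alphabet_size * alpha)
              logev += np.sum(gammaln(c + alpha) - gammaln(alpha))
          return logev

      rng = np.random.default_rng(0)
      seq = list(rng.integers(0, 2, size=500))   # hypothetical binary data
      # Compare model orders 0, 1, 2 by their log-evidence.
      print([round(log_evidence(seq, k, 2), 1) for k in (0, 1, 2)])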

  20. Space-Spurred Metallized Materials

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Among a score of applications for a space spinoff reflective material called TXG is the emergency blanket manufactured by Metallized Products, Inc. Used by ski patrols to protect a skier shaken by a fall, the blanket retains up to 80% of the user's body heat, preventing post-accident shock or chills. Carried by many types of emergency teams, the blanket is large when unfolded but folds into a package no larger than a deck of cards. Other uses include emergency blankets, all-weather blankets, tanning blankets, window shields, radar-reflecting life raft canopies, and more.

  1. Discovering collectively informative descriptors from high-throughput experiments

    PubMed Central

    2009-01-01

    Background Improvements in high-throughput technology and its increasing use have led to the generation of many highly complex datasets that often address similar biological questions. Combining information from these studies can increase the reliability and generalizability of results and also yield new insights that guide future research. Results This paper describes a novel algorithm called BLANKET for symmetric analysis of two experiments that assess the informativeness of descriptors. The experiments are required to be related only in that their descriptor sets intersect substantially and their definitions of case and control are consistent. From the resulting lists of n descriptors ranked by informativeness, BLANKET determines shortlists of descriptors from each experiment, generally of different lengths p and q. For any pair of shortlists, four numbers are evident: the number of descriptors appearing in both shortlists, in only the first, in only the second, or in neither. From the associated contingency table, BLANKET computes Right Fisher Exact Test (RFET) values used as scores over a plane of possible pairs of shortlist lengths [1,2]. BLANKET then chooses a pair or pairs with RFET score below a threshold; the threshold depends upon n and the shortlist length limits and represents a quality of intersection achieved by fewer than 5% of random lists. Conclusions Researchers seek within a universe of descriptors some minimal subset that collectively and efficiently predicts experimental outcomes. Ideally, any smaller subset should be insufficient for reliable prediction and any larger subset should add little accuracy. As a method, BLANKET is easy to conceptualize and presents only moderate computational complexity. Many existing databases could be mined using BLANKET to suggest optimal sets of predictive descriptors. PMID:20021653
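
    The contingency-table score is straightforward to reproduce for one pair of shortlist lengths. The sketch below assumes the "Right" test is the one-sided (greater) Fisher exact test and uses made-up ranked lists.

      import numpy as np
      from scipy.stats import fisher_exact

      def rfet_score(list1, list2, p, q, n):
          # 2x2 contingency table for shortlists of lengths p and q drawn from
          # n ranked descriptors, scored with a right-tailed Fisher exact test.
          s1, s2 = set(list1[:p]), set(list2[:q])
          both = len(s1 & s2)
          table = [[both, p - both],
                   [q - both, n - p - q + both]]
          return fisher_exact(table, alternative='greater')[1]   # p-value

      # Hypothetical ranked descriptor lists from two related experiments.
      n = 100
      rank1 = list(range(n))
      rank2 = [1, 0, 3, 2, 5, 4, 7, 6] + list(range(8, n))
      print(rfet_score(rank1, rank2, p=8, q=8, n=n))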

  2. Detecting complexes from edge-weighted PPI networks via genes expression analysis.

    PubMed

    Zhang, Zehua; Song, Jian; Tang, Jijun; Xu, Xinying; Guo, Fei

    2018-04-24

    Identifying complexes from PPI networks has become a key problem in elucidating protein functions and in identifying signalling and biological processes in a cell. Proteins bound into complexes play important roles in cellular activity. Accurate determination of complexes in PPI networks is crucial for understanding the principles of cellular organization. We propose a novel method to identify complexes in PPI networks based on different co-expression information. First, we use the Markov Cluster Algorithm with an edge-weighting scheme to compute complexes on PPI networks. Then, we propose some significant features, such as graph information and gene expression analysis, to filter and modify the complexes predicted by the Markov Cluster Algorithm. To evaluate our method, we test it on two experimental yeast PPI networks. On the DIP network, our method achieves Precision and F-Measure values of 0.6004 and 0.5528. On the MIPS network, our method achieves F-Measure and Sn values of 0.3774 and 0.3453. Compared to existing methods, our method improves the Precision value by at least 0.1752, the F-Measure value by at least 0.0448, and the Sn value by at least 0.0771. Experiments show that our method achieves better results than some state-of-the-art methods for identifying complexes on PPI networks, with the prediction quality improved in terms of the evaluation criteria.
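
    For reference, the Markov Cluster Algorithm that this pipeline starts from alternates expansion and inflation of a column-stochastic flow matrix until convergence. The sketch below is an unweighted toy version, not the paper's edge-weighted variant.

      import numpy as np

      def mcl(adj, expansion=2, inflation=2.0, iters=50):
          # Markov Cluster Algorithm: expansion (matrix power) spreads flow,
          # inflation (elementwise power + renormalization) sharpens it.
          M = adj / adj.sum(axis=0, keepdims=True)
          for _ in range(iters):
              M = np.linalg.matrix_power(M, expansion)
              M = M ** inflation
              M = M / M.sum(axis=0, keepdims=True)
          # Rows that retain mass act as cluster "attractors"; their nonzero
          # columns are the cluster members.
          return [np.nonzero(row > 1e-6)[0].tolist()
                  for row in M if row.max() > 1e-6]

      # Hypothetical unweighted PPI adjacency with self-loops
      # (two cliques joined by a single edge).
      A = np.array([[1, 1, 1, 0, 0, 0],
                    [1, 1, 1, 0, 0, 0],
                    [1, 1, 1, 1, 0, 0],
                    [0, 0, 1, 1, 1, 1],
                    [0, 0, 0, 1, 1, 1],
                    [0, 0, 0, 1, 1, 1]], dtype=float)
      print(mcl(A))   # expected: the two cliques as clusters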

  3. A Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods

    NASA Astrophysics Data System (ADS)

    Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.

    2017-09-01

    Drought is one of the most powerful natural disasters and affects many aspects of the environment. It is most severe in arid and semi-arid areas. Monitoring and predicting drought severity can be useful in managing the natural disasters caused by drought. Many indices have been used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX covering winter 2000 to summer 2015 were created for the eastern region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into qualitative data according to drought state (5 classes) using the SAX algorithm; the probability matrix for the future state was then created using a hidden Markov chain. Fall drought severity was predicted by fusing the probability matrix with the drought severity state of summer 2015. The prediction assigns a likelihood to each drought state: severe drought, moderate drought, normal, severe wet and moderate wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the algorithm is appropriate and efficient for predicting drought from remote sensing data.
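
    The discretization step can be sketched as follows. Classical SAX uses Gaussian breakpoints on a z-normalized series; this toy version uses empirical quantiles as a simple stand-in, and the series itself is synthetic. The resulting symbol sequence can then feed a Markov transition estimate like the monthly one sketched earlier.

      import numpy as np

      def sax_symbols(x, n_symbols=5):
          # SAX-style discretization: z-normalize, then cut the range into
          # (approximately) equiprobable bins via empirical quantiles.
          z = (x - x.mean()) / x.std()
          edges = np.quantile(z, np.linspace(0, 1, n_symbols + 1)[1:-1])
          return np.digitize(z, edges)

      rng = np.random.default_rng(1)
      spi = rng.normal(size=60)          # hypothetical monthly SPI series
      symbols = sax_symbols(spi)         # 0..4 drought-severity states
      print(symbols[:12])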

  4. Evaluation of Usability Utilizing Markov Models

    ERIC Educational Resources Information Center

    Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane

    2012-01-01

    Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…

  5. UMAP Modules-Units 105, 107-109, 111-112, 158-162.

    ERIC Educational Resources Information Center

    Keller, Mary K.; And Others

    This collection of materials includes six units dealing with applications of matrix methods. These are: 105-Food Service Management; 107-Markov Chains; 108-Electrical Circuits; 109-Food Service and Dietary Requirements; 111-Fixed Point and Absorbing Markov Chains; and 112-Analysis of Linear Circuits. The units contain exercises and model exams,…

  6. A photovoltaic catenary-tent array for the Martian surface

    NASA Technical Reports Server (NTRS)

    Crutchik, M.; Colozza, Anthony J.; Appelbaum, J.

    1993-01-01

    To provide electrical power during an exploration mission to Mars, a deployable tent-shaped structure with a flexible photovoltaic (PV) blanket is proposed. The array is designed with a self-deploying mechanism utilizing pressurized gas expansion. The structural design for the array uses a combination of cables, beams, and columns to support and deploy the PV blanket. Under the force of gravity, a cable carrying a uniform load will take the shape of a catenary curve. A catenary-tent collector is self-shadowing, which must be taken into account in the solar radiation calculation. The shape and area of the shadow on the array were calculated and used in the determination of the global radiation on the array. The PV blanket shape and structure dimensions were optimized to achieve a configuration which maximizes the specific power (W/kg). The optimization was performed for four types of PV blankets (Si, GaAs/Ge, GaAs CLEFT, and amorphous Si) and four types of structure materials (carbon composite, aramid fiber composite, aluminum, and magnesium). The results show that the catenary shape of the PV blanket which produces the highest specific power corresponds to a zero end angle at the base with respect to the horizontal. The tent angle is determined by the combined effect of the array structure specific mass and the PV blanket output power. The combination of carbon composite structural material and GaAs CLEFT solar cells produces the highest specific power. The study was carried out for two sites on Mars corresponding to the Viking Lander locations. The designs were also compared for summer, winter, and yearly operation.
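
    For reference, the catenary invoked here is the equilibrium curve of a cable loaded uniformly along its own length; in standard form (not taken from the paper):

      % Catenary with horizontal tension H and weight w per unit length:
      y(x) = a \cosh\!\left(\frac{x}{a}\right), \qquad a = \frac{H}{w}.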

  7. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms from computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transformation (SIFT) algorithms; a spatial regression voting method to integrate the MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to the resampling-based approach while avoiding the LUCC distortion incurred by resampling.

  8. Sequence similarity is more relevant than species specificity in probabilistic backtranslation.

    PubMed

    Ferro, Alfredo; Giugno, Rosalba; Pigola, Giuseppe; Pulvirenti, Alfredo; Di Pietro, Cinzia; Purrello, Michele; Ragusa, Marco

    2007-02-21

    Backtranslation is the process of decoding a sequence of amino acids into the corresponding codons. All synthetic gene design systems include a backtranslation module. The degeneracy of the genetic code makes backtranslation potentially ambiguous, since most amino acids are encoded by multiple codons. The common approach to overcoming this difficulty is based on imitation of codon usage within the target species. This paper describes EasyBack, a new parameter-free, fully automated software package for backtranslation using Hidden Markov Models. EasyBack is not based on imitation of codon usage within the target species; instead it uses a sequence-similarity criterion. The model is trained with a set of proteins with known cDNA coding sequences, constructed from the input protein by querying the NCBI databases with BLAST. Unlike existing software, the proposed method allows the quality of prediction to be estimated. When tested on a group of proteins that show different degrees of sequence conservation, EasyBack outperforms other published methods in terms of precision. The prediction quality of a protein backtranslation method is markedly increased by replacing the most-used-codon criterion of the target species with a Hidden Markov Model trained on a set of the most similar sequences from all species. Moreover, the proposed method allows the quality of prediction to be estimated probabilistically.

  9. Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems

    NASA Astrophysics Data System (ADS)

    Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming

    2018-06-01

    Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates aggregates on every iterative cycle to keep the coarse-level corrections highly accurate. Accordingly, its fast convergence rate is well guaranteed, but a large proportion of the time is often consumed by the aggregation processes. In this paper, we show that the aggregates on each level in this method can be utilized to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented, and it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective for aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
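
    For context, the baseline such methods aim to accelerate is plain power iteration on the damped transition matrix. A self-contained toy version follows, with an illustrative three-page link graph; it is a reference point, not the paper's multigrid scheme.

      import numpy as np

      def pagerank(adj, damping=0.85, tol=1e-12):
          # Power iteration for the PageRank vector.
          n = adj.shape[0]
          out = adj.sum(axis=1, keepdims=True)
          denom = np.where(out > 0, out, 1.0)
          P = np.where(out > 0, adj / denom, 1.0 / n)  # dangling rows -> uniform
          x = np.full(n, 1.0 / n)
          while True:
              x_new = damping * (x @ P) + (1.0 - damping) / n
              if np.abs(x_new - x).sum() < tol:
                  return x_new
              x = x_new

      A = np.array([[0, 1, 1],
                    [0, 0, 1],
                    [1, 0, 0]], dtype=float)   # toy three-page link graph
      print(pagerank(A))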

  10. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    PubMed

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

    We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well-known fact that Markov models generate sequences whose correlation function decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order) statistics, treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences obtained by packing CDSs with out-of-phase spacers, as well as by altering the CDS length distribution through an imposed upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and that the decay of correlation is due to the possible out-of-phase arrangement of neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure which is not suitable for modeling by a Markov chain in a homogeneous sequence. Other results include the use of the (absolute value of the) second-largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies.
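
    The eigenvalue summary used here rests on a standard fact: for a first-order chain, correlation functions decay as |λ₂|^k, where λ₂ is the second-largest eigenvalue of the transition matrix in absolute value. A toy computation with a hypothetical dimer matrix:

      import numpy as np

      # Hypothetical dimer (first-order) transition matrix over A, C, G, T.
      P = np.array([[0.30, 0.20, 0.25, 0.25],
                    [0.25, 0.30, 0.20, 0.25],
                    [0.20, 0.25, 0.30, 0.25],
                    [0.25, 0.25, 0.25, 0.25]])

      # Correlations decay as C(k) ~ |lambda_2|**k, so |lambda_2| sets the
      # decay rate and -1/ln|lambda_2| the correlation length (in bases).
      eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
      lam2 = eig[1]
      print(lam2, -1.0 / np.log(lam2))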

  11. Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies.

    NASA Astrophysics Data System (ADS)

    Avetisyan, H.; Bruna, O.; Holub, J.

    2016-11-01

    Numerous techniques and algorithms are dedicated to extracting emotions from input data. Our investigation found that emotion-detection approaches can be classified into the following three types: keyword/lexicon-based, learning-based, and hybrid. The most commonly used techniques, such as the keyword-spotting method, Support Vector Machines, the Naïve Bayes Classifier, the Hidden Markov Model and hybrid algorithms, have achieved impressive results in this area and can reach classification accuracies above 90%.

  12. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    NASA Astrophysics Data System (ADS)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
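
    The CLS idea that PBS refines can be illustrated in one dimension: material chords are sampled on the fly with exponential lengths while the particle streams through a binary Markov slab. The sketch below is a deliberately simplified, purely absorbing version with illustrative parameters; it is not the benchmark configuration of the paper.

      import numpy as np

      rng = np.random.default_rng(4)

      def cls_transmission(sigma_t, lam, slab=10.0, n_hist=20000):
          # Chord Length Sampling in a 1D binary Markov slab: chords have
          # exponential lengths lam[m]; flights to collision are exponential
          # with mean 1/sigma_t[m] (memoryless, so resampling the flight on
          # each chord segment is legitimate). Scattering is omitted.
          leaked = 0
          for _ in range(n_hist):
              x = 0.0
              m = int(rng.integers(2))  # entry material (a lam-weighted choice
                                        # would be more faithful)
              chord = rng.exponential(lam[m])
              while True:
                  flight = rng.exponential(1.0 / sigma_t[m])
                  x += min(flight, chord)
                  if x >= slab:         # leaked through the slab
                      leaked += 1
                      break
                  if flight < chord:    # collision: absorbed
                      break
                  m = 1 - m             # crossed an interface: switch material
                  chord = rng.exponential(lam[m])
          return leaked / n_hist

      print(cls_transmission(sigma_t=[1.0, 0.1], lam=[0.5, 2.0]))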

  13. Driving down defect density in composite EUV patterning film stacks

    NASA Astrophysics Data System (ADS)

    Meli, Luciana; Petrillo, Karen; De Silva, Anuja; Arnold, John; Felix, Nelson; Johnson, Richard; Murray, Cody; Hubbard, Alex; Durrant, Danielle; Hontake, Koichi; Huli, Lior; Lemley, Corey; Hetzer, Dave; Kawakami, Shinichiro; Matsunaga, Koichi

    2017-03-01

    Extreme ultraviolet lithography (EUVL) technology is one of the leading candidates for enabling next-generation devices at the 7nm node and beyond. As the technology matures, further improvement is required in the areas of blanket film defectivity, pattern defectivity, CD uniformity, and LWR/LER. As EUV pitch scaling approaches sub-20 nm, new techniques and methods must be developed to reduce the overall defectivity, mitigate pattern collapse and eliminate film-related defects. IBM Corporation and Tokyo Electron Limited (TEL™) are collaborating continuously to develop manufacturing-quality processes for EUVL. In this paper, we review key defectivity learning required to enable 7nm node and beyond technology. We describe ongoing progress in addressing these challenges through track-based processes (coating, developing, baking), highlighting the limitations of common defect detection strategies and outlining methodologies necessary for accurate characterization and mitigation of blanket defectivity in EUV patterning stacks. We further discuss defects related to pattern collapse and thinning of underlayer films.

  14. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    PubMed

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping

    2014-01-01

    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of the observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentation was performed on the fused images with both the proposed method and manual delineation by an experienced radiation oncologist. Segmentation results obtained with the two methods were similar, with a Dice similarity coefficient (DSC) of 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors located near other organs with similar intensities in PET and CT images, such as when tumors extend into the chest wall or mediastinum.
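
    The flavor of an MRF segmentation that fuses two co-registered channels can be conveyed with a crisp (non-fuzzy) stand-in: iterated conditional modes with a joint Gaussian data term over the (PET, CT) pair and a Potts smoothness prior. Everything below, from the energy to the synthetic images, is an illustrative simplification of the paper's fuzzy model.

      import numpy as np

      rng = np.random.default_rng(3)

      def icm_segment(pet, ct, beta=1.5, n_iter=10):
          # Iterated conditional modes on a 2-label MRF: joint squared-error
          # data term over (PET, CT), Potts penalty over 4-neighbours.
          # Assumes both classes stay non-empty during the iterations.
          labels = (pet > pet.mean()).astype(int)
          for _ in range(n_iter):
              means = [(pet[labels == k].mean(), ct[labels == k].mean())
                       for k in (0, 1)]
              for i in range(1, pet.shape[0] - 1):
                  for j in range(1, pet.shape[1] - 1):
                      costs = []
                      for k in (0, 1):
                          data = (pet[i, j] - means[k][0]) ** 2 \
                                 + (ct[i, j] - means[k][1]) ** 2
                          nbrs = [labels[i-1, j], labels[i+1, j],
                                  labels[i, j-1], labels[i, j+1]]
                          costs.append(data + beta * sum(n != k for n in nbrs))
                      labels[i, j] = int(np.argmin(costs))
          return labels

      # Synthetic co-registered PET/CT with a bright square "tumor".
      pet = rng.normal(1.0, 0.3, (32, 32)); ct = rng.normal(0.0, 0.3, (32, 32))
      pet[10:20, 10:20] += 2.0; ct[10:20, 10:20] += 1.0
      print(icm_segment(pet, ct)[8:22, 8:22].sum())  # ~100 tumor pixels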

  15. Gauge Measures Thicknesses Of Blankets

    NASA Technical Reports Server (NTRS)

    Hagen, George R.; Yoshino, Stanley Y.

    1991-01-01

    Tool makes highly repeatable measurements of thickness of penetrable blanket insulation. Includes commercial holder for replaceable knife blades, which holds needle instead of knife. Needle penetrates blanket to establish reference plane. Ballasted slider applies fixed preload to blanket. Technician reads thickness value on scale.

  16. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It realizes Bayes' rule by approximating the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can retrieve all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test is first carried out to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a given bound. Synthetic TB simulated from the measured snow properties plus a 2-K observation error is used for this purpose, aiming to provide guidance on MCMC application under different circumstances. The method is then applied to snowpits at different sites, including Sodankyla (Finland), Churchill (Canada) and Colorado (USA), using TB measured by ground-based radiometers at different bands. Building on the earlier work, the error in these practical cases is studied, and the error sources are separated and quantified.
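
    A stripped-down version of the described random walk is sketched below; the linear "emission model", the prior bounds, and the step size are placeholders for the real radiative transfer model and snow priors.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_tb(swe):
          # Stand-in for a snow emission model; the real forward model maps
          # the snow/soil state to brightness temperatures at several bands.
          return np.array([250.0 - 0.3 * swe, 240.0 - 0.5 * swe])

      def metropolis_swe(tb_obs, sigma=2.0, prior=(0.0, 300.0), n=20000):
          # Random-walk Metropolis: accept or reject moves by comparing the
          # posterior probability of the simulated TB before and after each step.
          def logpost(s):
              if not (prior[0] < s < prior[1]):   # flat prior on SWE (mm)
                  return -np.inf
              return -0.5 * np.sum((simulate_tb(s) - tb_obs) ** 2) / sigma ** 2
          swe, chain = 100.0, []
          lp = logpost(swe)
          for _ in range(n):
              cand = swe + rng.normal(scale=5.0)
              lp_cand = logpost(cand)
              if np.log(rng.uniform()) < lp_cand - lp:
                  swe, lp = cand, lp_cand
              chain.append(swe)
          return np.array(chain)

      tb_obs = simulate_tb(120.0) + rng.normal(scale=2.0, size=2)  # synthetic obs
      chain = metropolis_swe(tb_obs)
      print(chain[5000:].mean())   # posterior-mean SWE estimate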

  17. Toughened Thermal Blanket for MMOD Protection

    NASA Technical Reports Server (NTRS)

    Christiansen, Eric L.; Lear, Dana M.

    2014-01-01

    Thermal blankets are used extensively on spacecraft to provide passive thermal control of spacecraft hardware against the thermal extremes encountered in space. Toughened thermal blankets have been developed that greatly improve protection from hypervelocity micrometeoroid and orbital debris (MMOD) impacts. These blankets can be outfitted, if so desired, with a reliable means of determining the location, depth and extent of MMOD impact damage by incorporating an impact-sensitive piezoelectric film. Improved MMOD protection of thermal blankets was obtained by adding selected materials at various locations within the thermal blanket. As given in Figure 1, three types of materials were added to the thermal blanket to enhance its MMOD performance: (1) disrupter layers, near the outside of the blanket, to improve breakup of the projectile; (2) standoff layers, in the middle of the blanket, to provide an area or gap in which the broken-up projectile can expand; and (3) stopper layers, near the back of the blanket, where the projectile debris is captured and stopped. The best-suited materials for these different layers vary. Density and thickness are important for the disrupter layer (higher densities generally result in better projectile breakup), whereas a high strength-to-weight ratio is useful for the stopper layer, to improve the slowing and capture of debris particles.

  18. The micrometeoroid complex and evolution of the lunar regolith

    NASA Technical Reports Server (NTRS)

    Horz, F.; Morrison, D. A.; Gault, D. E.; Oberbeck, V. R.; Quaide, W. L.; Vedder, J. F.; Brownlee, D. E.; Hartung, J. B.

    1977-01-01

    Monte Carlo-based computer calculations, as well as analytical approaches utilizing probabilistic arguments, were applied to gain insight into the principal regolith impact processes and their resulting kinetics. Craters 10 to 1500 m in diameter are largely responsible for the overall growth of the regolith. As a consequence, the regolith has to be envisioned as a complex sequence of discrete ejecta blankets. Such blankets constitute first-order discontinuities in the evolving debris layer. The micrometeoroid complex then operates intensely on these fresh ejecta blankets, but its reworking is accomplished only in an uppermost layer of approximately 1-mm thickness. The absolute flux of micrometeoroids based on lunar rock analyses, averaged over the past few 10^6 years, is approximately an order of magnitude lower than present-day satellite fluxes; however, there is indication that the flux increased in the past 10^4 years to become compatible with the satellite data. Furthermore, there is detailed evidence that the micrometeoroid complex existed throughout geologic time.

  19. Non-LTE model calculations for SN 1987A and the extragalactic distance scale

    NASA Technical Reports Server (NTRS)

    Schmutz, W.; Abbott, D. C.; Russell, R. S.; Hamann, W.-R.; Wessolowski, U.

    1990-01-01

    This paper presents model atmospheres for the first week of SN 1987A, based on the luminosity and density/velocity structure from hydrodynamic models of Woosley (1988). The models account for line blanketing, expansion, sphericity, and departures from LTE in hydrogen and helium and differ from previously published efforts because they represent ab initio calculations, i.e., they contain essentially no free parameters. The formation of the UV spectrum is dominated by the effects of line blanketing. In the absorption troughs, the Balmer line profiles were fit well by these models, but the observed emissions are significantly stronger than predicted, perhaps due to clumping. The generally good agreement between the present synthetic spectra and observations provides independent support for the overall accuracy of the hydrodynamic models of Woosley. The question of the accuracy of the Baade-Wesselink method is addressed in a detailed discussion of its approximations. While the application of the standard method produces a distance within an uncertainty of 20 percent in the case of SN 1987A, systematic errors up to a factor of 2 are possible, particularly if the precursor was a red supergiant.

  20. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains.

    PubMed

    Bulashevska, Alla; Eils, Roland

    2006-06-14

    The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location of a given protein when only its amino acid sequence is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, data for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
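
    The base classifier, a class-conditional first-order Markov chain over the sequence alphabet with prediction by maximum log-likelihood, can be sketched as follows. The toy data and smoothing are assumptions, and the hierarchical ensemble is omitted.

      import numpy as np

      def train_markov(seqs, n_symbols, alpha=1.0):
          # First-order Markov model of one class, with add-alpha smoothing.
          C = np.full((n_symbols, n_symbols), alpha)
          start = np.full(n_symbols, alpha)
          for s in seqs:
              start[s[0]] += 1
              for a, b in zip(s, s[1:]):
                  C[a, b] += 1
          return (np.log(start / start.sum()),
                  np.log(C / C.sum(axis=1, keepdims=True)))

      def log_likelihood(seq, model):
          log_start, log_trans = model
          return log_start[seq[0]] + sum(log_trans[a, b]
                                         for a, b in zip(seq, seq[1:]))

      # Hypothetical two-class toy data (symbols stand for encoded residues).
      classes = {'cytoplasm': [[0, 1, 1, 2], [0, 1, 2, 2]],
                 'membrane':  [[2, 2, 0, 0], [2, 0, 0, 1]]}
      models = {c: train_markov(seqs, 3) for c, seqs in classes.items()}
      query = [0, 1, 2, 2]
      print(max(models, key=lambda c: log_likelihood(query, models[c])))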

  1. A novel approach to spacecraft re-entry and recovery

    NASA Astrophysics Data System (ADS)

    Patten, Richard; Hedgecock, Judson C.

    1990-01-01

    A deployable radiative heat shield design for spacecraft reentry is discussed. The design would allow the spacecraft to be cylindrical instead of the traditional conical shape, providing a greater internal volume and thus enhancing mission capabilities. The heat shield uses a flexible thermal blanket material which is deployed in a manner similar to an umbrella. Based on the radiative properties of this blanket material, heating constraints have been established which allow a descent trajectory to be designed. The heat shield and capsule configuration are analyzed for resistance to heat flux and for aerodynamic stability based on the reentry trajectory. Experimental tests are proposed.

  2. Counting of oligomers in sequences generated by Markov chains for DNA motif discovery.

    PubMed

    Shan, Gao; Zheng, Wei-Mou

    2009-02-01

    By means of the imbedded Markov chain technique, an efficient algorithm is proposed to exactly calculate the first and second moments of word counts, and the probability for a word to occur at least once, in random texts generated by a Markov chain. A generating function is introduced directly from the imbedded Markov chain to derive asymptotic approximations for the problem. Two Z-scores, one based on the number of sequences with hits and the other on the total number of word hits in a set of sequences, are examined for motif discovery on a set of promoter sequences extracted from the A. thaliana genome. Source code is available at http://www.itp.ac.cn/zheng/oligo.c.
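
    The quantity behind such Z-scores is the occurrence count of a word under a Markov text model. The sketch below computes only the stationary expectation and a crude Poisson-style Z-score; the paper derives exact first and second moments via the imbedded chain, which this toy version does not reproduce.

      import numpy as np

      def expected_count(word, pi, P, length):
          # Expected number of (overlapping) occurrences of `word` in a text
          # of given length generated by a stationary first-order Markov chain.
          p = pi[word[0]]
          for a, b in zip(word, word[1:]):
              p *= P[a, b]
          return (length - len(word) + 1) * p

      # Hypothetical 2-letter chain (states 0/1) and a target word.
      P = np.array([[0.6, 0.4], [0.3, 0.7]])
      pi = np.array([3/7, 4/7])            # stationary distribution of P
      word = [0, 1, 1]
      mu = expected_count(word, pi, P, length=1000)
      # Crude Z-score treating occurrences as roughly Poisson (variance ~ mean).
      obs = 110
      print((obs - mu) / np.sqrt(mu))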

  3. Lunar regolith stratigraphy analysis based on the simulation of lunar penetrating radar signals

    NASA Astrophysics Data System (ADS)

    Lai, Jialong; Xu, Yi; Zhang, Xiaoping; Tang, Zesheng

    2017-11-01

    The thickness of the lunar regolith is an important index for evaluating the quantity of lunar resources such as 3He and for estimating relative geologic ages. The lunar penetrating radar (LPR) experiment of the Chang'E-3 mission provided an opportunity for in situ measurement of the lunar subsurface structure in the northern Mare Imbrium area. However, prior work analyzing the LPR data reached quite different conclusions about the regolith structure, mainly because clear interface reflectors are missing from the radar image. In this paper, we used the finite-difference time-domain (FDTD) method and three models of regolith structure with different rock densities, numbers of layers, and interface shapes to simulate LPR signals for the interpretation of the radar image. The simulation results demonstrate that scattering from the numerous rocks buried in the regolith can mask the horizontal reflectors, that the die-out of the radar echo does not indicate the bottom of the regolith layer, and that data processing such as migration can recover some of the subsurface information but also produces spurious signals. Based on the simulation results, we conclude that the LPR data reveal a layered subsurface structure comprising a reworked zone with multiple ejecta blankets from small craters, the ejecta blanket of the Chang'E-3 crater, and a transition zone, and we estimate the thickness of the detected layer to be about 3.25 m.

  4. Erosion of Northern Hemisphere blanket peatlands under 21st-century climate change

    NASA Astrophysics Data System (ADS)

    Li, Pengfei; Holden, Joseph; Irvine, Brian; Mu, Xingmin

    2017-04-01

    Peatlands are important terrestrial carbon stores particularly in the Northern Hemisphere. Many peatlands, such as those in the British Isles, Sweden, and Canada, have undergone increased erosion, resulting in degraded water quality and depleted soil carbon stocks. It is unclear how climate change may impact future peat erosion. Here we use a physically based erosion model (Pan-European Soil Erosion Risk Assessment-PEAT), driven by seven different global climate models (GCMs), to predict fluvial blanket peat erosion in the Northern Hemisphere under 21st-century climate change. After an initial decline, total hemispheric blanket peat erosion rates are found to increase during 2070-2099 (2080s) compared with the baseline period (1961-1990) for most of the GCMs. Regional erosion variability is high with changes to baseline ranging between -1.27 and +21.63 t ha-1 yr-1 in the 2080s. These responses are driven by effects of temperature (generally more dominant) and precipitation change on weathering processes. Low-latitude and warm blanket peatlands are at most risk to fluvial erosion under 21st-century climate change.

  5. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    ERIC Educational Resources Information Center

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  6. Students' Progress throughout Examination Process as a Markov Chain

    ERIC Educational Resources Information Center

    Hlavatý, Robert; Dömeová, Ludmila

    2014-01-01

    The paper is focused on students of Mathematical Methods in Economics at the Czech University of Life Sciences (CULS) in Prague. The idea is to create a model of students' progress throughout the whole course using the Markov chain approach. Each student has to go through various stages of the course requirements where his success depends on the…

  7. Tracking Skill Acquisition with Cognitive Diagnosis Models: A Higher-Order, Hidden Markov Model with Covariates

    ERIC Educational Resources Information Center

    Wang, Shiyu; Yang, Yan; Culpepper, Steven Andrew; Douglas, Jeffrey A.

    2018-01-01

    A family of learning models that integrates a cognitive diagnostic model and a higher-order, hidden Markov model in one framework is proposed. This new framework includes covariates to model skill transition in the learning environment. A Bayesian formulation is adopted to estimate parameters from a learning model. The developed methods are…

  8. Chronic escitalopram treatment attenuated the accelerated rapid eye movement sleep transitions after selective rapid eye movement sleep deprivation: a model-based analysis using Markov chains.

    PubMed

    Kostyalik, Diána; Vas, Szilvia; Kátai, Zita; Kitka, Tamás; Gyertyán, István; Bagdy, Gyorgy; Tóthfalusi, László

    2014-11-19

    Shortened rapid eye movement (REM) sleep latency and increased REM sleep amount are presumed biological markers of depression. These sleep alterations are also observable in several animal models of depression as well as during the rebound sleep after selective REM sleep deprivation (RD). Furthermore, REM sleep fragmentation is typically associated with stress procedures and anxiety. The selective serotonin reuptake inhibitor (SSRI) antidepressants reduce REM sleep time and increase REM latency after acute dosing under normal conditions and even during REM rebound following RD. However, their therapeutic effect evolves only after weeks of treatment, and the effects of chronic treatment in REM-deprived animals have not been studied yet. Chronic escitalopram- (10 mg/kg/day, osmotic minipump for 24 days) or vehicle-treated rats were subjected to a 3-day-long RD on day 21 using the flower pot procedure or were kept in their home cage. On day 24, the fronto-parietal electroencephalogram, electromyogram, and motility were recorded in the first 2 h of the passive phase. The observed sleep patterns were characterized using standard sleep metrics, by modelling the transitions between sleep phases using Markov chains, and by spectral analysis. Based on the Markov chain analysis, chronic escitalopram treatment attenuated the REM sleep fragmentation [accelerated transition rates between REM and non-REM (NREM) stages, decreased REM sleep residence time between two transitions] during the rebound sleep. Additionally, the antidepressant prevented the frequent awakenings during the first 30 min of the recovery period. The spectral analysis showed that the SSRI prevented the RD-caused elevation in theta (5-9 Hz) power during slow-wave sleep. Conversely, based on the aggregate sleep metrics, escitalopram had only moderate effects and did not significantly attenuate the REM rebound after RD. In conclusion, chronic SSRI treatment is capable of reducing several effects on sleep that might be consequences of the sub-chronic stress caused by the flower pot method. These data may support the antidepressant activity of SSRIs and suggest that investigating the rebound period following the flower pot protocol could be useful for detecting antidepressant drug response. Markov analysis is a suitable method for studying the sleep pattern.
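
    The Markov-chain descriptives used above reduce to simple counting on the scored hypnogram; a sketch with hypothetical epoch data (stage names and the fragmentation-related bout statistics are illustrative only):

        import numpy as np
        from itertools import groupby

        stages = ["W", "NREM", "REM"]
        k = {s: i for i, s in enumerate(stages)}

        # Hypothetical hypnogram: one scored stage per epoch.
        hypnogram = ["W", "W", "NREM", "NREM", "NREM", "REM", "REM", "NREM", "W"]

        # Epoch-to-epoch transition probability matrix.
        C = np.zeros((3, 3))
        for a, b in zip(hypnogram, hypnogram[1:]):
            C[k[a], k[b]] += 1
        T = C / C.sum(axis=1, keepdims=True)

        # Residence time per stage (bout length in epochs); shorter REM bouts
        # and faster REM/NREM transition rates indicate fragmentation.
        bouts = {s: [] for s in stages}
        for stage, run in groupby(hypnogram):
            bouts[stage].append(len(list(run)))
        print(T.round(2), {s: float(np.mean(b)) for s, b in bouts.items()})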

  9. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built on real error data collected on a multiprocessor system are described. Model development from the raw error data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
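
    A quick way to see why exponential holding times fail (and a semi-Markov description becomes necessary) is to check the coefficient of variation of the observed holding times, which equals 1 for exponential data; a sketch with hypothetical durations:

        import numpy as np

        # Hypothetical holding times (seconds) observed in one operational state.
        holding = np.array([12.0, 340.0, 8.0, 5.0, 900.0, 11.0, 7.0, 15.0, 620.0, 9.0])

        cv = holding.std(ddof=1) / holding.mean()
        print(f"mean = {holding.mean():.1f} s, CV = {cv:.2f}")
        # A CV far from 1 (heavy right tail here) argues against exponential
        # holding times, i.e., for a semi-Markov rather than Markov model.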

  10. A model for wind-extension of the Copernicus ejecta blanket

    NASA Technical Reports Server (NTRS)

    Rehfuss, D. E.; Michael, D.; Anselmo, J. C.; Kincheloe, N. K.

    1977-01-01

    The interaction between crater ejecta and the transient wind from impact-shock vaporization is discussed. Based partly on Shoemaker's (1962) ballistic model of the Copernicus ejecta and partly on Rehfuss' (1972) treatment of lunar winds, a simple model is developed which indicates that if Copernicus were formed by a basaltic meteorite impacting at 20 km/s, then 3% of the ejecta mass would be sent beyond the maximum range expected from purely ballistic trajectories. That 3% mass would, however, shift the position of the outer edge of the ejecta blanket more than 400% beyond the edge of the ballistic blanket. For planetary bodies lacking an intrinsic atmosphere, the present model indicates that this form of hyperballistic transport can be very significant for small (no more than about 1 kg) ejecta fragments.

  11. Clustering Multivariate Time Series Using Hidden Markov Models

    PubMed Central

    Ghassempour, Shima; Girosi, Federico; Maeder, Anthony

    2014-01-01

    In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs, and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab, and it may be a good candidate for the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge and are therefore accessible to a wide range of researchers. PMID:24662996
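
    A sketch of the fit-then-compare pipeline for the continuous case, assuming the hmmlearn and scipy packages and synthetic trajectories (the paper's treatment of categorical variables and its exact distance definition are not reproduced here):

        import numpy as np
        from hmmlearn.hmm import GaussianHMM
        from scipy.cluster.hierarchy import linkage, fcluster

        rng = np.random.default_rng(1)
        # Hypothetical 2-D continuous trajectories, two per group.
        trajectories = [rng.normal(loc=m, size=(80, 2)) for m in (0.0, 0.1, 3.0, 3.2)]

        # Step 1: map each trajectory to its own HMM.
        models = [GaussianHMM(n_components=2, covariance_type="diag",
                              random_state=0).fit(X) for X in trajectories]

        # Step 2: symmetrized log-likelihood distance between fitted HMMs.
        n = len(trajectories)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    D[i, j] = (models[i].score(trajectories[i])
                               - models[j].score(trajectories[i])) / len(trajectories[i])
        dist = np.maximum((D + D.T) / 2.0, 0.0)

        # Step 3: cluster the HMMs from the distance matrix.
        labels = fcluster(linkage(dist[np.triu_indices(n, 1)], method="average"),
                          t=2, criterion="maxclust")
        print(labels)  # expect the two near-identical pairs to group together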

  12. Asteroid mass estimation using Markov-chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Siltala, Lauri; Granvik, Mikael

    2017-11-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions, where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
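
    The MCMC ingredient, stripped to one dimension: a random-walk Metropolis sampler for the perturber mass against hypothetical astrometric residuals (linear signal model, fixed orbital elements, made-up numbers; the real problem is at least 13-dimensional):

        import numpy as np

        rng = np.random.default_rng(2)
        times = np.linspace(0.0, 10.0, 40)
        true_mass = 5e-12                      # hypothetical, in solar masses
        obs = true_mass * 1e12 * times + rng.normal(0.0, 0.5, times.size)

        def log_post(m):
            if m <= 0.0:                       # positivity prior
                return -np.inf
            return -0.5 * np.sum((obs - m * 1e12 * times) ** 2 / 0.5 ** 2)

        chain = [4e-12]
        for _ in range(20000):
            prop = chain[-1] + rng.normal(0.0, 1e-13)   # random-walk proposal
            accept = np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1])
            chain.append(prop if accept else chain[-1])

        post = np.array(chain[5000:])
        print(f"mass = {post.mean():.3e} +/- {post.std():.1e}")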

  13. Girsanov reweighting for path ensembles and Markov state models

    NASA Astrophysics Data System (ADS)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics to changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of a path probability measure and the Girsanov theorem, a result from stochastic analysis that estimates a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
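
    For reference, the reweighting factor itself (a standard statement of the Girsanov theorem for dynamics dx_t = b(x_t) dt + sigma dW_t with perturbed drift b + Delta b and constant scalar noise; notation ours, not the paper's):

        M_T \;=\; \frac{d\tilde{\mathbb{P}}}{d\mathbb{P}}\bigg|_{[0,T]}
        \;=\; \exp\!\left( \frac{1}{\sigma}\int_0^T \Delta b(x_t)\cdot dW_t
        \;-\; \frac{1}{2\sigma^2}\int_0^T \lVert \Delta b(x_t)\rVert^2\, dt \right)

    so a perturbed path-ensemble average is recovered as \langle A \rangle_{\mathrm{pert}} = \langle A\, M_T \rangle_{\mathrm{ref}}, with the two integrals accumulated "on the fly" along each reference trajectory. For a potential perturbation U, the drift change is \Delta b = -\nabla U.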

  14. EMG-based speech recognition using hidden markov models with global control variables.

    PubMed

    Lee, Ki-Seung

    2008-03-01

    It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.

  15. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
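
    The GMR step has a closed form worth recording (the standard conditional-Gaussian identity for a joint mixture over covariates x and predictand y; notation ours):

        p(y \mid x) = \sum_k w_k(x)\, \mathcal{N}\big(y;\, m_k(x),\, C_k\big), \qquad
        w_k(x) = \frac{\pi_k\, \mathcal{N}(x;\, \mu_{k,x},\, \Sigma_{k,xx})}
                      {\sum_j \pi_j\, \mathcal{N}(x;\, \mu_{j,x},\, \Sigma_{j,xx})},

        m_k(x) = \mu_{k,y} + \Sigma_{k,yx} \Sigma_{k,xx}^{-1} (x - \mu_{k,x}), \qquad
        C_k = \Sigma_{k,yy} - \Sigma_{k,yx} \Sigma_{k,xx}^{-1} \Sigma_{k,xy}.

    The mixture weights w_k(x) vary with the covariates, which is what allows the forecast distribution to be multimodal and heteroscedastic.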

  16. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are efficient algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data, and derive the length necessary for parameter learning in hidden Markov models.

  17. Current Trends of Blanket Research and Development in Japan 4. Blanket Technology Development Using ITER for Demonstration and Commercial Fusion Power Plant

    NASA Astrophysics Data System (ADS)

    Akiba, Masato; Jitsukawa, Shiroh; Muroga, Takeo

    This paper describes the status of blanket technology and materials development for fusion power demonstration plants and commercial fusion plants. In particular, the ITER Test Blanket Module, IFMIF, JAERI/DOE HFIR, and JUPITER-II projects, which play an important role in developing these technologies, are highlighted. The ITER Test Blanket Module project has been conducted to demonstrate tritium breeding and power generation using test blanket modules, which will be installed in the ITER facility. For structural materials development, the present research status of reduced-activation ferritic steels, vanadium alloys, and SiC/SiC composites is reviewed.

  18. Thermal comfort and safety of cotton blankets warmed at 130°F and 200°F.

    PubMed

    Kelly, Patricia A; Cooper, Susan K; Krogh, Mary L; Morse, Elizabeth C; Crandall, Craig G; Winslow, Elizabeth H; Balluck, Julie P

    2013-12-01

    In 2009, the ECRI Institute recommended warming cotton blankets in cabinets set at 130°F or less. However, there is limited research to support the use of this cabinet temperature. The aims were to measure skin temperatures and thermal comfort in healthy volunteers before and after application of blankets warmed in cabinets set at 130°F and 200°F, respectively, and to determine the time-dependent cooling of cotton blankets after removal from warming cabinets set at the two temperatures. The design was prospective, comparative, and descriptive. Participants (n = 20) received one or two blankets warmed in the 130°F or 200°F cabinet. First, skin temperatures were measured and thermal comfort reports were obtained at fixed timed intervals. Second, blanket temperatures (n = 10) were measured at fixed intervals after removal from the cabinets. No skin temperatures approached levels reported in the literature to cause epidermal damage. Thermal comfort reports supported using blankets from the 200°F cabinet, and blankets lost heat quickly over time. We recommend warming cotton blankets in cabinets set at 200°F or less to improve thermal comfort without compromising patient safety.

  19. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can have benefits, while for other problems an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  20. Design of Multilayer Insulation for the Multipurpose Hydrogen Test Bed

    NASA Technical Reports Server (NTRS)

    Marlow, Weston A.

    2011-01-01

    Multilayer insulation (MLI) is a critical component for future, long term space missions. These missions will require the storage of cryogenic fuels for extended periods of time with little to no boil-off and MLI is vital due to its exceptional radiation shielding properties. Several MLI test articles were designed and fabricated which explored methods of assembling and connecting blankets, yielding results for evaluation. Insight gained, along with previous design experience, will be used in the design of the replacement blanket for the Multipurpose Hydrogen Test Bed (MHTB), which is slated for upcoming tests. Future design considerations are discussed which include mechanical testing to determine robustness of such a system, as well as cryostat testing of samples to give insight to the loss of thermal performance of sewn panels in comparison to the highly efficient, albeit laborious application of the original MHTB blanket.

  1. Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to sensitive spacecraft. This paper employs the Multilevel Fast Multipole Method (MLFMM) feature of a commercial electromagnetic tool to model the fairing electromagnetic environment in the presence of an internal transmitter. This work is an extension of the perfect electric conductor model that was used to represent the bare aluminum internal fairing cavity. This fairing model includes typical acoustic blanketing commonly used in vehicle fairings. Representative material models within FEKO were successfully used to simulate the test case.

  2. Nuclear Analysis

    NASA Technical Reports Server (NTRS)

    Clement, J. D.; Kirby, K. D.

    1973-01-01

    Exploratory calculations were performed for several gas core breeder reactor configurations. The computational method involved the use of the MACH-1 one dimensional diffusion theory code and the THERMOS integral transport theory code for thermal cross sections. Computations were performed to analyze thermal breeder concepts and nonbreeder concepts. Analysis of breeders was restricted to the (U-233)-Th breeding cycle, and computations were performed to examine a range of parameters. These parameters include U-233 to hydrogen atom ratio in the gaseous cavity, carbon to thorium atom ratio in the breeding blanket, cavity size, and blanket size.

  3. Conceptual Designing of a Reduced Moderation Pressurized Water Reactor by Use of MVP and MVP-BURN

    NASA Astrophysics Data System (ADS)

    Kugo, T.

    A conceptual design of a seed-blanket assembly PWR core with a complicated geometry and a strong heterogeneity has been carried forward by use of the continuous-energy Monte Carlo method. Through parametric survey calculations by repeated use of MVP and a lattice burn-up calculation by MVP-BURN, a seed-blanket assembly configuration suitable for the RMWR concept has been established, by precisely evaluating the reactivity, conversion ratio, and coolant void reactivity coefficient in a realistic computation time on a supercomputer.

  4. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    NASA Astrophysics Data System (ADS)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters; the number of centroids indicates the density of interactions. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis shows that R-MCL clustering groups the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
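
    A compact sketch of the flow updates involved, with a hypothetical 6-node similarity graph (plain MCL expands with M @ M; the regularized variant multiplies by the fixed normalized adjacency M_G instead, as below; parameters are illustrative):

        import numpy as np

        def colnorm(M):
            return M / M.sum(axis=0, keepdims=True)

        def r_mcl(A, r=2.0, iters=50):
            """Regularized Markov Clustering on an adjacency matrix (sketch)."""
            M_G = colnorm(A + np.eye(len(A)))   # add self-loops, column-normalize
            M = M_G.copy()
            for _ in range(iters):
                M = colnorm(M @ M_G)            # regularized expansion
                M = colnorm(M ** r)             # inflation
            return np.argmax(M, axis=0)         # nodes sharing an attractor = one cluster

        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0, 0],
                      [1, 1, 0, 1, 0, 0],
                      [0, 0, 1, 0, 1, 1],
                      [0, 0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        print(r_mcl(A))   # two attractors, one per dense group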

  5. Using Markov state models to study self-assembly

    PubMed Central

    Perkett, Matthew R.; Hagan, Michael F.

    2014-01-01

    Markov state models (MSMs) have been demonstrated to be a powerful method for computationally studying intramolecular processes such as protein folding and macromolecular conformational changes. In this article, we present a new approach to construct MSMs that is applicable to modeling a broad class of multi-molecular assembly reactions. Distinct structures formed during assembly are distinguished by their undirected graphs, which are defined by strong subunit interactions. Spatial inhomogeneities of free subunits are accounted for using a recently developed Gaussian-based signature. Simplifications to this state identification are also investigated. The feasibility of this approach is demonstrated on two different coarse-grained models for virus self-assembly. We find good agreement between the dynamics predicted by the MSMs and long, unbiased simulations, and that the MSMs can reduce overall simulation time by orders of magnitude. PMID:24907984
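
    The core MSM estimation step is counting transitions between the labelled states at a lag time and row-normalizing; a self-contained sketch on a synthetic two-state trajectory (state labels and parameters are hypothetical):

        import numpy as np

        rng = np.random.default_rng(3)

        def estimate_msm(dtraj, n_states, lag):
            """Row-stochastic transition matrix estimated at lag `lag`."""
            C = np.zeros((n_states, n_states))
            for a, b in zip(dtraj[:-lag], dtraj[lag:]):
                C[a, b] += 1
            return C / C.sum(axis=1, keepdims=True)

        def slowest_timescale(T, lag):
            """Implied timescale t2 = -lag / ln(lambda2) from the second eigenvalue."""
            ev = np.sort(np.linalg.eigvals(T).real)[::-1]
            return -lag / np.log(ev[1])

        # Synthetic discrete trajectory from a sticky two-state chain, standing
        # in for cluster-assigned assembly snapshots.
        P_true = np.array([[0.99, 0.01],
                           [0.02, 0.98]])
        s = [0]
        for _ in range(50000):
            s.append(rng.choice(2, p=P_true[s[-1]]))
        T = estimate_msm(np.asarray(s), 2, lag=5)
        print(T.round(3), slowest_timescale(T, lag=5))  # ~ -1/ln(0.97), about 33 steps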

  6. Modeling the Operating History of Turbine-Generator Units [Modélisation de l'historique d'opération de groupes turbine-alternateur]

    NASA Astrophysics Data System (ADS)

    Szczota, Mickael

    Because of their ageing fleet, utility managers increasingly need tools to help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to predict the degradation of its hydroelectric runners and to use that information to classify the generating units. This classification helps identify which generating units are most at risk of a major failure. Cracking linked to fatigue is a predominant degradation mode, and the loading sequence applied to the runner is a parameter affecting crack growth. The aim of this thesis is therefore to create a generator of synthetic loading sequences that are statistically equivalent to the observed history. These simulated sequences will be used as input to a life-assessment model. We first describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modelling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then present a comparison of three kinds of models: discrete-time Markov chains, discrete-time semi-Markov chains, and the moving block bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov chain is not suited to our case, and that semi-Markov chains perform better when they include the non-stationarity. The final choice between semi-Markov chains and the moving block bootstrap depends on the user, but with a long-term vision we recommend semi-Markov chains for their flexibility. Keywords: stochastic models, model validation, reliability, semi-Markov chains, Markov chains, bootstrap
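
    A sketch of the recommended generator class: a discrete-event semi-Markov simulation with an embedded jump chain and non-exponential (here Weibull) holding times; states, transition probabilities, and distributions are illustrative, not Hydro-Quebec's:

        import numpy as np

        rng = np.random.default_rng(4)
        states = ["stop", "run", "spin"]          # hypothetical operating states
        P = np.array([[0.0, 0.7, 0.3],            # embedded jump chain
                      [0.6, 0.0, 0.4],            # (no self-transitions)
                      [0.5, 0.5, 0.0]])
        hold = [lambda: 8.0 * rng.weibull(1.5),   # holding time in hours,
                lambda: 24.0 * rng.weibull(0.8),  # one sampler per state
                lambda: 3.0 * rng.weibull(1.2)]

        def simulate(hours):
            t, s, seq = 0.0, 0, []
            while t < hours:
                d = hold[s]()                      # draw a holding time
                seq.append((states[s], round(d, 1)))
                t += d
                s = rng.choice(3, p=P[s])          # jump via the embedded chain
            return seq

        print(simulate(200.0))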

  7. Monturaqui meteorite impact crater, Chile: A field test of the utility of satellite-based mapping of ejecta at small craters

    NASA Astrophysics Data System (ADS)

    Rathbun, K.; Ukstins, I.; Drop, S.

    2017-12-01

    Monturaqui Crater is a small (~350 m diameter), simple meteorite impact crater located in the Atacama Desert of northern Chile that was emplaced in Ordovician granite overlain by discontinuous Pliocene ignimbrite. Ejecta deposits are granite and ignimbrite, with lesser amounts of dark impact melt and rare tektites and iron shale. The impact restructured existing drainage systems in the area that have subsequently eroded through the ejecta. Satellite-based mapping and modeling, including a synthesis of photographic satellite imagery and ASTER thermal infrared imagery in ArcGIS, were used to construct a basic geological interpretation of the site with special emphasis on understanding ejecta distribution patterns. This was combined with field-based mapping to construct a high-resolution geologic map of the crater and its ejecta blanket and to field check the satellite-based geologic interpretation. The satellite- and modeling-based interpretation suggests a well-preserved crater with an intact, heterogeneous ejecta blanket that has been subjected to moderate erosion. In contrast, field mapping shows that the crater has a heavily eroded rim and ejecta blanket, and the ejecta is more heterogeneous than previously thought. In addition, the erosion rate at Monturaqui is much higher than erosion rates reported elsewhere in the Atacama Desert. The bulk compositions of the target rocks at Monturaqui are similar and the ejecta deposits are highly heterogeneous, so distinguishing between them with remote sensing is less effective than with direct field observations. In particular, the resolution of available imagery for the site is too low to resolve critical details that are readily apparent in the field on the scale of 10s of cm, and which significantly alter the geologic interpretation. The limiting factors for effective remote interpretation at Monturaqui are its target composition and crater size relative to the resolution of the remote sensing methods employed. This suggests that satellite-based mapping of ejecta may have limited utility at small craters due to limitations in source resolution compared to the geology of the site in question.

  8. A Shellcode Detection Method Based on Full Native API Sequence and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Cheng, Yixuan; Fan, Wenqing; Huang, Wei; An, Jing

    2017-09-01

    Dynamically monitoring the behavior of a program is widely used to discriminate between benign programs and malware. Such monitoring is usually based on the dynamic characteristics of a program, such as the API call sequence or API call frequency. The key innovation of this paper is to consider the full Native API sequence and use a support vector machine to detect shellcode. We also use a Markov chain to extract and digitize Native API sequence features. Our experimental results show that the method proposed in this paper achieves high accuracy with a low false-positive rate.
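
    A sketch of the feature construction plus classifier, with genuine Windows API names but entirely hypothetical traces and labels (the paper's feature digitization details are not reproduced):

        import numpy as np
        from sklearn.svm import SVC

        API = ["VirtualAlloc", "WriteProcessMemory", "CreateThread", "ReadFile"]
        k = {a: i for i, a in enumerate(API)}

        def markov_features(seq):
            """Flattened empirical transition-frequency matrix of an API trace."""
            M = np.zeros((len(API), len(API)))
            for a, b in zip(seq, seq[1:]):
                M[k[a], k[b]] += 1
            return (M / max(M.sum(), 1)).ravel()

        traces = [  # (trace, label): 0 = benign, 1 = shellcode -- hypothetical
            (["ReadFile", "ReadFile", "VirtualAlloc", "ReadFile"], 0),
            (["ReadFile", "ReadFile", "ReadFile", "ReadFile"], 0),
            (["VirtualAlloc", "WriteProcessMemory", "CreateThread", "CreateThread"], 1),
            (["VirtualAlloc", "WriteProcessMemory", "CreateThread", "VirtualAlloc"], 1),
        ]
        X = np.array([markov_features(s) for s, _ in traces])
        y = np.array([label for _, label in traces])

        clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
        test = ["VirtualAlloc", "WriteProcessMemory", "CreateThread"]
        print(clf.predict([markov_features(test)]))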

  9. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    PubMed

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.

  10. Bayesian prestack seismic inversion with a self-adaptive Huber-Markov random-field edge protection scheme

    NASA Astrophysics Data System (ADS)

    Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun

    2013-12-01

    Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, smoothing constraints such as Tikhonov regularization are usually applied. The Tikhonov method can maintain a globally smooth solution, but it blurs structure edges. In this paper we use a Huber-Markov random-field edge-protection method in the inversion of three parameters: P-velocity, S-velocity, and density. The method avoids blurring structure edges and resists noise. For each parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which acts as the vertical and lateral constraint. We use a quadratic Huber edge penalty function within layers to suppress noise and a linear one at edges to avoid blurred results. The effectiveness of our method is demonstrated by inverting synthetic data with and without noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
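
    For reference, the Huber penalty that realizes this switch (standard form; the threshold delta is a tuning parameter, notation ours):

        \rho_\delta(x) \;=\;
        \begin{cases}
          \tfrac{1}{2} x^2, & |x| \le \delta,\\[2pt]
          \delta\,|x| - \tfrac{1}{2}\delta^2, & |x| > \delta,
        \end{cases}

    quadratic for small differences between neighboring model values (smoothing noise within a layer) and linear for large ones, so genuine edges incur far less penalty than under a purely quadratic (Tikhonov) constraint.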

  11. Thin Thermal-Insulation Blankets for Very High Temperatures

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2003-01-01

    Thermal-insulation blankets of a proposed type would be exceptionally thin and would endure temperatures up to 2,100 C. These blankets were originally intended to protect components of the NASA Solar Probe spacecraft against radiant heating at its planned closest approach to the Sun (a distance of 4 solar radii). These blankets could also be used on Earth to provide thermal protection in special applications (especially in vacuum chambers) for which conventional thermal-insulation blankets would be too thick or would not perform adequately.

  12. Neutronics Design of a Thorium-Fueled Fission Blanket for LIFE (Laser Inertial Fusion-based Energy)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powers, J; Abbott, R; Fratoni, M

    The Laser Inertial Fusion-based Energy (LIFE) project at LLNL includes development of hybrid fusion-fission systems for energy generation. These hybrid LIFE engines use high-energy neutrons from laser-based inertial confinement fusion to drive a subcritical blanket of fission fuel that surrounds the fusion chamber. The fission blanket contains TRISO fuel particles packed into pebbles in a flowing bed geometry cooled by a molten salt (flibe). LIFE engines using a thorium fuel cycle provide potential improvements in overall fuel cycle performance and resource utilization compared to using depleted uranium (DU) and may minimize waste repository and proliferation concerns. A preliminary engine design with an initial loading of 40 metric tons of thorium can maintain a power level of 2000 MW(th) for about 55 years, at which point the fuel reaches an average burnup level of about 75% FIMA. Acceptable performance was achieved without using any zero-flux environment 'cooling periods' to allow 233Pa to decay to 233U; thorium undergoes constant irradiation in this LIFE engine design to minimize proliferation risks and fuel inventory. Vast reductions in end-of-life (EOL) transuranic (TRU) inventories compared to those produced by a similar uranium system suggest reduced proliferation risks. Decay heat generation in discharged fuel appears lower for a thorium LIFE engine than for a DU engine, but differences in radioactive ingestion hazard are less conclusive. Future efforts on thorium-fueled LIFE fission blanket development will include design optimization, fuel performance analysis, and further waste disposal and nonproliferation analyses.

  13. An Evaluation of a Markov Chain Monte Carlo Method for the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of the Markov Chain Monte Carlo (MCMC) procedure Gibbs sampling was considered for estimation of item parameters of the two-parameter logistic model. Data for the Law School Admission Test (LSAT) Section 6 were analyzed to illustrate the MCMC procedure. In addition, simulated data sets were analyzed using the MCMC, marginal Bayesian…

  14. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    ERIC Educational Resources Information Center

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  15. Multi-site Stochastic Simulation of Daily Streamflow with Markov Chain and KNN Algorithm

    NASA Astrophysics Data System (ADS)

    Mathai, J.; Mujumdar, P.

    2017-12-01

    A key focus of this study is to develop a method, physically consistent with the hydrologic processes, that can capture the short-term characteristics of the daily hydrograph as well as the correlation of streamflow in the temporal and spatial domains. In complex water resource systems, flow fluctuations at small time intervals require that discretisation be done at small time scales such as daily scales. Also, simultaneous generation of synthetic flows at different sites in the same basin is required. We propose a method to equip water managers with a streamflow generator within a stochastic streamflow simulation framework. The motivation for the proposed method is to generate sequences that extend beyond the variability represented in the historical record of the streamflow time series. The method has two steps: in step 1, daily flow is generated independently at each station by a two-state Markov chain, with rising-limb increments randomly sampled from a Gamma distribution and the falling limb modelled as an exponential recession; in step 2, the streamflow generated in step 1 is input to a nonparametric K-nearest-neighbor (KNN) time series bootstrap resampler. The KNN model, being data driven, does not require assumptions on the dependence structure of the time series. A major limitation of KNN-based streamflow generators is that they do not produce new values, but merely reshuffle the historical data to generate realistic streamflow sequences. Daily flow generated using the Markov chain approach, however, is capable of producing a rich variety of streamflow sequences. Furthermore, the rising and falling limbs of the daily hydrograph represent different physical processes, and hence they need to be modelled individually. Thus, our method combines the strengths of the two approaches (see the sketch below). We show the utility of the method and the improvement over the traditional KNN by simulating daily streamflow sequences at 7 locations in the Godavari River basin in India.
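
    A sketch of the step-1 generator for a single station (two-state rise/fall Markov chain, Gamma rising-limb increments, exponential recession); all parameter values are illustrative, not calibrated to the Godavari data:

        import numpy as np

        rng = np.random.default_rng(5)
        p_stay_rise, p_stay_fall = 0.4, 0.8       # two-state Markov chain
        shape, scale, k_rec = 2.0, 5.0, 0.15      # Gamma increments, recession rate

        def generate(n_days, q0=50.0):
            q, rising, flows = q0, True, []
            for _ in range(n_days):
                if rising:
                    q += rng.gamma(shape, scale)          # rising-limb increment
                    rising = rng.uniform() < p_stay_rise
                else:
                    q *= np.exp(-k_rec)                   # exponential recession
                    rising = rng.uniform() >= p_stay_fall
                flows.append(q)
            return np.array(flows)

        daily = generate(365)
        # Step 2 of the method would feed `daily` into a KNN bootstrap resampler
        # to restore the observed temporal and cross-station dependence.
        print(daily[:10].round(1))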

  16. Regression without truth with Markov chain Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-03-01

    Regression without truth (RWT) is a statistical technique for estimating the error model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution whose probability density function is known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables estimation of the joint posterior distribution of the error model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and it generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.

  17. Normal operation and maintenance safety lessons from the ITER US PbLi test blanket module program for a US FNSF and DEMO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L. C. Cadwallader; C. P. C. Wong; M. Abdou

    2014-10-01

    A leading power reactor breeding blanket candidate for a fusion demonstration power plant (DEMO) being pursued by the US fusion community is the Dual Coolant Lead Lithium (DCLL) concept. The safety hazards associated with the DCLL concept as a reactor blanket have been examined in several US design studies. These studies identify the largest radiological hazards as those associated with dust generation by plasma erosion of blanket module first walls, oxidation of blanket structures at high temperature in air or steam, inventories of tritium bred in or permeating through the ferritic steel structures of the blanket module and blanket support systems, and the 210Po and 203Hg produced in the PbLi breeder/coolant. What these studies lack is the scrutiny associated with a licensing review of the DCLL concept. Insight into this process was gained during US participation in the International Thermonuclear Experimental Reactor (ITER) Test Blanket Module (TBM) Program. In this paper we discuss the lessons learned during this activity and make safety proposals for the design of a Fusion Nuclear Science Facility (FNSF) or a DEMO that employs a lead lithium breeding blanket.

  18. TMAP: Tübingen NLTE Model-Atmosphere Package

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; Dreizler, Stefan; Rauch, Thomas

    2012-12-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) is a tool to calculate stellar atmospheres in spherical or plane-parallel geometry in hydrostatic and radiative equilibrium allowing departures from local thermodynamic equilibrium (LTE) for the population of atomic levels. It is based on the Accelerated Lambda Iteration (ALI) method and is able to account for line blanketing by metals. All elements from hydrogen to nickel may be included in the calculation with model atoms which are tailored for the aims of the user.

  19. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics.

    PubMed

    Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2012-09-25

    Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
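
    A sketch of the "multiple chains" flavor of parallelism: independent Metropolis chains on separate cores for a toy one-dimensional posterior, pooled after burn-in (within-chain parallelization requires a finer decomposition not shown here):

        import numpy as np
        from multiprocessing import Pool

        def run_chain(seed, n=50000):
            """One random-walk Metropolis chain targeting a standard normal."""
            rng = np.random.default_rng(seed)
            x, out = 0.0, np.empty(n)
            for i in range(n):
                prop = x + rng.normal(0.0, 1.0)
                # log acceptance ratio for the N(0, 1) target
                if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
                    x = prop
                out[i] = x
            return out

        if __name__ == "__main__":
            with Pool(4) as pool:
                chains = pool.map(run_chain, [10, 11, 12, 13])
            pooled = np.concatenate([c[10000:] for c in chains])
            print(pooled.mean(), pooled.var())   # ~0 and ~1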

  20. Parallel Markov chain Monte Carlo - bridging the gap to high-performance Bayesian computation in animal breeding and genetics

    PubMed Central

    2012-01-01

    Background Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Results Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Conclusions Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models, which not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs. PMID:23009363

  1. Bayesian Analysis of Biogeography when the Number of Areas is Large

    PubMed Central

    Landis, Michael J.; Matzke, Nicholas J.; Moore, Brian R.; Huelsenbeck, John P.

    2013-01-01

    Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a “data-augmentation” approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program BayArea. [ancestral area analysis; Bayesian biogeographic inference; data augmentation; historical biogeography; Markov chain Monte Carlo.] PMID:23736102

  2. Characterization of Sputtered Nickel-Titanium (NiTi) Stress and Thermally Actuated Cantilever Bimorphs Based on NiTi Shape Memory Alloy (SMA)

    DTIC Science & Technology

    2015-11-01

    …necessary anneal. Following this, a thin film of NiTi was blanket sputtered at 600 °C. This NiTi blanket layer was then wet-etch patterned using a… varying the sputter parameters during NiTi deposition, such as thickness, substrate temperature during deposition and anneal, and argon pressure during… Fig. 4: Surface texture comparison between NiTi sputtered at RT, then annealed at 600 °C, and NiTi…

  3. Updated neutronics analyses of a water cooled ceramic breeder blanket for the CFETR

    NASA Astrophysics Data System (ADS)

    Xiaokang, ZHANG; Songlin, LIU; Xia, LI; Qingjun, ZHU; Jia, LI

    2017-11-01

    The water cooled ceramic breeder (WCCB) blanket, which employs pressurized water as a coolant, is one of the breeding blanket candidates for the China Fusion Engineering Test Reactor (CFETR). The neutronics analyses needed updating because several significant modifications and improvements, including optimization of the radial build-up and a customized structure for each blanket module, have changed the neutronics performance of the WCCB blanket. A 22.5 degree toroidally symmetric torus sector 3D neutronics model containing the updated design of the WCCB blanket modules was developed for the neutronics analyses. The tritium breeding capability, nuclear heating power, radiation damage, and decay heat were calculated with the MCNP and FISPACT codes. The results show that the packing factor and 6Li enrichment of the breeder should both be no less than 0.8 to ensure tritium self-sufficiency. The nuclear heating power of the blanket under 200 MW fusion power reaches 201.23 MW. The displacements per atom per full power year (FPY) of the plasma-facing component and first wall reach 0.90 and 2.60, respectively. The peak H production rate reaches 150.79 appm/FPY and the peak He production reaches 29.09 appm/FPY in blanket module #3. The total decay heat of the blanket modules is 2.64 MW at 1 s after shutdown, and the average decay heat density reaches 11.09 kW m-3 at that time. The decay heat density of the blanket modules slowly decreases to below 10 W m-3 over more than ten years.

  4. Multivariate longitudinal data analysis with mixed effects hidden Markov models.

    PubMed

    Raffa, Jesse D; Dubin, Joel A

    2015-09-01

    Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies.

  5. Stochastic Dynamics through Hierarchically Embedded Markov Chains

    NASA Astrophysics Data System (ADS)

    Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.

    2017-02-01

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  6. Stochastic Dynamics through Hierarchically Embedded Markov Chains.

    PubMed

    Vasconcelos, Vítor V; Santos, Fernando P; Santos, Francisco C; Pacheco, Jorge M

    2017-02-03

    Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects-such as mutations in evolutionary dynamics and a random exploration of choices in social systems-including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.

  7. exocartographer: Constraining surface maps and orbital parameters of exoplanets

    NASA Astrophysics Data System (ADS)

    Farr, Ben; Farr, Will M.; Cowan, Nicolas B.; Haggard, Hal M.; Robinson, Tyler

    2018-05-01

    exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian Processes use the data to fit for the characteristic length scale of the map and enforce smooth maps.

  8. Opinions in Federated Search: University of Lugano at TREC 2014 Federated Web Search Track

    DTIC Science & Technology

    2014-11-01

    …ranking based on sentiment using the retrieval-interpolated diversification method. Keywords: federated search, resource selection, vertical selection… Federated search, also known as Distributed Information Retrieval (DIR), offers the means of simultaneously searching multiple information…

  9. Parametric State Space Structuring

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Tilgner, Marco

    1997-01-01

    Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.

  10. 75 FR 38459 - Certain Woven Electric Blankets From the People's Republic of China: Final Determination of Sales...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ... Industries (``Perfect Fit''), a U.S. importer of knitted electric blankets, submitted comments on the scope... investigation to include the following two statements: (1) ``knitted electric blankets in any form, whether... acknowledged that knitted electric blankets and electric mattress pads are not within the scope of the U.S...

  11. Ceramic insulation/multifoil composite for thermal protection of reentry spacecraft

    NASA Technical Reports Server (NTRS)

    Pitts, W. C.; Kourtides, D. A.

    1989-01-01

    A new type of insulation blanket called Composite Flexible Blanket Insulation is proposed for thermal protection of advanced spacecraft in regions where the maximum temperature is not excessive. The blanket is a composite of two proven insulation materials: ceramic insulation blankets from Space Shuttle technology and multilayer insulation blankets from spacecraft thermal control technology. A potential heatshield weight saving of up to 500 g/sq m is predicted. The concept is described; proof of concept experimental data are presented; and a spaceflight experiment to demonstrate its actual performance is discussed.

  12. Packed fluidized bed blanket for fusion reactor

    DOEpatents

    Chi, John W. H.

    1984-01-01

    A packed fluidized bed blanket for a fusion reactor providing for efficient radiation absorption for energy recovery, efficient neutron absorption for nuclear transformations, ease of blanket removal, processing and replacement, and on-line fueling/refueling. The blanket of the reactor contains a bed of stationary particles during reactor operation, cooled by a radial flow of coolant. During fueling/refueling, an axial flow is introduced into the bed in stages at various axial locations to fluidize the bed. When desired, the fluidization flow can be used to remove particles from the blanket.

  13. KSC-04pd0620

    NASA Image and Video Library

    2004-03-24

    KENNEDY SPACE CENTER, FLA. -- In the Thermal Protection System Facility, Pilar Ryan, with United Space Alliance, stitches a piece of insulation blanket for Atlantis. In the foreground is a ring inside of which the blankets will be sewn to fit in the orbiter's nose cap. The blankets consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance.

  14. A New Fire Hazard for MR Imaging Systems: Blankets-Case Report.

    PubMed

    Bertrand, Anne; Brunel, Sandrine; Habert, Marie-Odile; Soret, Marine; Jaffre, Simone; Capeau, Nicolas; Bourseul, Laetitia; Dufour-Claude, Isabelle; Kas, Aurélie; Dormont, Didier

    2018-02-01

    In this report, a case of fire in a positron emission tomography (PET)/magnetic resonance (MR) imaging system due to blanket combustion is discussed. Manufacturing companies routinely use copper fibers for blanket fabrication, and these fibers may remain within the blanket hem. By folding a blanket with these copper fibers within an MR imaging system, one can create an electrical current loop with a major risk of local excessive heating, burn injury, and fire. This hazard applies to all MR imaging systems. Hybrid PET/MR imaging systems may be particularly vulnerable to this situation, because blankets are commonly used for fluorodeoxyglucose PET to maintain a normal body temperature and to avoid fluorodeoxyglucose uptake in brown adipose tissue. © RSNA, 2017.

  15. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Field (MRF) model, while efficient linear programming is employed for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging within minutes and the second within seconds. Regarding the registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  16. Markov Jump-Linear Performance Models for Recoverable Flight Control Computers

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.

  17. Numerical solutions for patterns statistics on Markov chains.

    PubMed

    Nuel, Gregory

    2006-01-01

    We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
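
    For the simplest of the exact quantities such packages compute, little machinery is needed: in a stationary text of length n emitted by a first-order Markov source, the expected number of occurrences of a word follows directly from the stationary distribution and the transition probabilities. The two-letter source below is a made-up illustration, not an example from SPatt.

      import numpy as np

      # Made-up two-letter Markov source: states 0 = 'a', 1 = 'b'.
      P = np.array([[0.7, 0.3],
                    [0.4, 0.6]])
      # Stationary distribution: left eigenvector of P for eigenvalue 1.
      evals, evecs = np.linalg.eig(P.T)
      pi = np.real(evecs[:, np.argmax(np.real(evals))])
      pi /= pi.sum()

      def expected_count(pattern, n):
          """Expected occurrences of `pattern` in a stationary text of length n."""
          p = pi[pattern[0]]
          for a, b in zip(pattern, pattern[1:]):
              p *= P[a, b]
          return (n - len(pattern) + 1) * p

      print(expected_count([0, 1, 1], 1000))  # expected count of "abb"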

  18. 48 CFR 2913.201 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ACQUISITION PROCEDURES Actions at or Below the Micro-Purchase Threshold 2913.201 General. The Government... micro-purchase threshold. Other small purchase methods (blanket purchase agreements, third party drafts...

  19. The introduction of hydrogen bond and hydrophobicity effects into the rotational isomeric states model for conformational analysis of unfolded peptides.

    PubMed

    Engin, Ozge; Sayar, Mehmet; Erman, Burak

    2009-01-13

    Relative contributions of local and non-local interactions to the unfolded conformations of peptides are examined by using the rotational isomeric states model which is a Markov model based on pairwise interactions of torsion angles. The isomeric states of a residue are well described by the Ramachandran map of backbone torsion angles. The statistical weight matrices for the states are determined by molecular dynamics simulations applied to monopeptides and dipeptides. Conformational properties of tripeptides formed from combinations of alanine, valine, tyrosine and tryptophan are investigated based on the Markov model. Comparison with molecular dynamics simulation results on these tripeptides identifies the sequence-distant long-range interactions that are missing in the Markov model. These are essentially the hydrogen bond and hydrophobic interactions that are obtained between the first and the third residue of a tripeptide. A systematic correction is proposed for incorporating these long-range interactions into the rotational isomeric states model. Preliminary results suggest that the Markov assumption can be improved significantly by renormalizing the statistical weight matrices to include the effects of the long-range correlations.

  20. The introduction of hydrogen bond and hydrophobicity effects into the rotational isomeric states model for conformational analysis of unfolded peptides

    NASA Astrophysics Data System (ADS)

    Engin, Ozge; Sayar, Mehmet; Erman, Burak

    2009-03-01

    Relative contributions of local and non-local interactions to the unfolded conformations of peptides are examined by using the rotational isomeric states model which is a Markov model based on pairwise interactions of torsion angles. The isomeric states of a residue are well described by the Ramachandran map of backbone torsion angles. The statistical weight matrices for the states are determined by molecular dynamics simulations applied to monopeptides and dipeptides. Conformational properties of tripeptides formed from combinations of alanine, valine, tyrosine and tryptophan are investigated based on the Markov model. Comparison with molecular dynamics simulation results on these tripeptides identifies the sequence-distant long-range interactions that are missing in the Markov model. These are essentially the hydrogen bond and hydrophobic interactions that are obtained between the first and the third residue of a tripeptide. A systematic correction is proposed for incorporating these long-range interactions into the rotational isomeric states model. Preliminary results suggest that the Markov assumption can be improved significantly by renormalizing the statistical weight matrices to include the effects of the long-range correlations.

  1. Effects of stochastic interest rates in decision making under risk: A Markov decision process model for forest management

    Treesearch

    Mo Zhou; Joseph Buongiorno

    2011-01-01

    Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...

  2. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  3. Markov Analysis of Sleep Dynamics

    NASA Astrophysics Data System (ADS)

    Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.

    2009-05-01

    A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
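
    A minimal version of the estimation step described above is just transition counting: scan the hypnogram, count stage-to-stage transitions, and row-normalise. The two-stage coding and the toy hypnogram below are assumptions of this sketch (the study distinguishes more stages across 113 patients).

      import numpy as np

      # Toy hypnogram coded as integers: 0 = wake, 1 = sleep (assumed coding).
      hypnogram = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1]

      n_states = 2
      counts = np.zeros((n_states, n_states))
      for a, b in zip(hypnogram, hypnogram[1:]):
          counts[a, b] += 1

      # Row-normalise the counts to obtain the Markov transition matrix.
      T = counts / counts.sum(axis=1, keepdims=True)

      # Under a first-order Markov model, dwell times are geometric:
      # P(stay k epochs in state s) = T[s, s]**(k - 1) * (1 - T[s, s]),
      # the discrete analogue of the modified exponential forms reported.
      print(T)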

  4. Analysis of Time-Dependent Tritium Breeding Capability of Water Cooled Ceramic Breeder Blanket for CFETR

    NASA Astrophysics Data System (ADS)

    Gao, Fangfang; Zhang, Xiaokang; Pu, Yong; Zhu, Qingjun; Liu, Songlin

    2016-08-01

    Attaining tritium self-sufficiency is an important mission for the Chinese Fusion Engineering Testing Reactor (CFETR) operating on a Deuterium-Tritium (D-T) fuel cycle. It is necessary to study the tritium breeding ratio (TBR) and breeding tritium inventory variation with operation time so as to provide accurate data for dynamic modeling and analysis of the tritium fuel cycle. A water cooled ceramic breeder (WCCB) blanket is one candidate blanket concept for the CFETR. Based on the detailed 3D neutronics model of CFETR with the WCCB blanket, the time-dependent TBR and tritium surplus were evaluated by a coupling calculation of the Monte Carlo N-Particle Transport Code (MCNP) and the fusion activation code FISPACT-2007. The results indicated that the TBR and tritium surplus of the WCCB blanket were a function of operation time and fusion power due to the Li consumption in breeder and material activation. In addition, by comparison with the results calculated by using the 3D neutronics model and employing the transfer factor constant from 1D to 3D, it is noted that 1D analysis leads to an over-estimation for the time-dependent tritium breeding capability when fusion power is larger than 1000 MW. Supported by the National Magnetic Confinement Fusion Science Program of China (Nos. 2013GB108004, 2015GB108002, and 2014GB119000), and by the National Natural Science Foundation of China (No. 11175207)

  5. Conceptual approach study of a 200 watt per kilogram solar array, phase 1

    NASA Technical Reports Server (NTRS)

    Rayl, G. J.; Speight, K. M.; Stanhouse, R. W.

    1977-01-01

    Two alternative designs were studied: one a retractable rollout design and the other a nonretractable foldout configuration. For either design, an end-of-life (EOL) power of 0.79 times the beginning-of-life (BOL) power is predicted, based on one solar flare during a 3-year interplanetary mission. Both array configurations incorporate the features of flexible substrates and cover sheets. A power capacity of 10 kilowatts is achieved in a blanket area of 76 sq m with an area utilization factor of 0.8. A single array consists of two identical solar cell blankets deployed concurrently by a single, coilable longeron boom. An out-of-plane angle of 8-1/4 deg is maintained between the two blankets so that the inherent in-plane stiffness of the blankets may be used to obtain out-of-plane stiffness. This V-stiffened design results in a 67% reduction in the stiffness requirement for the boom. Since boom mass scales with stiffness, a lower requirement on boom stiffness results in a lower mass for the boom. These solar arrays are designed to be compatible with the shuttle launch environment and shuttle cargo bay size limitations.

  6. Coupling of electromagnetics and structural/fluid dynamics - application to the dual coolant blanket subjected to plasma disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, T.

    Some aspects concerning the coupling of quasi-stationary electromagnetics and the dynamics of structure and fluid are investigated. The necessary equations are given in a dimensionless form. The dimensionless parameters in these equations are used to evaluate the importance of the different coupling effects. A finite element formulation of the eddy-current damping in solid structures is developed. With this formulation, an existing finite element method (FEM) structural dynamics code is extended and coupled to an FEM eddy-current code. With this program system, the influence of the eddy-current damping on the dynamic loading of the dual coolant blanket during a centered plasma disruption is determined. The analysis proves that only in loosely fixed or soft structures will eddy-current damping considerably reduce the resulting stresses. Additionally, the dynamic behavior of the liquid metal in the blankets' poloidal channels is described with a simple two-dimensional magnetohydrodynamic approach. The analysis of the dimensionless parameters shows that for small-scale experiments, which are designed to model the coupled electromagnetic and structural/fluid dynamic effects in such a blanket, the same magnetic fields must be applied as in the real fusion device. This will be the easiest way to design experiments that produce transferable results. 10 refs., 7 figs.

  7. A new accounting system for financial balance based on personnel cost after the introduction of a DPC/DRG system.

    PubMed

    Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu

    2011-04-01

    A hospital director must estimate the revenues and expenses not only in a hospital but also in each clinical division to determine the proper management strategy. A new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS) introduced in 2003 has made the attribution of revenues and expenses for each clinical department very complicated because of the intricate involvement between the overall or blanket component and a fee-for service (FFS). Few reports have so far presented a programmatic method for the calculation of medical costs and financial balance. A simple method has been devised, based on personnel cost, for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital which contains 535 beds and 16 clinics, without using the central hospital computer system.

  8. A multi-level solution algorithm for steady-state Markov chains

    NASA Technical Reports Server (NTRS)

    Horton, Graham; Leutenegger, Scott T.

    1993-01-01

    A new iterative algorithm, the multi-level algorithm, for the numerical solution of steady state Markov chains is presented. The method utilizes a set of recursively coarsened representations of the original system to achieve accelerated convergence. It is motivated by multigrid methods, which are widely used for fast solution of partial differential equations. Initial results of numerical experiments are reported, showing significant reductions in computation time, often an order of magnitude or more, relative to the Gauss-Seidel and optimal SOR algorithms for a variety of test problems. The multi-level method is compared and contrasted with the iterative aggregation-disaggregation algorithm of Takahashi.
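
    For orientation, the single-level fixed-point iteration that such multi-level schemes accelerate can be written in a few lines; the 3-state chain below is invented, and the multi-level coarsening itself is not reproduced here.

      import numpy as np

      # Invented 3-state discrete-time chain (rows sum to one).
      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.0, 0.3, 0.7]])

      # Power iteration for the stationary vector pi = pi @ P.
      pi = np.full(P.shape[0], 1.0 / P.shape[0])
      for _ in range(100000):
          new = pi @ P
          if np.abs(new - pi).max() < 1e-12:
              break
          pi = new

      print(pi)  # stationary distribution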

  9. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    PubMed

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.

  10. Global dynamics of a stochastic neuronal oscillator

    NASA Astrophysics Data System (ADS)

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  11. Global dynamics of a stochastic neuronal oscillator.

    PubMed

    Yamanobe, Takanobu

    2013-11-01

    Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.

  12. 48 CFR 13.303-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (a) A blanket purchase agreement (BPA) is a simplified method of filling anticipated repetitive needs..., projects, or functions. Such organizations, for example, may be organized supply points, separate...

  13. Silver Teflon blanket: LDEF tray C-08

    NASA Technical Reports Server (NTRS)

    Crutcher, E. Russ; Nishimura, L. S.; Warner, K. J.; Wascher, W. W.

    1992-01-01

    A study of the Teflon blanket surface at the edge of tray C-08 illustrates the complexity of the microenvironments on the Long Duration Exposure Facility (LDEF). The distribution of particulate contaminants varied dramatically over a distance of half a centimeter (quarter of an inch) near the edge of the blanket. The geometry and optical effects of the atomic oxygen erosion varied significantly over the few centimeters where the blanket folded over the edge of the tray, resulting in a variety of orientations to the atomic oxygen flux. A very complex region of combined mechanical and atomic oxygen damage occurred where the blanket contacted the edge of the tray. A brown film deposit apparently fixed by ultraviolet light traveling by reflection through the Teflon film was conspicuous beyond the tray contact zone. Chemical and structural analysis of the surface of the brown film and beyond toward the protected edge of the blanket indicated some penetration of energetic atomic oxygen at least five millimeters past the blanket-tray contact interface.

  14. Development and validation of a Markov microsimulation model for the economic evaluation of treatments in osteoporosis.

    PubMed

    Hiligsmann, Mickaël; Ethgen, Olivier; Bruyère, Olivier; Richy, Florent; Gathon, Henry-Jean; Reginster, Jean-Yves

    2009-01-01

    Markov models are increasingly used in economic evaluations of treatments for osteoporosis. Most of the existing evaluations are cohort-based Markov models missing comprehensive memory management and versatility. In this article, we describe and validate an original Markov microsimulation model to accurately assess the cost-effectiveness of prevention and treatment of osteoporosis. We developed a Markov microsimulation model with a lifetime horizon and a direct health-care cost perspective. The patient history was recorded and was used in calculations of transition probabilities, utilities, and costs. To test the internal consistency of the model, we carried out an example calculation for alendronate therapy. Then, external consistency was investigated by comparing absolute lifetime risk of fracture estimates with epidemiologic data. For women at age 70 years, with a twofold increase in the fracture risk of the average population, the costs per quality-adjusted life-year gained for alendronate therapy versus no treatment were estimated at €9105 and €15,325, respectively, under full and realistic adherence assumptions. All the sensitivity analyses in terms of model parameters and modeling assumptions were coherent with expected conclusions and absolute lifetime risk of fracture estimates were within the range of previous estimates, which confirmed both internal and external consistency of the model. Microsimulation models present some major advantages over cohort-based models, increasing the reliability of the results and being largely compatible with the existing state of the art, evidence-based literature. The developed model appears to be a valid model for use in economic evaluations in osteoporosis.

  15. Adaptive quantification and longitudinal analysis of pulmonary emphysema with a hidden Markov measure field model.

    PubMed

    Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F

    2014-07-01

    The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols, and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied on a longitudinal data set with 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.

  16. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053
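
    The decoding step named at the end, recovering hidden location states with the Viterbi algorithm, is standard and can be sketched as follows; the matrices in the usage example are invented stand-ins for the trained transition and fuzzy-RSSI emission models.

      import numpy as np

      def viterbi(obs, log_A, log_B, log_pi):
          """Most likely hidden state path for observation indices `obs`."""
          S, T = log_A.shape[0], len(obs)
          delta = np.empty((T, S))            # best log-score ending in each state
          back = np.zeros((T, S), dtype=int)  # backpointers
          delta[0] = log_pi + log_B[:, obs[0]]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + log_A
              back[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      # Invented 2-location, 2-observation example.
      log_A = np.log([[0.8, 0.2], [0.3, 0.7]])
      log_B = np.log([[0.9, 0.1], [0.2, 0.8]])
      log_pi = np.log([0.5, 0.5])
      print(viterbi([0, 0, 1, 1], log_A, log_B, log_pi))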

  17. Multiscale hidden Markov models for photon-limited imaging

    NASA Astrophysics Data System (ADS)

    Nowak, Robert D.

    1999-06-01

    Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.

  18. Comparison of forced-air warming systems with lower body blankets using a copper manikin of the human body.

    PubMed

    Bräuer, A; English, M J M; Lorenz, N; Steinmetz, N; Perl, T; Braun, U; Weyland, W

    2003-01-01

    Forced-air warming has gained high acceptance as a measure for the prevention of intraoperative hypothermia. However, data on heat transfer with lower body blankets are not yet available. This study was conducted to determine the heat transfer efficacy of six complete lower body warming systems. Heat transfer of forced-air warmers can be described as follows: Q̇ = h · ΔT · A (Equation 1), where Q̇ = heat transfer [W], h = heat exchange coefficient [W m⁻² °C⁻¹], ΔT = temperature gradient between blanket and surface [°C], and A = covered area [m²]. We tested the following forced-air warmers in a previously validated copper manikin of the human body: (1) Bair Hugger and lower body blanket (Augustine Medical Inc., Eden Prairie, MN); (2) Thermacare and lower body blanket (Gaymar Industries, Orchard Park, NY); (3) WarmAir and lower body blanket (Cincinnati Sub-Zero Products, Cincinnati, OH); (4) Warm-Gard and lower body blanket (Luis Gibeck AB, Upplands Väsby, Sweden); (5) Warm-Gard and reusable lower body blanket (Luis Gibeck AB); and (6) WarmTouch and lower body blanket (Mallinckrodt Medical Inc., St. Louis, MO). Heat flux and surface temperature were measured with 16 calibrated heat flux transducers. Blanket temperature was measured using 16 thermocouples. ΔT was varied between -10 and +10 °C and h was determined by linear regression analysis as the slope of ΔT vs. heat flux. Mean ΔT was determined for surface temperatures between 36 and 38 °C, because similar mean skin temperatures have been found in volunteers. The area covered by the blankets was estimated to be 0.54 m². Heat transfer from the blanket to the manikin differed for surface temperatures between 36 °C and 38 °C. At a surface temperature of 36 °C the heat transfer was higher (between 13.4 W and 18.3 W) than at a surface temperature of 38 °C (8-11.5 W). The highest heat transfer was delivered by the Thermacare system (8.3-18.3 W), the lowest by the Warm-Gard system with the single-use blanket (8-13.4 W). The heat exchange coefficient varied between 12.5 W m⁻² °C⁻¹ and 30.8 W m⁻² °C⁻¹; mean ΔT varied between 1.04 °C and 2.48 °C for surface temperatures of 36 °C and between 0.50 °C and 1.63 °C for surface temperatures of 38 °C. No relevant differences in heat transfer of lower body blankets were found between the different forced-air warming systems tested. Heat transfer was lower than the heat transfer by upper body blankets tested in a previous study. However, forced-air warming systems with lower body blankets are still more effective than forced-air warming systems with upper body blankets in the prevention of perioperative hypothermia, because they cover a larger area of the body surface.

  19. Measuring the impact of final demand on global production system based on Markov process

    NASA Astrophysics Data System (ADS)

    Xing, Lizhi; Guan, Jun; Wu, Shan

    2018-07-01

    The input-output table is a comprehensive and detailed description of a national economic system, consisting of supply and demand information among various industrial sectors. Complex network theory, a method for measuring the structure of complex systems, can depict the structural properties of social and economic systems, and reveal the complicated relationships between the inner hierarchies and the external macroeconomic functions. This paper measures the globalization degree of industrial sectors on the global value chain. Firstly, it constructs inter-country input-output network models to reproduce the topological structure of the global economic system. Secondly, it regards the propagation of intermediate goods on the global value chain as a Markov process and introduces counting first-passage betweenness to quantify the added processing amount when global final demand stimulates this production system. Thirdly, it analyzes the features of globalization at both the global and country-sector levels.

  20. KSC-04pd0618

    NASA Image and Video Library

    2004-03-24

    KENNEDY SPACE CENTER, FLA. -- In the Thermal Protection System Facility, Pilar Ryan, with United Space Alliance, stitches a piece of insulation blanket for Atlantis's nose cap. The blankets consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance.

  1. NUCLEAR REACTOR

    DOEpatents

    Sherman, J.; Sharbaugh, J.E.; Fauth, W.L. Jr.; Palladino, N.J.; DeHuff, P.G.

    1962-10-23

    A nuclear reactor incorporating seed and blanket assemblies is designed. Means are provided for obtaining samples of the coolant from the blanket assemblies and for varying the flow of coolant through the blanket assemblies. (AEC)

  2. Antimicrobial usage and risk of retreatment for mild to moderate clinical mastitis cases on dairy farms following on-farm bacterial culture and selective therapy.

    PubMed

    McDougall, S; Niethammer, J; Graham, E M

    2018-03-01

    To assess antimicrobial usage for treatment of mild to moderate clinical mastitis, and risk of retreatment, following implementation of an on-farm bacterial culture system and selective therapy based on culture results, and to assess compliance with treatment decision tree protocols and the level of agreement between results from on-farm culture and laboratory-based microbiology methods. Herd owners from seven dairy herds were asked to collect milk samples from cases of mild to moderate clinical mastitis between July 2015 and May 2016. All samples were cultured on-farm using a commercially available selective medium and were also submitted for laboratory-based culture. Within sequential pairs of cows with mastitis, half were assigned to be treated without regard to culture results (Blanket group), and half were treated based on the on-farm culture results (Selective group) according to decision tree diagrams provided to the farmers. Culture results, treatments, and retreatments for clinical mastitis were recorded. The sum of the daily doses of antimicrobials used per cow, the number of retreatments and interval to first retreatment were compared between treatment groups. The geometric mean sum of daily doses for quarters assigned to the Selective (1.72 (95% CI=1.55-1.90)) group was lower than for the Blanket (2.38 (95% CI=2.17-2.60)) group (p=0.005). The percentage of cows retreated for clinical mastitis did not differ between the Selective (21.7 (95% CI=10.5-25.9)%) and Blanket (26.1 (95% CI=20.9-31.3)%) groups (p=0.13), and there was no difference between groups in the hazard that cows would be retreated within 60 days of enrolment (hazard ratio=0.82 (95% CI=0.39-1.69); p=0.59). Compliance with the treatment protocols was higher amongst quarters assigned to the Selective (199/233; 85.4%) compared with the Blanket (171/249; 68.7%) group (p<0.001), and varied between farms from 64% to 94%. The overall agreement between results from on-farm and laboratory culture was 188/331 (56.9%; kappa=0.31; p<0.001), but varied between farms from 44.7% to 88.2% (p<0.001). Use of on-farm culture with selective antimicrobial therapy resulted in approximately 25% lower antimicrobial usage, but was not associated with an increase in the proportion of cows retreated for clinical mastitis. This study has demonstrated that on-farm culture and selective therapy based on culture results can be implemented on-farm. However, farms varied in their implementation of both the treatment protocols and microbiology procedures. Where such systems are to be used on-farm, specific training and on-going monitoring are required.

  3. A model based Bayesian solution for characterization of complex damage scenarios in aerospace composite structures.

    PubMed

    Reed, H; Leckey, Cara A C; Dick, A; Harvey, G; Dobson, J

    2018-01-01

    Ultrasonic damage detection and characterization is commonly used in nondestructive evaluation (NDE) of aerospace composite components. In recent years there has been an increased development of guided wave based methods. In real materials and structures, these dispersive waves result in complicated behavior in the presence of complex damage scenarios. Model-based characterization methods utilize accurate three dimensional finite element models (FEMs) of guided wave interaction with realistic damage scenarios to aid in defect identification and classification. This work describes an inverse solution for realistic composite damage characterization by comparing the wavenumber-frequency spectra of experimental and simulated ultrasonic inspections. The composite laminate material properties are first verified through a Bayesian solution (Markov chain Monte Carlo), enabling uncertainty quantification surrounding the characterization. A study is undertaken to assess the efficacy of the proposed damage model and comparative metrics between the experimental and simulated output. The FEM is then parameterized with a damage model capable of describing the typical complex damage created by impact events in composites. The damage is characterized through a transdimensional Markov chain Monte Carlo solution, enabling a flexible damage model capable of adapting to the complex damage geometry investigated here. The posterior probability distributions of the individual delamination petals as well as the overall envelope of the damage site are determined. Copyright © 2017 Elsevier B.V. All rights reserved.
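
    The Markov chain Monte Carlo machinery referred to here reduces, in its simplest fixed-dimension form, to a random-walk Metropolis sampler over a log-posterior; the one-parameter toy posterior below is an invented stand-in, not the paper's damage model, and the transdimensional moves are not reproduced.

      import numpy as np

      def metropolis(log_post, theta0, steps=10000, scale=0.1, seed=0):
          """Random-walk Metropolis sampler for an arbitrary log-posterior."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = []
          for _ in range(steps):
              prop = theta + scale * rng.standard_normal(theta.shape)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
                  theta, lp = prop, lp_prop
              chain.append(theta.copy())
          return np.asarray(chain)

      # Invented toy: Gaussian log-likelihood around a "measured" value 2.0.
      chain = metropolis(lambda t: -0.5 * ((t[0] - 2.0) / 0.3) ** 2, [0.0])
      print(chain[2000:].mean(axis=0))  # posterior mean after burn-in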

  4. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
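
    Read literally, the objective described above is easy to write down for a toy chain: propagate the empirical distribution one step with the parameterised transition matrix and score the result against the samples. The two-state parameterisation and data below are assumptions of this sketch; by Gibbs' inequality the objective is maximised by any parameter choice whose steady state matches the empirical distribution, so for this unstructured toy the parameters are identified only up to that constraint.

      import numpy as np
      from scipy.optimize import minimize

      # Invented steady-state snapshots of a two-state system.
      samples = np.array([0] * 70 + [1] * 30)
      p_hat = np.bincount(samples, minlength=2) / len(samples)

      def neg_propagator_likelihood(theta):
          a, b = theta
          T = np.array([[1 - a, a],
                        [b, 1 - b]])     # parameterised transition matrix
          q = p_hat @ T                  # empirical distribution, propagated
          return -np.sum(p_hat * np.log(q))

      res = minimize(neg_propagator_likelihood, x0=[0.5, 0.5],
                     bounds=[(1e-6, 1 - 1e-6)] * 2)
      print(res.x)  # any (a, b) that makes p_hat stationary is optimal here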

  5. Mechanical design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design concepts for a 1000 MW (thermal) stationary power plant employing the UF6 fueled gas core breeder reactor are examined. Three design combinations were considered: a gaseous UF6 core with a solid matrix blanket, a gaseous UF6 core with a liquid blanket, and a gaseous UF6 core with a circulating blanket. Results show the gaseous UF6 core with a circulating blanket was best suited to the power plant concept.

  6. Storing and Deploying Solar Panels

    NASA Technical Reports Server (NTRS)

    Browning, D. L.; Stocker, H. M.; Kleidon, E. H.

    1982-01-01

    Like upward-drawn window shades, the solar blankets are unfurled to a length of 89 m, almost filling the opening in a 95.59-meter-square frame. When the frame is completely assembled, the solar blankets are pulled from their canisters, one by one, by an electric motor. A thin cushion sheet is rolled up with each blanket to cushion the solar cells. The sheet is taken up on a roller as the blanket is unfurled. Unrolling proceeds automatically.

  7. Magnetohydrodynamic Heat Transfer Research Related to the Design of Fusion Blankets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barleon, Leopold; Burr, Ulrich; Mack, Klaus Juergen

    2001-03-15

    Lithium or any lithium alloy like the lithium lead alloy Pb-17Li is an attractive breeder material used in blankets of fusion power reactors because it allows the breeding of tritium and, in the case of self-cooled blankets, the transfer of the heat generated within the liquid metal and the walls of the cooling ducts to an external heat exchanger. Nevertheless, this type of liquid-metal-cooled blanket, called a self-cooled blanket, requires specific design of the coolant ducts, because the interaction of the circulating fluid and the plasma-confining magnetic fields causes magnetohydrodynamic (MHD) effects, yielding completely different flow patterns compared to ordinary hydrodynamics (OHD) and significantly higher pressure drops. In contrast to OHD, MHD flows depend strongly on the electrical properties of the wall. Also, MHD flows reveal anisotropic turbulence behavior and are quite sensitive to obstacles exposed to the fluid flow. A comprehensive study of the heat transfer characteristics of free and forced convective MHD flows at fusion-relevant conditions is conducted. The general ideas of the analytical and numerical models to describe MHD heat transfer phenomena in this parameter regime are discussed. The MHD laboratory being installed, the experimental program established, and the experiments on heat transfer of free and forced convective flow being conducted are described. The theoretical results are compared to the results of a series of experiments in forced and free convective MHD flows with different wall properties, such as electrically insulating as well as electrically conducting ducts. Based on this knowledge, methods to improve the heat transfer by means of electromagnetic/mechanic turbulence promoters (TPs) or sophisticatedly arranged electrically conducting walls are discussed, experimental results are shown, and a cost-benefit analysis related to these methods is performed. Nevertheless, a few of the experimental results obtained should be highlighted: (1) The heat flux removable in rectangular electrically conducting ducts at walls parallel to the magnetic field is higher by a factor of 2 than in the slug flow model previously used in design calculations. Conditions for which this heat transfer enhancement is attainable are presented. The measured dimensionless pressure gradient coincides with the theoretical one and is constant throughout the whole Reynolds number regime investigated (Re = 10^3 to 10^5), although the flow turns from laminar to turbulent. The use of electromagnetic TPs close to the heated wall leads to no measurable increase of the heat transfer in the same Re regime as long as they do not interact with the wall-adjacent boundary layers. (2) Mechanical TPs used in an electrically insulated rectangular duct improved the heat transfer up to seven times compared to slug flow, but the pressure drop can also increase by up to 300%. In a cost-benefit analysis, the advantageous parameter regime for applying this method is determined. (3) Experiments performed in a flat box, both in a vertical and a horizontal arrangement within a horizontal magnetic field, show the expected increase of damping of the fluid motion with increasing Hartmann number M. At high M, buoyant convection is completely suppressed in the horizontal case. In the vertical setup, the fluid motion is reduced to one large vortex, reducing the heat transfer between the heated and cooled plates to pure heat conduction. From an analysis of the experimental and theoretical results, general design criteria are derived for the orientation and shape of the first wall coolant ducts of self-cooled liquid metal blankets. Methods to generate additional turbulence within the flow, which can improve the heat transfer further, are elaborated.

  8. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580

  9. A Modularized Efficient Framework for Non-Markov Time Series Estimation

    NASA Astrophysics Data System (ADS)

    Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.

    2018-06-01

    We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. This framework can therefore capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.

  10. Allele Age Under Non-Classical Assumptions is Clarified by an Exact Computational Markov Chain Approach.

    PubMed

    De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason

    2017-09-19

    Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to Ne = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
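
    The core computation named here, exact moments from a finite absorbing Markov chain, runs through the fundamental matrix N = (I - Q)^{-1}, where Q is the transient-to-transient block of the transition matrix. The 2-state toy chain below is invented; the paper applies the same linear algebra, with sparse solvers, to Wright-Fisher chains.

      import numpy as np

      # Invented transient-to-transient block of an absorbing chain.
      Q = np.array([[0.5, 0.3],
                    [0.2, 0.6]])
      I = np.eye(2)
      N = np.linalg.solve(I - Q, I)   # fundamental matrix (I - Q)^{-1}

      # Expected steps to absorption from each transient state, and its
      # variance, via the standard absorbing-chain formulas.
      t = N.sum(axis=1)
      var = (2 * N - I) @ t - t ** 2
      print(t, var)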

  11. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    PubMed

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
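
    Whatever classification scheme is used to build the event catalogue, the rejection-free KMC step itself is short: pick an event with probability proportional to its rate and advance the clock by an exponential waiting time. The rates below are placeholders, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def kmc_step(rates, t):
          """One rejection-free kinetic Monte Carlo step over `rates`."""
          rates = np.asarray(rates, dtype=float)
          total = rates.sum()
          event = rng.choice(len(rates), p=rates / total)  # rate-weighted pick
          t += -np.log(rng.random()) / total               # exponential wait
          return event, t

      event, t = kmc_step([1.0, 0.2, 0.05], t=0.0)  # placeholder rates
      print(event, t)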

  12. Radiative transfer calculated from a Markov chain formalism

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
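
    The linear-system reduction described above can be made concrete with a toy sub-stochastic scattering chain: if Q[i, j] is the probability that a photon at interaction state i scatters into state j and r[i] is the probability of escaping upward from state i, the expected visits follow from one linear solve. All numbers below are invented for illustration.

      import numpy as np

      Q = np.array([[0.4, 0.2],    # invented scattering probabilities
                    [0.3, 0.3]])   # (rows sum to < 1: absorption/escape)
      r = np.array([0.2, 0.1])     # invented upward-escape probabilities
      s = np.array([0.7, 0.3])     # distribution of first interactions

      # Expected visits v = s (I - Q)^{-1}, i.e. the summed Neumann series
      # over all scattering orders; reflectance weights visits by escape.
      v = np.linalg.solve(np.eye(2) - Q.T, s)
      print(v @ r)                 # diffusely reflected fraction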

  13. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  14. Blackjack

    DTIC Science & Technology

    2012-05-01

    astar (C++) path finding algorithms; bwaves (Fortran) simulation of blast waves in 3D transonic transient laminar viscous flow; bzip2 (C) in...; ...search based on Profile Hidden Markov Models; lbm (C) implementation of the Lattice Boltzmann Method for simulation of incompressible fluids in 3D...

  15. Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.

    PubMed

    Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi

    2016-12-01

    Segmentation of infrared (IR) ship images is always a challenging task because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation; however, it has shortcomings, such as ignoring spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements comprise two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the ship target's contour to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than existing methods, including existing FCM methods, for segmentation of IR ship images.
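    A minimal spatial-FCM sketch is given below; a 3x3 neighbourhood average of the memberships stands in for the paper's nonlocal term and MRF-based contour refinement, so it shows the generic idea rather than the proposed method itself.

```python
import numpy as np

def fcm_spatial(img, c=2, m=2.0, iters=50, spatial_p=1.0):
    """FCM on pixel intensities with a simple neighbourhood smoothing of memberships."""
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)             # standard FCM memberships
        # Spatial term: average each pixel's memberships over its 3x3 neighbourhood.
        U = u.reshape(img.shape + (c,))
        pad = np.pad(U, ((1, 1), (1, 1), (0, 0)), mode="edge")
        nb = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
        u *= nb.reshape(-1, c) ** spatial_p
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return u.argmax(axis=1).reshape(img.shape), centers

img = np.random.default_rng(1).normal(0.0, 0.1, (64, 64))
img[16:48, 16:48] += 1.0                              # bright "target" on dark sea
labels, centers = fcm_spatial(img)
```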

  16. Analysis and optimization of minor actinides transmutation blankets with regards to neutron and gamma sources

    NASA Astrophysics Data System (ADS)

    Kooyman, Timothée; Buiron, Laurent; Rimpault, Gérald

    2017-09-01

    Heterogeneous loading of minor actinides in radial blankets is a potential solution for implementing minor actinides transmutation in fast reactors. However, to compensate for the lower flux level experienced by the blankets, the fraction of minor actinides loaded in the blankets must be increased to maintain acceptable performance. This severely increases the decay heat and neutron source of the blanket assemblies, both before and after irradiation; the neutron source, for instance, grows by more than an order of magnitude. Here we implement an optimization methodology for blanket design with regard to parameters such as the local spectrum or the mass to be loaded, with the objective of minimizing the final neutron source of the spent assembly while maximizing the transmutation performance of the blankets. In a first stage, an analysis of the various contributors to the long- and short-term neutron and gamma sources is carried out; in a second stage, relevant estimators are designed for use in the effective optimization process, which is performed in the last step. A comparison with core calculations is finally done for completeness and validation purposes. It is found that the use of a moderated spectrum in the blankets can be beneficial in terms of final neutron and gamma source without harming minor actinides transmutation performance, compared with the more energetic spectrum that could be achieved using, for instance, metallic fuel. It is also confirmed that, if possible, the use of hydrides as moderating material in the blankets is a promising option to limit the total minor actinides inventory in the fuel cycle. If not, focus should be placed on an increased residence time for the blankets rather than on an increase in the acceptable neutron source for handling and reprocessing.

  17. HIV Migration Between Blood and Cerebrospinal Fluid or Semen Over Time

    PubMed Central

    Chaillon, Antoine; Gianella, Sara; Wertheim, Joel O.; Richman, Douglas D.; Mehta, Sanjay R.; Smith, David M.

    2014-01-01

    Previous studies reported associations between neuropathogenesis and human immunodeficiency virus (HIV) compartmentalization in cerebrospinal fluid (CSF) and between sexual transmission and human immunodeficiency virus type 1 (HIV) compartmentalization in semen. It remains unclear, however, how compartmentalization dynamics change over time. To address this, we used statistical methods and Bayesian phylogenetic approaches to reconstruct temporal dynamics of HIV migration between blood and CSF and between blood and the male genital tract. We investigated 11 HIV-infected individuals with paired semen and blood samples and 4 individuals with paired CSF and blood samples. Aligned partial HIV env sequences were analyzed by (1) phylogenetic reconstruction, using a Bayesian Markov-chain Monte Carlo approach; (2) evaluation of viral compartmentalization, using tree-based and distance-based methods; and (3) analysis of migration events, using a discrete Bayesian asymmetric phylogeographic approach of diffusion with Markov jump counts estimation. Finally, we evaluated potential correlates of viral gene flow across anatomical compartments. We observed bidirectional replenishment of viral compartments and asynchronous peaks of viral migration from and to blood over time, suggesting that disruption of viral compartment is transient and directionally selected. These findings imply that viral subpopulations in anatomical sites are an active part of the whole viral population and that compartmental reservoirs could have implications in future eradication studies. PMID:24302756

  18. Kullback-Leibler Divergence-Based Differential Evolution Markov Chain Filter for Global Localization of Mobile Robots.

    PubMed

    Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores

    2015-09-16

    One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
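    The asymmetric scoring idea can be illustrated with a small sketch (assumed normalisation and fabricated data, not the authors' implementation): measured and predicted range scans are normalised into distributions and compared with the Kullback-Leibler divergence, which penalises beams shortened by an occlusion differently from the reverse case.

```python
import numpy as np

def kl_fitness(z_real, z_pred, eps=1e-9):
    """D_KL(real || predicted) between normalised range scans (lower = better pose)."""
    p = np.asarray(z_real, float) + eps
    q = np.asarray(z_pred, float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

pred = np.full(8, 4.0)          # ranges expected from the map at a candidate pose
occl = pred.copy()
occl[2:4] = 1.0                 # a dynamic obstacle shortens two beams
print(kl_fitness(occl, pred), kl_fitness(pred, occl))  # note the asymmetry
```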

  19. Estimation of the biphasic property in a female's menstrual cycle from cutaneous temperature measured during sleep.

    PubMed

    Chen, Wenxi; Kitazawa, Masumi; Togawa, Tatsuo

    2009-09-01

    This paper proposes a method to estimate a woman's menstrual cycle based on the hidden Markov model (HMM). A tiny device was developed that attaches around the abdominal region to measure cutaneous temperature at 10-min intervals during sleep. The measured temperature data were encoded as a two-dimensional image (QR code, i.e., quick response code) and displayed in the LCD window of the device. A mobile phone captured the QR code image, decoded the information and transmitted the data to a database server. The collected data were analyzed by three steps to estimate the biphasic temperature property in a menstrual cycle. The key step was an HMM-based step between preprocessing and postprocessing. A discrete Markov model, with two hidden phases, was assumed to represent higher- and lower-temperature phases during a menstrual cycle. The proposed method was verified by the data collected from 30 female participants, aged from 14 to 46, over six consecutive months. By comparing the estimated results with individual records from the participants, 71.6% of 190 menstrual cycles were correctly estimated. The sensitivity and positive predictability were 91.8 and 96.6%, respectively. This objective evaluation provides a promising approach for managing premenstrual syndrome and birth control.
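    A minimal two-phase decoder in the spirit of the HMM step might look like the following; the Gaussian emissions, state means, and transition probabilities are invented placeholders rather than the fitted model.

```python
import numpy as np

def viterbi_two_phase(temps, means=(36.2, 36.7), sd=0.15, stay=0.95):
    """Decode lower (0) / higher (1) temperature phases with the Viterbi algorithm."""
    logA = np.log(np.array([[stay, 1 - stay], [1 - stay, stay]]))
    obs = np.asarray(temps, float)
    # Gaussian log-emission probability of each observation under each state.
    logB = (-0.5 * ((obs[:, None] - np.array(means)) / sd) ** 2
            - np.log(sd * np.sqrt(2 * np.pi)))
    T = len(obs)
    delta = np.log([0.5, 0.5]) + logB[0]
    back = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA      # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):          # backtrack the most likely phase path
        path[t] = back[t + 1, path[t + 1]]
    return path

print(viterbi_two_phase([36.2, 36.1, 36.3, 36.7, 36.8, 36.6, 36.7, 36.2]))
```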

  1. Spacecraft compartment venting

    NASA Astrophysics Data System (ADS)

    Scialdone, John J.

    1998-10-01

    At various times, concerns have been expressed that rapid decompressions of compartments, gas pockets, and thermal blankets during spacecraft launches may have caused pressure differentials across their walls sufficient to cause minor structural failures, separations of adhesively joined parts, ballooning, and flapping of blankets. This paper presents a closed-form equation expressing the expected pressure differential across the walls of a compartment as a function of the rate of the external pressure drop, the pressure at which the drop occurs, and the venting capability of the compartment. The pressure profiles measured inside the shrouds of several spacecraft carried by several launch vehicles, and some profiles obtained from ground vacuum systems, are included. The equation can be used to design an appropriate vent that will preclude excessive pressure differentials. Precautions and approaches needed for evaluating the expected pressures are indicated. Methods for rapidly assessing the response of a compartment to rapid external pressure drops are discussed; these are based on evaluating the compartment vent flow conductance, the volume, and the length of time over which the rapid pressure drop occurs.
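    The physics can be sketched with a single-volume venting model (an illustration of the governing balance, not the paper's closed-form equation): a compartment of volume V vents through an effective conductance C while the outside pressure falls at a constant rate, and the differential pressure approaches (V / C) * |dP/dt| in the quasi-steady limit.

```python
def max_delta_p(V=0.5, C=1.0, dpdt=-2000.0, t_end=60.0, dt=1e-3):
    """V [m^3], C [m^3/s], dpdt [Pa/s]; explicit Euler on V*dP_in/dt = -C*(P_in - P_ext)."""
    p_ext = p_in = 101325.0
    worst = 0.0
    for _ in range(int(t_end / dt)):
        p_ext = max(p_ext + dpdt * dt, 0.0)          # ambient falls during ascent
        p_in += -(C / V) * (p_in - p_ext) * dt       # compartment vents outward
        worst = max(worst, p_in - p_ext)
    return worst

print(max_delta_p(), 0.5 / 1.0 * 2000.0)  # simulated peak vs quasi-steady estimate [Pa]
```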

  3. Two Person Zero-Sum Semi-Markov Games with Unknown Holding Times Distribution on One Side: A Discounted Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando

    This paper deals with two person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown for one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount optimal pair of strategies.

  4. Theory and Applications of Weakly Interacting Markov Processes

    DTIC Science & Technology

    2018-02-03

    Moderate deviation principles for stochastic dynamical systems. Boston University, Math Colloquium, March 27, 2015. • Moderate deviation principles for ... Markov chain approximation method. Submitted. [8] E. Bayraktar and M. Ludkovski. Optimal trade execution in illiquid markets. Math. Finance, 21(4):681-701, 2011. [9] E. Bayraktar and M. Ludkovski. Liquidation in limit order books with controlled intensity. Math. Finance, 24(4):627-650, 2014. [10] P.D

  5. Evaluation of selective dry cow treatment following on-farm culture: risk of postcalving intramammary infection and clinical mastitis in the subsequent lactation.

    PubMed

    Cameron, M; McKenna, S L; MacDonald, K A; Dohoo, I R; Roy, J P; Keefe, G P

    2014-01-01

    The objective of the study was to evaluate the utility of a Petrifilm-based on-farm culture system when used to make selective antimicrobial treatment decisions on low somatic cell count cows (<200,000 cells/mL) at drying off. A total of 729 cows from 16 commercial dairy herds with a low bulk tank somatic cell count (<250,000 cells/mL) were randomly assigned to receive either blanket dry cow therapy (DCT) or Petrifilm-based selective DCT. Cows belonging to the blanket DCT group were infused with a commercial dry cow antimicrobial product and an internal teat sealant (ITS) at drying off. Using composite milk samples collected on the day before drying off, cows in the selective DCT group were treated at drying off based on the results obtained by the Petrifilm on-farm culture system with DCT + ITS (Petrifilm culture positive), or ITS alone (Petrifilm culture negative). Quarters of all cows were sampled for standard laboratory bacteriology on the day before drying off, at 3 to 4d in milk (DIM), at 5 to 18 DIM, and from the first case of clinical mastitis occurring within 120 DIM. Multilevel logistic regression was used to assess the effect of study group (blanket or selective DCT) and resulting dry cow treatment (DCT + ITS, or ITS alone) on the risk of intramammary infection (IMI) at calving and the risk of a first case of clinical mastitis between calving and 120 DIM. According to univariable analysis, no difference was observed between study groups with respect to quarter-level cure risk and new IMI risk over the dry period. Likewise, the risk of IMI at calving and the risk of clinical mastitis in the first 120 DIM was not different between quarters belonging to cows in the blanket DCT group and quarters belonging to cows in the selective DCT group. The results of this study indicate that selective DCT based on results obtained by the Petrifilm on-farm culture system achieved the same level of success with respect to treatment and prevention of IMI over the dry period as blanket DCT and did not affect the risk of clinical mastitis in the first 120 d of the subsequent lactation.

  6. 48 CFR 313.303 - Blanket purchase agreements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 313.303, Blanket purchase agreements. Federal Acquisition Regulations System, Health and Human Services.

  7. Detecting memory and structure in human navigation patterns using Markov chain models of varying order.

    PubMed

    Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus

    2014-01-01

    One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
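    One concrete instance of the order-selection machinery the paper surveys is an AIC comparison of maximum-likelihood Markov chains of increasing order; the sketch below (with a made-up click sequence) shows how the parameter count penalises longer histories.

```python
import math
from collections import defaultdict

def aic_markov(seq, k, alphabet):
    """AIC of a maximum-likelihood order-k Markov chain fitted by counting."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    ll = 0.0
    for nxt in counts.values():
        total = sum(nxt.values())
        for n in nxt.values():
            ll += n * math.log(n / total)
    n_params = len(alphabet) ** k * (len(alphabet) - 1)
    return 2 * n_params - 2 * ll            # lower is better

pages = list("ABABABCABABABC" * 30)         # toy navigation sequence
for k in (1, 2, 3, 4):
    print(k, round(aic_markov(pages, k, set(pages)), 1))
```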

  9. SOLVING THE STAND-OFF PROBLEM FOR MAGNETIZED TARGET FUSION: PLASMA STREAMS AS DISPOSABLE ELECTRODES, PLUS A LOCAL SPHERICAL BLANKET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryutov, D D; Thio, Y F

    In a fusion reactor based on the Magnetized Target Fusion approach, the permanent power supply has to deliver currents of up to a few mega-amperes to the target dropped into the reaction chamber. All the structures situated around the target will be destroyed after every pulse and have to be replaced at a frequency of 1 to 10 Hz. In this paper, an approach based on the use of a spherical blanket surrounding the target, and pulsed plasma electrodes connecting the target to the power supply, is discussed. A brief physics analysis of the processes associated with the creation of the plasma electrodes is also given.

  10. Thermal Performance of Cryogenic Multilayer Insulation at Various Layer Spacings

    NASA Technical Reports Server (NTRS)

    Johnson, Wesley Louis

    2010-01-01

    Multilayer insulation (MLI) has been shown to be the best performing cryogenic insulation system at high vacuum (less than 10^-3 torr) and is widely used on spaceflight vehicles. Over the past 50 years, many investigations into MLI have yielded a general understanding of the many variables associated with it. MLI performance has been shown to be a function of variables such as the warm boundary temperature, the number of reflector layers, the spacer material between reflectors, the interstitial gas pressure, and the interstitial gas species. Since the conduction between reflectors increases with the thickness of the spacer material, yet the radiation heat transfer is inversely proportional to the number of layers, it stands to reason that the thermal performance of MLI is a function of the number of layers per thickness, or layer density. Empirical equations derived from some of the early tests showed that the conduction term was proportional to the layer density raised to a power; this power depended on the material combination and was determined from empirical test data. Many authors have graphically shown such an optimal layer density, but none have provided data at such low densities or any method of determining this density. Keller, Cunnington, and Glassford showed MLI thermal performance as a function of layer density at high layer densities, but they did not show a minimal layer density or any data below the supposed optimal layer density. However, it was recently discovered that manipulating the derived empirical equations and taking a derivative with respect to layer density yields a solution for an optimal layer density. Various manufacturers have begun producing MLI at densities below this optimum, on the theory that increasing the distance between layers lowers the conductive heat transfer when there are no limitations on volume. By modifying the circumference of these blankets, the layer density can easily be varied. The simplest method of determining the thermal performance of MLI at cryogenic temperatures is boil-off calorimetry. Several blankets were procured and tested at various layer densities at the Cryogenics Test Laboratory at Kennedy Space Center. The densities tested covered a wide range, including the analytical minimum. Several of the blankets were tested at the same insulation thickness while the layer density was changed (thus with a different number of reflector layers). Optimizing the layer density of multilayer insulation systems for heat transfer would remove one variable from the complex method of designing such insulation systems. Additional testing was performed at various warm boundary temperatures and pressures. The testing and analysis were performed to simplify the analysis of cryogenic thermal insulation systems. This research was funded by the National Aeronautics and Space Administration's Exploration Technology Development Program's Cryogenic Fluid Management Project.
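    The derivative argument can be made concrete with a Lockheed-type empirical form (the coefficients below are placeholders, not fitted constants): for a blanket of fixed thickness t and layer density n, the total layer count is N = n*t, and setting dq/dn = 0 gives a closed-form optimal density.

```python
# Placeholder coefficients for a Lockheed-type heat-flux correlation.
Cs, Cr, p, eps = 2.4e-4, 4.9e-10, 2.63, 0.04
Th, Tc, t = 300.0, 77.0, 2.5              # boundary temperatures [K], thickness [cm]

def q(n):
    """Heat flux for layer density n [layers/cm] at fixed thickness (arbitrary units)."""
    N = n * t                                      # total number of reflector layers
    solid = Cs * n**p * (Th - Tc) / N              # spacer conduction, grows with n
    rad = Cr * eps * (Th**4.67 - Tc**4.67) / N     # radiation, falls as 1/n
    return solid + rad

# dq/dn = 0  =>  n* = (Cr*eps*(Th^4.67 - Tc^4.67) / ((p - 1)*Cs*(Th - Tc)))^(1/p)
n_star = (Cr * eps * (Th**4.67 - Tc**4.67) / ((p - 1) * Cs * (Th - Tc))) ** (1 / p)
n_grid = min((q(i / 10.0), i / 10.0) for i in range(10, 400))[1]
print(n_star, n_grid)                              # analytical vs grid-search optimum
```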

  11. Long-term morbidity, mortality, and economics of rheumatoid arthritis.

    PubMed

    Wong, J B; Ramey, D R; Singh, G

    2001-12-01

    Our objective was to estimate the morbidity, mortality, and lifetime costs of care for rheumatoid arthritis (RA). We developed a Markov model based on the Arthritis, Rheumatism, and Aging Medical Information System Post-Marketing Surveillance Program cohort, involving 4,258 consecutively enrolled RA patients who were followed up for 17,085 patient-years. Markov states of health were based on drug treatment and Health Assessment Questionnaire scores. Costs were based on resource utilization, and utilities were based on visual analog scale-based general health scores. The cohort had a mean age of 57 years, 76.4% were women, and the mean duration of disease was 11.8 years. Compared with a life expectancy of 22.0 years for the general population, this cohort had a life expectancy of 18.6 years and 11.3 quality-adjusted life years. Lifetime direct medical care costs were estimated to be $93,296. Higher costs were associated with higher disability scores. A Markov model can be used to estimate lifelong morbidity, mortality, and costs associated with RA, providing a context in which to consider the potential value of new therapies for the disease.
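    The mechanics of such a model reduce to repeated multiplication of a state-occupancy vector by a transition matrix; the states, probabilities, costs, and utilities below are invented placeholders, not the estimates from this cohort.

```python
import numpy as np

states = ["mild", "moderate", "severe", "dead"]
P = np.array([[0.85, 0.10, 0.02, 0.03],
              [0.05, 0.80, 0.10, 0.05],
              [0.00, 0.05, 0.87, 0.08],
              [0.00, 0.00, 0.00, 1.00]])        # yearly transitions; rows sum to 1
cost = np.array([2000.0, 5000.0, 9000.0, 0.0])  # $/year in each state
utility = np.array([0.80, 0.60, 0.40, 0.0])     # QALY weight of each state

dist = np.array([1.0, 0.0, 0.0, 0.0])           # cohort starts in "mild"
life_years = qalys = dollars = 0.0
for _ in range(60):                             # run until the cohort is nearly absorbed
    life_years += dist[:3].sum()
    qalys += float(dist @ utility)
    dollars += float(dist @ cost)
    dist = dist @ P                             # one Markov cycle (one year)
print(life_years, qalys, dollars)
```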

  12. Comparison of forced-air warming systems with upper body blankets using a copper manikin of the human body.

    PubMed

    Bräuer, A; English, M J M; Steinmetz, N; Lorenz, N; Perl, T; Braun, U; Weyland, W

    2002-09-01

    Forced-air warming with upper body blankets has gained wide acceptance as a measure for preventing intraoperative hypothermia. However, data on heat transfer with upper body blankets are not yet available. This study was conducted to determine the heat transfer efficacy of eight complete upper body warming systems and to gain more insight into the principles of forced-air warming. Heat transfer of forced-air warmers can be described as Q̇ = h · ΔT · A, where Q̇ is the heat flux [W], h the heat exchange coefficient [W m^-2 °C^-1], ΔT the temperature gradient between the blanket and the surface [°C], and A the covered area [m^2]. We tested eight different forced-air warming systems: (1) Bair Hugger with upper body blanket (Augustine Medical Inc., Eden Prairie, MN); (2) Thermacare with upper body blanket (Gaymar Industries, Orchard Park, NY); (3) Thermacare (Gaymar Industries) with reusable Optisan upper body blanket (Willy Rüsch AG, Kernen, Germany); (4) WarmAir with upper body blanket (Cincinnati Sub-Zero Products, Cincinnati, OH); (5) Warm-Gard with single-use upper body blanket (Luis Gibeck AB, Upplands Väsby, Sweden); (6) Warm-Gard with reusable upper body blanket (Luis Gibeck AB); (7) WarmTouch with CareDrape upper body blanket (Mallinckrodt Medical Inc., St. Louis, MO); and (8) WarmTouch with reusable MultiCover upper body blanket (Mallinckrodt Medical Inc.) on a previously validated copper manikin of the human body. Heat flux and surface temperature were measured with 11 calibrated heat flux transducers, and blanket temperature with 11 thermocouples. The temperature gradient between the blanket and the surface (ΔT) was varied between -8 and +8 °C, and h was determined by linear regression as the slope of heat flux versus ΔT. Mean ΔT was determined for surface temperatures between 36 and 38 °C, as similar mean skin surface temperatures have been found in volunteers. The covered area was estimated to be 0.35 m^2. Total heat flow from the blanket to the manikin differed across surface temperatures between 36 and 38 °C: at a surface temperature of 36 °C the heat flows were higher (4-26.6 W) than at 38 °C (2.6-18.1 W). The highest total heat flow was delivered by the WarmTouch system with the CareDrape upper body blanket (18.1-26.6 W); the lowest by the Warm-Gard system with the single-use upper body blanket (2.6-4 W). The heat exchange coefficient varied between 15.1 and 36.2 W m^-2 °C^-1, and mean ΔT varied between 0.5 and 3.3 °C. We found total heat flows of 2.6-26.6 W for forced-air warming systems with upper body blankets. However, the changes in heat balance produced by these systems are larger, as they not only transfer heat to the body but also reduce heat losses from the covered area to zero. Converting heat losses of approximately 37.8 W into heat gain results in a 40.4-64.4 W change in heat balance. The differences between the systems result from different heat exchange coefficients and different mean temperature gradients; however, the combination of a high heat exchange coefficient with a high mean temperature gradient is rare. This fact offers some room to improve these systems.
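    The estimation step (h as the regression slope of heat flux on ΔT, then total heat flow from Q̇ = h · ΔT · A) fits in a few lines; the data points below are fabricated stand-ins for the manikin measurements.

```python
import numpy as np

delta_t = np.array([-8.0, -4.0, 0.0, 4.0, 8.0])   # blanket minus surface [°C]
flux = 25.0 * delta_t + np.random.default_rng(2).normal(0.0, 5.0, 5)  # [W/m^2]

h, intercept = np.polyfit(delta_t, flux, 1)        # slope = h [W m^-2 °C^-1]
area = 0.35                                        # covered area [m^2]
mean_dt = 2.0                                      # assumed mean gradient [°C]
print("h =", round(h, 1), "=> Q =", round(h * mean_dt * area, 1), "W")
```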

  13. Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, J E; Fratoni, M; Kramer, K J

    A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.
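    The first-order propagation that one forward plus one adjoint calculation per response enables is the "sandwich rule": with a sensitivity vector S of a figure of merit to the cross sections and a relative covariance matrix C of the nuclear data, the relative variance of the response is S^T C S. The numbers below are invented for illustration.

```python
import numpy as np

S = np.array([0.6, -0.2, 0.15])          # relative sensitivities to three cross sections
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-4]]) # relative covariance of the nuclear data
rel_var = S @ C @ S                      # sandwich rule: first-order relative variance
print("relative uncertainty: %.2f%%" % (100.0 * rel_var**0.5))
```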

  14. Effect of Clustering Algorithm on Establishing Markov State Model for Molecular Dynamics Simulations.

    PubMed

    Li, Yan; Dong, Zigang

    2016-06-27

    Recently, the Markov state model has been applied for kinetic analysis of molecular dynamics simulations. However, discretization of the conformational space remains a primary challenge in model building, and it is not clear how the space decomposition by distinct clustering strategies exerts influence on the model output. In this work, different clustering algorithms are employed to partition the conformational space sampled in opening and closing of fatty acid binding protein 4 as well as inactivation and activation of the epidermal growth factor receptor. Various classifications are achieved, and Markov models are set up accordingly. On the basis of the models, the total net flux and transition rate are calculated between two distinct states. Our results indicate that geometric and kinetic clustering perform equally well. The construction and outcome of Markov models are heavily dependent on the data traits. Compared to other methods, a combination of Bayesian and hierarchical clustering is feasible in identification of metastable states.
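    The pipeline whose discretization stage the study varies reduces to a few lines once a clustering is chosen; here a trivial one-dimensional binning stands in for the geometric, kinetic, Bayesian, and hierarchical algorithms being compared.

```python
import numpy as np

def build_msm(traj, n_states=4, lag=10):
    """Cluster frames, count lagged transitions, row-normalise to a transition matrix."""
    edges = np.quantile(traj, np.linspace(0, 1, n_states + 1)[1:-1])
    labels = np.digitize(traj, edges)              # stand-in "clustering" by binning
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a, b] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
traj = np.cumsum(rng.normal(0.0, 0.1, 20000)) % 4.0   # toy reaction coordinate
print(build_msm(traj))
```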

  15. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a low convergence rate: a simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
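    The thermodynamic idea fits in a short sketch: for a conjugate toy model, each power posterior can be sampled exactly (standing in for an MCMC run at that heating coefficient), the expected log-likelihood is integrated over the coefficient, and the result can be checked against the exact marginal likelihood. All model choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, tau, n = 1.0, 2.0, 50
y = rng.normal(1.0, sigma, n)               # data; prior on the mean is N(0, tau^2)

def log_lik(mu):
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum((y - mu)**2) / (2 * sigma**2)

# Thermodynamic integration: log m(y) = integral over beta of E_beta[log L],
# where E_beta is taken under the power posterior proportional to L^beta * prior.
betas = np.linspace(0.0, 1.0, 21)
means = []
for b in betas:
    prec = 1.0 / tau**2 + b * n / sigma**2  # conjugate power posterior is Gaussian
    mu_b, sd_b = (b * y.sum() / sigma**2) / prec, prec**-0.5
    draws = rng.normal(mu_b, sd_b, 5000)    # exact draws stand in for an MCMC run
    means.append(np.mean([log_lik(m) for m in draws]))
means = np.array(means)
log_ml_ti = float(np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2.0))

# Exact marginal likelihood of this conjugate model, for comparison.
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
sign, logdet = np.linalg.slogdet(cov)
log_ml_exact = -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))
print(log_ml_ti, log_ml_exact)
```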

  16. KSC-04pd0615

    NASA Image and Video Library

    2004-03-24

    KENNEDY SPACE CENTER, FLA. -- A closeup of the stitching being done on pieces of insulation blankets inside the ring that fits in the nose cap of Discovery. The blankets consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance.

  17. Space Station Freedom solar array containment box mechanisms

    NASA Technical Reports Server (NTRS)

    Johnson, Mark E.; Haugen, Bert; Anderson, Grant

    1994-01-01

    Space Station Freedom will feature six large solar arrays, called solar array wings, built by Lockheed Missiles & Space Company under contract to Rockwell International, Rocketdyne Division. Solar cells are mounted on flexible substrate panels which are hinged together to form a 'blanket.' Each wing comprises two blankets supported by a central mast, producing approximately 32 kW of power at beginning-of-life. During launch, the blankets are fan-folded and compressed to 1.5 percent of their deployed length into containment boxes. This paper describes the main containment box mechanisms designed to protect, deploy, and retract the solar array blankets: the latch, blanket restraint, tension, and guidewire mechanisms.

  18. KSC-04pd0624

    NASA Image and Video Library

    2004-03-25

    KENNEDY SPACE CENTER, FLA. -- Damon Petty, with United Space Alliance, removes a piece of insulation blanket from an “oven” after heat cleaning. The blankets fit inside the nose cap of an orbiter. They consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches.

  19. KSC-04pd0626

    NASA Image and Video Library

    2004-03-25

    KENNEDY SPACE CENTER, FLA. -- Damon Petty, with United Space Alliance, covers another insulation blanket in the “oven” prior to heat cleaning. The blankets fit inside the nose cap of an orbiter. They consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches.

  20. KSC-04pd0623

    NASA Image and Video Library

    2004-03-25

    KENNEDY SPACE CENTER, FLA. -- Damon Petty, with United Space Alliance, places pieces of insulation blanket into an “oven” for heat cleaning. The blankets fit inside the nose cap of an orbiter. They consist of layered, pure silica felt sandwiched between a layer of silica fabric (the hot side) and a layer of S-Glass fabric. The blanket is through-stitched with pure silica thread in a 1-inch grid pattern. After fabrication, the blanket is bonded directly to the vehicle structure and finally coated with a high purity silica coating that improves erosion resistance. The blankets are semi-rigid and can be made as large as 30 inches by 30 inches.
