Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
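For readers who want to experiment with this kind of transformation, the sketch below computes rank-dependent decision weights from outcome probabilities. It assumes the Tversky-Kahneman (1992) weighting function with their estimated gamma for gains; the paper itself does not state which weighting function or parameter values it used.

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cumulative_decision_weights(probs, gamma=0.61):
    """Decision weights for outcomes ranked best to worst:
    pi_i = w(p_1 + ... + p_i) - w(p_1 + ... + p_{i-1})."""
    cum = np.cumsum(probs)
    w = tk_weight(cum, gamma)
    return np.diff(np.concatenate(([0.0], w)))

# Hypothetical three-outcome gamble, probabilities ranked best to worst
probs = np.array([0.1, 0.6, 0.3])
print(cumulative_decision_weights(probs))  # weights sum to w(1) = 1
```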
Markov chain decision model for urinary incontinence procedures.
Kumar, Sameer; Ghildayal, Nidhi; Ghildayal, Neha
2017-03-13
Purpose Urinary incontinence (UI) is a common chronic health condition and a problem specifically among elderly women that negatively impacts quality of life. However, UI is usually viewed as a likely result of old age and, as such, is generally not evaluated or managed appropriately. Many treatments are available to manage incontinence, ranging from bladder training to surgical procedures such as Burch colposuspension and the Sling, which have high success rates. The purpose of this paper is to analyze which of these popular surgical procedures for UI is more effective. Design/methodology/approach This research employs randomized, prospective studies to obtain robust cost and utility data used in a Markov chain decision model for examining which of these surgical interventions is more effective in treating women with stress UI, based on two measures: number of quality-adjusted life years (QALYs) and cost per QALY. TreeAge Pro Healthcare software was employed in the Markov decision analysis. Findings Results showed the Sling procedure to be a more effective surgical intervention than the Burch. However, if persistent incontinence is assigned a utility greater than the value at which both procedures are equally effective, the Burch procedure is more effective than the Sling. Originality/value This paper demonstrates the efficacy of a Markov chain decision modeling approach for comparative effectiveness analysis of available treatments for patients with UI, an important public health issue that is widely prevalent among elderly women in developed and developing countries. This research also improves upon other analyses using a Markov chain decision modeling process to analyze various strategies for treating UI.
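The QALY and cost-per-QALY comparison described here can be reproduced in miniature with a Markov cohort simulation. The sketch below uses a hypothetical three-state model with placeholder transition probabilities, utilities, and costs; none of the numbers come from the study.

```python
import numpy as np

# Illustrative three-state cohort model: continent, incontinent, dead.
# All inputs are hypothetical placeholders, not values from the study.
P = np.array([[0.90, 0.07, 0.03],     # rows: from-state, columns: to-state
              [0.10, 0.85, 0.05],
              [0.00, 0.00, 1.00]])
utility = np.array([0.95, 0.75, 0.0])  # per-cycle QALY weights
cost = np.array([100.0, 800.0, 0.0])   # per-cycle costs

def run_cohort(P, utility, cost, cycles=40, disc=0.03):
    state = np.array([1.0, 0.0, 0.0])  # entire cohort starts continent
    qaly = total_cost = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + disc) ** t   # discount factor for cycle t
        qaly += df * state @ utility
        total_cost += df * state @ cost
        state = state @ P              # one Markov cycle
    return qaly, total_cost

q, c = run_cohort(P, utility, cost)
print(f"QALYs={q:.2f}, cost={c:.0f}, cost per QALY={c / q:.0f}")
```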
Williams, Claire; Lewsey, James D; Mackay, Daniel F; Briggs, Andrew H
2017-05-01
Modeling of clinical-effectiveness in a cost-effectiveness analysis typically involves some form of partitioned survival or Markov decision-analytic modeling. The health states progression-free, progression and death and the transitions between them are frequently of interest. With partitioned survival, progression is not modeled directly as a state; instead, time in that state is derived from the difference in area between the overall survival and the progression-free survival curves. With Markov decision-analytic modeling, a priori assumptions are often made with regard to the transitions rather than using the individual patient data directly to model them. This article compares a multi-state modeling survival regression approach to these two common methods. As a case study, we use a trial comparing rituximab in combination with fludarabine and cyclophosphamide v. fludarabine and cyclophosphamide alone for the first-line treatment of chronic lymphocytic leukemia. We calculated mean Life Years and QALYs that involved extrapolation of survival outcomes in the trial. We adapted an existing multi-state modeling approach to incorporate parametric distributions for transition hazards, to allow extrapolation. The comparison showed that, due to the different assumptions used in the different approaches, a discrepancy in results was evident. The partitioned survival and Markov decision-analytic modeling deemed the treatment cost-effective with ICERs of just over £16,000 and £13,000, respectively. However, the results with the multi-state modeling were less conclusive, with an ICER of just over £29,000. This work has illustrated that it is imperative to check whether assumptions are realistic, as different model choices can influence clinical and cost-effectiveness results.
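The partitioned-survival idea mentioned above, deriving time in the progression state from the area between the overall-survival and progression-free curves, reduces to a numerical integration. A minimal sketch, using hypothetical exponential curves rather than the trial's data:

```python
import numpy as np

def time_in_progression(t, os_surv, pfs_surv):
    """Partitioned-survival estimate of mean time in the progression state:
    the area between the overall-survival and progression-free curves."""
    return np.trapz(os_surv - pfs_surv, t)

# Hypothetical monthly survival curves (exponential, for illustration only)
t = np.linspace(0, 120, 121)       # months
os_surv = np.exp(-0.02 * t)        # overall survival S_OS(t)
pfs_surv = np.exp(-0.05 * t)       # progression-free survival S_PFS(t)
print(f"Mean months in progression: {time_in_progression(t, os_surv, pfs_surv):.1f}")
```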
The application of Markov decision process with penalty function in restaurant delivery robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Hu, Zhen; Wang, Ying
2017-05-01
A restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going. The path planned by the traditional Markov decision process algorithm is not safe: the robot passes very close to the tables and chairs. To solve this problem, this paper proposes the Markov decision process with a penalty term, called the MDPPT path planning algorithm, building on the traditional Markov decision process (MDP). Under the MDP, if the restaurant delivery robot bumps into an obstacle, the reward it receives is simply the current state's reward. Under the MDPPT, the reward it receives includes not only the current state's reward but also a negative constant term. Simulation results show that the MDPPT algorithm can plan a more secure path.
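A minimal illustration of the penalty-term idea: standard value iteration on a toy corridor where entering the obstacle cell adds a negative constant to the reward, as the MDPPT does. The geometry, rewards, and penalty value are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy 1-D corridor: states 0..4, state 2 holds an obstacle (a chair).
# Hypothetical geometry and rewards; the penalty mirrors the paper's idea
# of adding a negative constant when the robot would hit an obstacle.
N, GOAL, OBSTACLE = 5, 4, 2
PENALTY = -5.0                      # extra negative constant term (MDPPT)
gamma, actions = 0.95, (-1, +1)     # discount; move left / move right

def reward(s_next):
    r = 1.0 if s_next == GOAL else -0.04   # step cost, goal bonus
    if s_next == OBSTACLE:
        r += PENALTY                       # MDPPT-style collision penalty
    return r

V = np.zeros(N)
for _ in range(200):                # value iteration to convergence
    V_new = V.copy()
    for s in range(N):
        if s == GOAL:
            continue
        q = [reward(min(max(s + a, 0), N - 1))
             + gamma * V[min(max(s + a, 0), N - 1)] for a in actions]
        V_new[s] = max(q)
    V = V_new
print(np.round(V, 2))  # the value dip at state 2 steers paths away from it
```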
Decentralized learning in Markov games.
Vrancx, Peter; Verbeeck, Katja; Nowé, Ann
2008-08-01
Learning automata (LA) were recently shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of the LA theory is that a set of decentralized independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games--a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that under the same ergodic assumptions of the original theorem, the extended algorithm will converge to a pure equilibrium point between agent policies.
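As background, a single learning automaton of the kind referenced here updates its action probabilities from a scalar reward signal. The sketch below implements the classic linear reward-inaction update; the paper's actual multi-automata construction for Markov games is more involved.

```python
import random

def lri_update(p, chosen, beta, lam=0.1):
    """Linear reward-inaction (L_R-I) update for one learning automaton.
    p: action probabilities; chosen: index played; beta: reward in [0, 1]."""
    return [pi + lam * beta * (1 - pi) if i == chosen else pi * (1 - lam * beta)
            for i, pi in enumerate(p)]

# One automaton adapting in a toy 2-action environment (hypothetical feedback)
random.seed(0)
p = [0.5, 0.5]
for _ in range(1000):
    a = random.choices([0, 1], weights=p)[0]
    beta = 1.0 if a == 1 else 0.2        # action 1 is rewarded more often
    p = lri_update(p, a, beta)
print([round(x, 3) for x in p])           # converges toward the better action
```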
Mo Zhou; Joseph Buongiorno
2011-01-01
Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of the stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
Decentralized control of Markovian decision processes: Existence of sigma-admissible policies
NASA Technical Reports Server (NTRS)
Greenland, A.
1980-01-01
The problem of formulating and analyzing Markov decision models having decentralized information and decision patterns is examined. Included are basic examples as well as the mathematical preliminaries needed to understand Markov decision models and, further, to superimpose decentralized decision structures on them. The notion of a variance admissible policy for the model is introduced, and it is proved that there exist (possibly nondeterministic) optimal policies from the class of variance admissible policies. Directions for further research are explored.
Markov Chains for Investigating and Predicting Migration: A Case from Southwestern China
NASA Astrophysics Data System (ADS)
Qin, Bo; Wang, Yiyu; Xu, Haoming
2018-03-01
In order to accurately predict the population's happiness, this paper conducted two demographic surveys in a new district of a city in western China and carried out a dynamic analysis using related mathematical methods. The paper argues that the migration of migrants within the city will change the pattern of spatial distribution of human resources and thus affect social and economic development in all districts. Because the migration status of the population changes randomly with the passage of time, it can be predicted and analyzed through a Markov process. The Markov process provides the local government and decision-making bureaus a valid basis for dynamic analysis of the mobility of migrants in the city, as well as ways of promoting the happiness of local people's lives.
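The forecasting step in such a study amounts to propagating the current distribution through the estimated transition matrix. A sketch with an invented three-district matrix, not the survey data:

```python
import numpy as np

# Hypothetical annual transition matrix between three districts, of the kind
# that would be estimated from two successive surveys.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.20, 0.75]])
x0 = np.array([0.5, 0.3, 0.2])      # current population shares

x = x0
for year in range(1, 6):            # n-step forecast: x_n = x_0 P^n
    x = x @ P
    print(f"year {year}: {np.round(x, 3)}")
```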
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Peng, Zhihang; Bao, Changjun; Zhao, Yang; Yi, Honggang; Xia, Letian; Yu, Hao; Shen, Hongbing; Chen, Feng
2010-05-01
This paper first applies the sequential cluster method to set up a classification standard for infectious disease incidence states, reflecting the many uncertainty characteristics of the incidence course. The paper then presents a weighted Markov chain, a method used to predict the future incidence state. Because infectious disease incidence is a dependent stochastic variable, the method takes the standardized autocorrelation coefficients as weights. It also analyzes the characteristics of infectious disease incidence via the Markov chain Monte Carlo method to optimize the long-term benefit of decisions. Our method is successfully validated on existing incidence data for infectious diseases in Jiangsu Province. In summary, this paper proposes ways to improve the accuracy of the weighted Markov chain, specifically in the field of infection epidemiology.
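A sketch of one common formulation of weighted Markov chain prediction, reading the standardized coefficients as normalized absolute autocorrelations of the state series used to weight k-step transition rows. The incidence series and transition matrix below are hypothetical, and this is my reading of the method rather than the authors' exact implementation.

```python
import numpy as np

def autocorr(x, k):
    x = np.asarray(x, float) - np.mean(x)
    return (x[:-k] @ x[k:]) / (x @ x)

def weighted_markov_predict(states, P, max_lag=3):
    """Combine k-step transition rows from the states observed k steps ago,
    weighted by normalized absolute autocorrelation coefficients."""
    r = np.array([abs(autocorr(states, k)) for k in range(1, max_lag + 1)])
    w = r / r.sum()
    probs = sum(w[k - 1] * np.linalg.matrix_power(P, k)[states[-k]]
                for k in range(1, max_lag + 1))
    return probs.argmax(), probs

# Hypothetical 3-state incidence series (0=low, 1=medium, 2=high) and matrix
states = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1]
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
state, probs = weighted_markov_predict(states, P)
print(state, np.round(probs, 3))
```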
Symbolic Heuristic Search for Factored Markov Decision Processes
NASA Technical Reports Server (NTRS)
Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.
2003-01-01
We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.
Scalable approximate policies for Markov decision process models of hospital elective admissions.
Zhu, George; Lizotte, Dan; Hoey, Jesse
2014-05-01
To demonstrate the feasibility of using stochastic simulation methods for the solution of a large-scale Markov decision process model of on-line patient admissions scheduling. The problem of admissions scheduling is modeled as a Markov decision process in which the states represent numbers of patients using each of a number of resources. We investigate current state-of-the-art real-time planning methods to compute solutions to this Markov decision process. Due to the complexity of the model, traditional model-based planners are limited in scalability, since they require an explicit enumeration of the model dynamics. To overcome this challenge, we apply sample-based planners along with efficient simulation techniques that, given an initial start state, generate an action on demand while avoiding portions of the model that are irrelevant to the start state. We also propose a novel variant of a popular sample-based planner that is particularly well suited to the elective admissions problem. Results show that the stochastic simulation methods allow for the problem size to be scaled by a factor of almost 10 in the action space, and exponentially in the state space. We have demonstrated our approach on a problem with 81 actions, four specialities and four treatment patterns, and shown that we can generate solutions that are near-optimal in about 100 s. Sample-based planners are a viable alternative to state-based planners for large Markov decision process models of elective admissions scheduling.
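To make the idea of generating an action on demand concrete, the sketch below implements a generic Monte-Carlo rollout planner against a generative simulation model. It is a stand-in in the spirit of the sample-based planners the paper investigates, not the authors' specific algorithm, and the toy ward model is invented.

```python
import random

def rollout_action(state, actions, simulate, reward, depth=15, n_samples=200,
                   gamma=0.99):
    """Estimate each action's value by sampling trajectories from a
    generative model, then act greedily (generic rollout planning)."""
    def rollout(s, d):
        total, disc = 0.0, 1.0
        for _ in range(d):
            s = simulate(s, random.choice(actions))   # random default policy
            total += disc * reward(s)
            disc *= gamma
        return total

    def q(a):
        vals = [reward(s1) + gamma * rollout(s1, depth)
                for s1 in (simulate(state, a) for _ in range(n_samples))]
        return sum(vals) / n_samples

    return max(actions, key=q)

# Toy single-resource ward (hypothetical): state = occupied beds out of 20
CAPACITY = 20
simulate = lambda s, admit: max(0, min(CAPACITY, s + admit
                                       - sum(random.random() < 0.1
                                             for _ in range(s))))
reward = lambda s: 0.5 * s - (5.0 if s >= CAPACITY else 0.0)  # use vs overflow
random.seed(7)
print(rollout_action(state=18, actions=(0, 1, 2), simulate=simulate,
                     reward=reward))
```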
Joseph Buongiorno
2001-01-01
Faustmann's formula gives the land value, or the forest value of land with trees, under deterministic assumptions regarding future stand growth and prices, over an infinite horizon. Markov decision process (MDP) models generalize Faustmann's approach by recognizing that future stand states and prices are known only as probabilistic distributions. The...
Operations and support cost modeling using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit
1989-01-01
Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support phase (OS). Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
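One standard way to carry out such an analysis treats retirement as an absorbing state and uses the fundamental matrix to accumulate expected visit costs. A sketch with invented states and costs; it does not reproduce the report's actual HSTV model.

```python
import numpy as np

# Hypothetical OS states: 0=operate, 1=scheduled maintenance, 2=unscheduled
# repair, with an absorbing state 3=retired. Q is the transient block.
Q = np.array([[0.80, 0.12, 0.06],   # from operate
              [0.90, 0.05, 0.03],   # from maintenance
              [0.85, 0.05, 0.05]])  # from repair
c = np.array([1.0, 4.0, 15.0])      # cost per state visit (illustrative units)

# Fundamental matrix N = (I - Q)^-1 gives expected visits to each transient
# state before retirement; N @ c is the expected total OS cost.
N = np.linalg.inv(np.eye(3) - Q)
expected_cost = N @ c
print(f"Expected OS cost starting from 'operate': {expected_cost[0]:.1f}")
```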
Markov decision processes in natural resources management: observability and uncertainty
Williams, Byron K.
2015-01-01
The breadth and complexity of stochastic decision processes in natural resources presents a challenge to analysts who need to understand and use these approaches. The objective of this paper is to describe a class of decision processes that are germane to natural resources conservation and management, namely Markov decision processes, and to discuss applications and computing algorithms under different conditions of observability and uncertainty. A number of important similarities are developed in the framing and evaluation of different decision processes, which can be useful in their applications in natural resources management. The challenges attendant to partial observability are highlighted, and possible approaches for dealing with it are discussed.
Wali, Arvin R; Brandel, Michael G; Santiago-Dieppa, David R; Rennert, Robert C; Steinberg, Jeffrey A; Hirshman, Brian R; Murphy, James D; Khalessi, Alexander A
2018-05-01
OBJECTIVE Markov modeling is a clinical research technique that allows competing medical strategies to be mathematically assessed in order to identify the optimal allocation of health care resources. The authors present a review of the recently published neurosurgical literature that employs Markov modeling and provide a conceptual framework with which to evaluate, critique, and apply the findings generated from health economics research. METHODS The PubMed online database was searched to identify neurosurgical literature published from January 2010 to December 2017 that had utilized Markov modeling for neurosurgical cost-effectiveness studies. Included articles were then assessed with regard to year of publication, subspecialty of neurosurgery, decision analytical techniques utilized, and source information for model inputs. RESULTS A total of 55 articles utilizing Markov models were identified across a broad range of neurosurgical subspecialties. Sixty-five percent of the papers were published within the past 3 years alone. The majority of models derived health transition probabilities, health utilities, and cost information from previously published studies or publicly available information. Only 62% of the studies incorporated indirect costs. Ninety-three percent of the studies performed a 1-way or 2-way sensitivity analysis, and 67% performed a probabilistic sensitivity analysis. A review of the conceptual framework of Markov modeling and an explanation of the different terminology and methodology are provided. CONCLUSIONS As neurosurgeons continue to innovate and identify novel treatment strategies for patients, Markov modeling will allow for better characterization of the impact of these interventions on a patient and societal level. The aim of this work is to equip the neurosurgical readership with the tools to better understand, critique, and apply findings produced from cost-effectiveness research.
Olariu, Elena; Cadwell, Kevin K; Hancock, Elizabeth; Trueman, David; Chevrou-Severac, Helene
2017-01-01
Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost-effectiveness analysis in the decision-making processes of HTA bodies and other medical decision-makers, there is a need for additional guidance to inform a more consistent approach to decision-analytic modeling. Further research should be done to develop more detailed guidelines on the estimation of transition probabilities.
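The one piece of widely accepted guidance in this area, reflected in the ISPOR taskforce material the authors cite, is the conversion between constant rates and cycle probabilities. A minimal sketch, valid only under the constant-rate (exponential) assumption:

```python
import math

def rate_to_prob(rate, t=1.0):
    """Convert a constant event rate to a transition probability over a
    cycle of length t, assuming exponentially distributed event times."""
    return 1.0 - math.exp(-rate * t)

def prob_to_rate(p, t=1.0):
    """Inverse conversion, e.g. to rescale a probability to a new cycle."""
    return -math.log(1.0 - p) / t

# Rescale a 5-year probability of 0.20 to a 1-year Markov cycle
r = prob_to_rate(0.20, t=5.0)
print(f"annual probability: {rate_to_prob(r, t=1.0):.4f}")
```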
Using Markov Decision Processes with Heterogeneous Queueing Systems to … Policies
Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio
2017-03-23
Thesis presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology. Distribution Statement A: approved for public release; distribution unlimited.
A methodology for stochastic analysis of share prices as Markov chains with finite states.
Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey
2014-01-01
Price volatilities make stock investments risky, leaving investors in a critical position when uncertain decisions are made. To improve investors' confidence in evaluating exchange markets without using time series methodology, we specify equity price changes as a stochastic process assumed to possess Markov dependency, with the respective state transition probability matrices following the identified state space (i.e., decrease, stable, or increase). We established that the identified states communicate, and that the chains are aperiodic and ergodic, thus possessing limiting distributions. We developed a methodology for determining the expected mean return time for stock price increases and also established criteria for improving investment decisions based on the highest transition probabilities, lowest mean return time, and highest limiting distributions. We further developed an R algorithm for running the methodology introduced. The established methodology is applied to selected equities from Ghana Stock Exchange weekly trading data.
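The quantities this methodology relies on, the limiting distribution and the mean return times, can be computed directly from the transition matrix. The abstract describes an R implementation; the sketch below shows the same computation in Python with an invented three-state matrix, using the standard identity that the mean recurrence time of state i is 1/pi_i.

```python
import numpy as np

def stationary_distribution(P):
    """Limiting distribution pi solving pi P = pi with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical weekly price-change chain: decrease, stable, increase
P = np.array([[0.30, 0.30, 0.40],
              [0.25, 0.40, 0.35],
              [0.35, 0.25, 0.40]])
pi = stationary_distribution(P)
mean_return = 1.0 / pi              # expected return time to each state
print(np.round(pi, 3), np.round(mean_return, 2))
```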
de Geus, S W L; Evans, D B; Bliss, L A; Eskander, M F; Smith, J K; Wolff, R A; Miksad, R A; Weinstein, M C; Tseng, J F
2016-10-01
Neoadjuvant therapy is gaining acceptance as a valid treatment option for borderline resectable pancreatic cancer; however, its value for clearly resectable pancreatic cancer remains controversial. The aim of this study was to use a Markov decision analysis model, in the absence of adequately powered randomized trials, to compare the life expectancy (LE) and quality-adjusted life expectancy (QALE) of neoadjuvant therapy with conventional upfront surgical strategies in resectable pancreatic cancer patients. A Markov decision model was created to compare two strategies: attempted pancreatic resection followed by adjuvant chemoradiotherapy, and neoadjuvant chemoradiotherapy followed by restaging with, if appropriate, attempted pancreatic resection. Data obtained through a comprehensive systematic search in PubMed of the literature from 2000 to 2015 were used to estimate the probabilities used in the model. Deterministic and probabilistic sensitivity analyses were performed. Of the 786 potentially eligible studies identified, 22 studies met the inclusion criteria and were used to extract the probabilities used in the model. Base case analyses of the model showed a higher LE (32.2 vs. 26.7 months) and QALE (25.5 vs. 20.8 quality-adjusted life months) for patients in the neoadjuvant therapy arm compared to upfront surgery. Probabilistic sensitivity analyses for LE and QALE revealed that neoadjuvant therapy is favorable in 59% and 60% of cases, respectively. Although conceptual, these data suggest that neoadjuvant therapy offers substantial benefit in LE and QALE for resectable pancreatic cancer patients. These findings highlight the value of further prospective randomized trials comparing neoadjuvant therapy to conventional upfront surgical strategies.
Markov Decision Process Measurement Model.
LaMar, Michelle M
2018-03-01
Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
Markov Chain Estimation of Avian Seasonal Fecundity
To explore the consequences of modeling decisions on inference about avian seasonal fecundity we generalize previous Markov chain (MC) models of avian nest success to formulate two different MC models of avian seasonal fecundity that represent two different ways to model renestin...
MARKOV: A methodology for the solution of infinite time horizon MARKOV decision processes
Williams, B.K.
1988-01-01
Algorithms are described for determining optimal policies for finite state, finite action, infinite discrete time horizon Markov decision processes. Both value-improvement and policy-improvement techniques are used in the algorithms. Computing procedures are also described. The algorithms are appropriate for processes that are either finite or infinite, deterministic or stochastic, discounted or undiscounted, in any meaningful combination of these features. Computing procedures are described in terms of initial data processing, bound improvements, process reduction, and testing and solution. Application of the methodology is illustrated with an example involving natural resource management. Management implications of certain hypothesized relationships between mallard survival and harvest rates are addressed by applying the optimality procedures to mallard population models.
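The policy-improvement side of such a methodology can be illustrated compactly. The sketch below is a generic policy iteration for a discounted finite MDP with invented two-state dynamics; it is not the MARKOV package itself.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Policy iteration for a finite discounted MDP.
    P[a] is the S x S transition matrix and R[a] the reward vector under a."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        R_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

# Two-state, two-action toy problem (illustrative numbers only)
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),   # action 0: "do not harvest"
     np.array([[0.2, 0.8], [0.1, 0.9]])]   # action 1: "harvest"
R = [np.array([0.0, 1.0]), np.array([2.0, 0.5])]
print(policy_iteration(P, R))
```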
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm approach. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
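For the steady-state availability computation, one common two-stage approach solves the embedded Markov chain first and then weights its stationary probabilities by mean sojourn times. The sketch below assumes that formulation, with invented pump states and sojourn times; the paper's own model is more detailed.

```python
import numpy as np

def smp_availability(P, tau, up_states):
    """Steady-state availability of a semi-Markov process: embedded-chain
    stationary probabilities weighted by mean sojourn times per state."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]   # embedded-chain stationary
    occupancy = pi * tau                        # time-weighted occupancy
    return occupancy[list(up_states)].sum() / occupancy.sum()

# Hypothetical pump states: 0=healthy, 1=degraded (CBM triggers), 2=failed
P = np.array([[0.0, 0.9, 0.1],
              [0.7, 0.0, 0.3],
              [1.0, 0.0, 0.0]])
tau = np.array([400.0, 60.0, 24.0])             # mean hours in each state
print(f"availability: {smp_availability(P, tau, [0, 1]):.4f}")
```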
Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process
NASA Astrophysics Data System (ADS)
Migawa, Klaudiusz
2012-12-01
The issues presented in this research paper concern the control of the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, implemented as a semi-Markov decision process. The method consists of preparing a decision model of the exploitation process for the technical objects (a semi-Markov model) and then specifying the best control strategy (the optimal strategy) from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the system of exploitation. In the presented method, specifying the optimal strategy for availability control of the technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The approach is illustrated with the example of the exploitation process of means of transport implemented in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of results from that real transport system. The mathematical model was built taking into consideration the fact that the process constitutes a homogeneous semi-Markov process.
Modeling treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
1998-01-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.
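The core computational step in any such POMDP formulation is the Bayesian belief update after an action and observation. A minimal sketch with an invented two-state, single-action example; it is not the paper's IHD model.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Bayesian belief update for a POMDP:
    b'(s') is proportional to O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_new = O[a][:, o] * (b @ T[a])
    return b_new / b_new.sum()

# Toy two-state example: disease present/absent (all numbers illustrative)
T = {0: np.array([[0.95, 0.05],     # action 0 = "medical therapy"
                  [0.30, 0.70]])}
O = {0: np.array([[0.80, 0.20],     # rows: next state; cols: test result
                  [0.25, 0.75]])}
b = np.array([0.5, 0.5])
b = belief_update(b, a=0, o=1, T=T, O=O)   # observe a positive test
print(np.round(b, 3))
```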
Kharfan-Dabaja, M A; Pidala, J; Kumar, A; Terasawa, T; Djulbegovic, B
2012-09-01
Despite therapeutic advances, relapsed/refractory CLL, particularly after fludarabine-based regimens, remains a major challenge for which optimal therapy is undefined. No randomized comparative data exist to suggest the superiority of reduced-toxicity allogeneic hematopoietic cell transplantation (RT-allo-HCT) over conventional chemo-(immuno) therapy (CCIT). By using estimates from a systematic review and by meta-analysis of available published evidence, we constructed a Markov decision model to examine these competing modalities. Cohort analysis demonstrated superior outcome for RT-allo-HCT, with a 10-month overall life expectancy (and 6-month quality-adjusted life expectancy (QALE)) advantage over CCIT. Although the model was sensitive to changes in base-case assumptions and transition probabilities, RT-allo-HCT provided superior overall life expectancy through a range of values supported by the meta-analysis. QALE was superior for RT-allo-HCT compared with CCIT. This conclusion was sensitive to change in the anticipated state utility associated with the post-allogeneic HCT state; however, RT-allo-HCT remained the optimal strategy for values supported by existing literature. This analysis provides a quantitative comparison of outcomes between RT-allo-HCT and CCIT for relapsed/refractory CLL in the absence of randomized comparative trials. Confirmation of these findings requires a prospective randomized trial, which compares the most effective RT-allo-HCT and CCIT regimens for relapsed/refractory CLL.
Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln
2014-01-01
Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in unresectable complex (Bismuth type II-IV) HCA patients. A decision analytic model, a Markov model, was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared to plastic stents in unresectable complex HCA.
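The headline numbers in such an analysis follow mechanically from the simulated increments. The sketch below recomputes an ICER and cost-effectiveness acceptability probabilities from probabilistic-sensitivity-analysis draws; the draws are randomly generated placeholders centered loosely on the paper's reported base case, not the study's actual PSA output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA draws of incremental cost (baht) and incremental QALYs
d_cost = rng.normal(40_000, 15_000, 10_000)
d_qaly = rng.normal(0.21, 0.10, 10_000)

icer = d_cost.mean() / d_qaly.mean()
print(f"ICER: {icer:,.0f} baht per QALY")

# Acceptability: P(net monetary benefit > 0) at each willingness-to-pay level
for wtp in (158_000, 474_000):      # 1x and 3x GDP per capita thresholds
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)
    print(f"P(cost-effective) at WTP {wtp:,}: {p_ce:.1%}")
```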
Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes
NASA Astrophysics Data System (ADS)
Zhai, Y.; Liu, J.; Liu, L.
2018-04-01
Aiming at the lack of scientific and reasonable judgment of vehicle delivery scale and the insufficient optimization of scheduling decisions, and based on features of bike-sharing usage, this paper analyses the applicability of a discrete-time, discrete-state Markov chain and proves its properties to be irreducible, aperiodic, and positive recurrent. Based on this analysis, the paper concludes that the limit-state (steady-state) probability of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyses the difficulty of estimating the transition probability matrix parameters and of solving the linear equation group in the traditional solution algorithm for the bike-sharing Markov chain. To improve feasibility, this paper proposes a "virtual two-node vehicle scale solution" algorithm, which treats all nodes other than the node to be solved as a single virtual node, and derives the transition probability matrix, the steady-state linear equation group, and the computational methods for the steady-state scale, steady-state arrival time, and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probability of the proposed algorithm by comparing it with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible, because it lowers the computational difficulty and reduces the number of statistics required, which will help bike-sharing companies optimize the scale and scheduling of nodes.
Hybrid Discrete-Continuous Markov Decision Processes
NASA Technical Reports Server (NTRS)
Feng, Zhengzhu; Dearden, Richard; Meuleau, Nicholas; Washington, Rich
2003-01-01
This paper proposes a Markov decision process (MDP) model that features both discrete and continuous state variables. We extend previous work by Boyan and Littman on the mono-dimensional time-dependent MDP to multiple dimensions. We present the principle of lazy discretization, and piecewise constant and linear approximations of the model. Having to deal with several continuous dimensions raises several new problems that require new solutions. In the (piecewise) linear case, we use techniques from partially observable MDPs (POMDPs) to represent value functions as sets of linear functions attached to different partitions of the state space.
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
Planning treatment of ischemic heart disease with partially observable Markov decision processes.
Hauskrecht, M; Fraser, H
2000-03-01
Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A
2018-06-01
This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
Policy Transfer via Markov Logic Networks
NASA Astrophysics Data System (ADS)
Torrey, Lisa; Shavlik, Jude
We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
Use of Inverse Reinforcement Learning for Identity Prediction
NASA Technical Reports Server (NTRS)
Hayes, Roy; Bao, Jonathan; Beling, Peter; Horowitz, Barry
2011-01-01
We adopt Markov Decision Processes (MDP) to model sequential decision problems, which have the characteristic that the current decision made by a human decision maker has an uncertain impact on future opportunity. We hypothesize that the individuality of decision makers can be modeled as differences in the reward function under a common MDP model. A machine learning technique, Inverse Reinforcement Learning (IRL), was used to learn an individual's reward function based on limited observation of his or her decision choices. This work serves as an initial investigation for using IRL to analyze decision making, conducted through a human experiment in a cyber shopping environment. Specifically, the ability to determine the demographic identity of users is conducted through prediction analysis and supervised learning. The results show that IRL can be used to correctly identify participants, at a rate of 68% for gender and 66% for one of three college major categories.
Multiscale modelling and analysis of collective decision making in swarm robotics.
Vigelius, Matthias; Meyer, Bernd; Pascoe, Geoffrey
2014-01-01
We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties. Our approach encompasses robot swarm experiments, microscopic and probabilistic macroscopic-discrete simulations as well as an analytic mathematical model. Following up on previous work, we identify the symmetry parameter, a measure of the progress of the swarm towards a decision, as a fundamental integrated swarm property and formulate its time evolution as a continuous-time Markov process. Contrary to previous work, which justified this approach only empirically and a posteriori, we justify it from first principles and derive hard limits on the parameter regime in which it is applicable.
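The time evolution of such an integrated swarm property can be simulated directly as a continuous-time Markov process. The sketch below runs a generic Gillespie simulation of a birth-death chain whose rates push toward consensus; the rates are hypothetical stand-ins, not the paper's fitted model.

```python
import random

def gillespie(birth, death, n_max, x0, t_max):
    """Stochastic simulation (Gillespie) of a birth-death CTMC on 0..n_max,
    a generic stand-in for the swarm's symmetry-parameter dynamics."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        up = birth(x) if x < n_max else 0.0
        down = death(x) if x > 0 else 0.0
        total = up + down
        if total == 0:
            break
        t += random.expovariate(total)              # exponential waiting time
        x += 1 if random.random() < up / total else -1
        path.append((t, x))
    return path

# Hypothetical rates: positive feedback drives the swarm toward consensus
N = 20
birth = lambda x: 0.3 * x + 0.05        # recruitment grows with support for A
death = lambda x: 0.3 * (N - x) + 0.05  # and symmetrically for option B
random.seed(3)
print(gillespie(birth, death, N, x0=N // 2, t_max=5.0)[-1])
```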
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. Our results are as follows. We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally, by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.
Scholz, Stefan; Mittendorf, Thomas
2014-12-01
Rheumatoid arthritis (RA) is a chronic, inflammatory disease with severe effects on the functional ability of patients. Given a prevalence of 0.5 to 1.0 percent in western countries, new treatment options are a major concern for decision makers with regard to their budget impact. In this context, cost-effectiveness analyses are a helpful tool for evaluating new treatment options for reimbursement schemes. The objective was to analyze and compare decision-analytic modeling techniques and to explore their use in RA with regard to their advantages and shortcomings. A systematic literature review was conducted in PubMed, and 58 studies reporting health economic decision models were analyzed with regard to the modeling technique used. Of the 58 reviewed publications, 13 reported decision tree analysis, 25 (cohort) Markov models, 13 individual sampling methods (ISM), and seven discrete event simulations (DES); 26 studies were identified as presenting independently developed models and 32 as adaptations. The modeling techniques used were found to differ in their complexity and in the number of treatment options compared. Methodological features are presented in the article, and a comprehensive overview of the cost-effectiveness estimates is given in Additional files 1 and 2. Compared with the other modeling techniques, ISM and DES have advantages in covering patient heterogeneity, and DES can additionally model more complex treatment sequences and competing risks in RA patients. Nevertheless, sufficient data must be available to avoid assumptions in ISM and DES exercises that could bias the results. Due to the different settings, time frames, and interventions in the reviewed publications, no direct comparison of modeling techniques was possible. Results from other indications suggest that incremental cost-effectiveness ratios (ICERs) do not differ significantly between Markov and DES models, but DES is able to report more outcome parameters. Given a sufficient data supply, DES is the modeling technique of choice when modeling cost-effectiveness in RA. Otherwise, transparency about the data inputs is crucial for valid results and to inform decision makers about possible biases. With regard to ICERs, Markov models may provide estimates similar to those of more advanced modeling techniques.
A Holistic Approach to Networked Information Systems Design and Analysis
2016-04-15
[Report fragment; only partially recoverable] … attain quite substantial savings. 11. Optimal algorithms for energy harvesting in wireless networks: a Markov decision process (MDP) based approach is used to obtain optimal policies for transmissions; its key advantage is that it holistically considers information and energy. … A coding technique to minimize delays and the number of transmissions in wireless systems. As we approach an era of ubiquitous computing with information …
Jones, Edmund; Masconi, Katya L.; Sweeting, Michael J.; Thompson, Simon G.; Powell, Janet T.
2018-01-01
Markov models are often used to evaluate the cost-effectiveness of new healthcare interventions but they are sometimes not flexible enough to allow accurate modeling or investigation of alternative scenarios and policies. A Markov model previously demonstrated that a one-off invitation to screening for abdominal aortic aneurysm (AAA) for men aged 65 y in the UK and subsequent follow-up of identified AAAs was likely to be highly cost-effective at thresholds commonly adopted in the UK (£20,000 to £30,000 per quality adjusted life-year). However, new evidence has emerged and the decision problem has evolved to include exploration of the circumstances under which AAA screening may be cost-effective, which the Markov model is not easily able to address. A new model to handle this more complex decision problem was needed, and the case of AAA screening thus provides an illustration of the relative merits of Markov models and discrete event simulation (DES) models. An individual-level DES model was built using the R programming language to reflect possible events and pathways of individuals invited to screening v. those not invited. The model was validated against key events and cost-effectiveness, as observed in a large, randomized trial. Different screening protocol scenarios were investigated to demonstrate the flexibility of the DES. The case of AAA screening highlights the benefits of DES, particularly in the context of screening studies.
Transition-Independent Decentralized Markov Decision Processes
NASA Technical Reports Server (NTRS)
Becker, Raphen; Zilberstein, Shlomo; Lesser, Victor; Goldman, Claudia V.; Morris, Robert (Technical Monitor)
2003-01-01
There has been substantial progress with formal models for sequential decision making by individual agents using the Markov decision process (MDP). However, similar treatment of multi-agent systems is lacking. A recent complexity result, showing that solving decentralized MDPs is NEXP-hard, provides a partial explanation. To overcome this complexity barrier, we identify a general class of transition-independent decentralized MDPs that is widely applicable. The class consists of independent collaborating agents that are tied together by a global reward function that depends on both of their histories. We present a novel algorithm for solving this class of problems and examine its properties. The result is the first effective technique to solve optimally a class of decentralized MDPs. This lays the foundation for further work in this area on both exact and approximate solutions.
ERIC Educational Resources Information Center
Wollmer, Richard D.; Bond, Nicholas A.
Two computer-assisted instruction programs were written in electronics and trigonometry to test the Wollmer Markov Model for optimizing hierarchical learning; calibration samples totalling 110 students completed these programs. Since the model postulated that transfer effects would be a function of the amount of practice, half of the students were…
[Parameter of evidence-based medicine in health care economics].
Wasem, J; Siebert, U
1999-08-01
In view of the scarcity of resources, economic evaluations in health care, in which not only the effects but also the costs of a medical intervention are examined and an incremental cost-outcome ratio is built, are an important supplement to the program of evidence-based medicine. Outcomes of a medical intervention can be measured by clinical effectiveness, quality-adjusted life years, or the monetary valuation of benefits. As far as costs are concerned, direct medical costs, direct non-medical costs, and indirect costs have to be considered in an economic evaluation. Data can be taken from primary studies or secondary analyses; meta-analysis may be appropriate for synthesizing data. For the calculation of incremental cost-benefit ratios, decision-analytic models (decision tree models, Markov models) are often necessary. Methodological and ethical limits to applying the results of economic evaluations in resource allocation decisions in health care have to be regarded: economic evaluations and the calculation of cost-outcome ratios should only support decision making, not replace it.
Volk, Michael L; Lok, Anna S F; Ubel, Peter A; Vijan, Sandeep
2008-01-01
The utilitarian foundation of decision analysis limits its usefulness for many social policy decisions. In this study, the authors examine a method to incorporate competing ethical principles in a decision analysis of liver transplantation for a patient with acute liver failure (ALF). A Markov model was constructed to compare the benefit of transplantation for a patient with ALF versus the harm caused to other patients on the waiting list and to determine the lowest acceptable 5-y posttransplant survival for the ALF patient. The weighting of the ALF patient and other patients was then adjusted using a multiattribute variable incorporating utilitarianism, urgency, and other principles such as fair chances. In the base-case analysis, the strategy of transplanting the ALF patient resulted in a 0.8% increase in the risk of death and a utility loss of 7.8 quality-adjusted days of life for each of the other patients on the waiting list. These harms cumulatively outweighed the benefit of transplantation for an ALF patient having a posttransplant survival of less than 48% at 5 y. However, the threshold for an acceptable posttransplant survival for the ALF patient ranged from 25% to 56% at 5 y, depending on the ethical principles involved. The results of the decision analysis vary depending on the ethical perspective. This study demonstrates how competing ethical principles can be numerically incorporated in a decision analysis.
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) Soft decision Viterbi decoding; (2) node synchronization for the Soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulator, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI sources with several duty cycles on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers, except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
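For the Markov Chain channel modeling mentioned in item (9), a classic concrete choice is the two-state Gilbert-Elliott burst-error model; the sketch below implements it with invented parameters, without claiming it matches CLEAN's actual channel models.

```python
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.2, e_good=1e-5, e_bad=0.1):
    """Two-state Markov (Gilbert-Elliott) burst-error channel: a common way
    to give a simulated channel memory. All parameters are hypothetical."""
    state, errors = "good", []
    for _ in range(n_bits):
        p_err = e_good if state == "good" else e_bad
        errors.append(random.random() < p_err)       # bit error this slot?
        if state == "good" and random.random() < p_gb:
            state = "bad"                             # enter a burst
        elif state == "bad" and random.random() < p_bg:
            state = "good"                            # burst ends
    return errors

random.seed(1)
errs = gilbert_elliott(100_000)
print(f"simulated bit error rate: {sum(errs) / len(errs):.2e}")
```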
Abe, James; Lobo, Jennifer M; Trifiletti, Daniel M; Showalter, Timothy N
2017-08-24
Despite the emergence of genomics-based risk prediction tools in oncology, there is not yet an established framework for communicating test results to cancer patients to support shared decision-making. We report findings from a stakeholder engagement program that aimed to develop a framework for using Markov models with individualized model inputs, including genomics-based estimates of cancer recurrence probability, to generate personalized decision aids for prostate cancer patients faced with radiation therapy treatment decisions after prostatectomy. We engaged a total of 22 stakeholders, including prostate cancer patients, urological surgeons, radiation oncologists, genomic testing industry representatives, and biomedical informatics faculty. Slides were presented at each meeting to provide background information regarding the analytical framework. Participants were invited to provide feedback during the meeting, including revising the overall project aims. Stakeholder meeting content was reviewed and summarized by stakeholder group and by theme. The majority of stakeholder suggestions focused on aspects of decision aid design and formatting. Stakeholders were enthusiastic about the potential value of using decision analysis modeling with personalized model inputs for cancer recurrence risk, as well as competing risks from age and comorbidities, to generate a patient-centered tool to assist decision-making. Stakeholders did not view privacy considerations as a major barrier to the proposed decision aid program. A common theme was that decision aids should be portable across multiple platforms (electronic and paper), should allow the user to adjust model inputs iteratively, and should be available to patients both before and during consult appointments. Emphasis was placed on the challenge of explaining the model's composite result of quality-adjusted life years. A range of stakeholders provided valuable insights regarding the design of a personalized decision aid program, based upon Markov modeling with individualized model inputs, to provide a patient-centered framework that supports genomics-based treatment decisions for cancer patients. The guidance provided by our stakeholders may be broadly applicable to the communication of genomic test results to patients in a patient-centered fashion that supports effective shared decision-making and represents a spectrum of personal factors such as age, medical comorbidities, and individual priorities and values.
Cao, Qi; Buskens, Erik; Feenstra, Talitha; Jaarsma, Tiny; Hillege, Hans; Postmus, Douwe
2016-01-01
Continuous-time state transition models may end up having large, unwieldy structures when trying to represent all relevant stages of clinical disease processes by means of a standard Markov model. In such situations, a more parsimonious, and therefore easier-to-grasp, model of a patient's disease progression can often be obtained by assuming that future state transitions depend not only on the present state (the Markov assumption) but also on the past, through the time since entry into the present state. Although these so-called semi-Markov models are still relatively straightforward to specify and implement, they are not yet routinely applied in health economic evaluation to assess the cost-effectiveness of alternative interventions. To facilitate a better understanding of this type of model among applied health economic analysts, the first part of this article provides a detailed discussion of what the semi-Markov model entails and how such models can be specified in an intuitive way by adopting an approach called vertical modeling. In the second part of the article, we use this approach to construct a semi-Markov model for assessing the long-term cost-effectiveness of 3 disease management programs for heart failure. Compared with a standard Markov model with the same disease states, our proposed semi-Markov model fitted the observed data much better. When subsequently extrapolating beyond the clinical trial period, these relatively large differences in goodness-of-fit translated into almost a doubling in mean total cost and a 60-d decrease in mean survival time when using the Markov model instead of the semi-Markov model. For the disease process considered in our case study, the semi-Markov model thus provided a sensible balance between model parsimony and computational complexity. © The Author(s) 2015.
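To make the semi-Markov distinction concrete, here is a minimal Python sketch, unrelated to the authors' heart-failure model, in which the monthly exit probability depends on the sojourn time in the current state; all states and hazard numbers are invented.

```python
import random

# Toy semi-Markov cohort microsimulation: the monthly probability of leaving
# the current state depends on the sojourn time (months since entry), which a
# standard Markov model cannot represent. All states and numbers are invented.
def monthly_exit_prob(state, months_in_state):
    base = {"stable": 0.02, "worsening": 0.10}[state]
    return min(1.0, base * (1 + 0.1 * months_in_state))  # hazard rises with sojourn

def simulate_patient(rng, horizon=120):
    state, months_in_state, alive_months = "stable", 0, 0
    for _ in range(horizon):
        if rng.random() < monthly_exit_prob(state, months_in_state):
            if state == "stable":
                state, months_in_state = "worsening", 0  # progress, reset the clock
            else:
                break                                    # absorbing exit (death)
        else:
            months_in_state += 1
        alive_months += 1
    return alive_months

rng = random.Random(1)
mean_months = sum(simulate_patient(rng) for _ in range(10_000)) / 10_000
print(f"mean survival in the toy cohort: {mean_months:.1f} months")
```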
Smith, Wade P; Doctor, Jason; Meyer, Jürgen; Kalet, Ira J; Phillips, Mark H
2009-06-01
The prognosis of cancer patients treated with intensity-modulated radiation-therapy (IMRT) is inherently uncertain, depends on many decision variables, and requires that a physician balance competing objectives: maximum tumor control with minimal treatment complications. In order to better deal with the complex and multiple objective nature of the problem we have combined a prognostic probabilistic model with multi-attribute decision theory which incorporates patient preferences for outcomes. The response to IMRT for prostate cancer was modeled. A Bayesian network was used for prognosis for each treatment plan. Prognoses included predicting local tumor control, regional spread, distant metastases, and normal tissue complications resulting from treatment. A Markov model was constructed and used to calculate a quality-adjusted life-expectancy which aids in the multi-attribute decision process. Our method makes explicit the tradeoffs patients face between quality and quantity of life. This approach has advantages over current approaches because with our approach risks of health outcomes and patient preferences determine treatment decisions.
Markov logic network based complex event detection under uncertainty
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Jia, Bin; Chen, Genshe; Chen, Hua-mei; Sullivan, Nichole; Pham, Khanh; Blasch, Erik
2018-05-01
In a cognitive reasoning system, the four-stage Observe-Orient-Decide-Act (OODA) reasoning loop is of interest. The OODA loop is essential for situational awareness, especially in heterogeneous data fusion. Cognitive reasoning for making decisions can take advantage of different formats of information, such as symbolic observations, various real-world sensor readings, or the relationships between intelligent modalities. A Markov Logic Network (MLN) provides a mathematically sound technique for representing and fusing data at multiple levels of abstraction, and across multiple intelligent sensors, to conduct complex decision-making tasks. In this paper, a scenario about vehicle interaction is investigated, in which uncertainty is taken into consideration, as no systematic approach can perfectly characterize the complex event scenario. MLNs are applied to the terrestrial domain, where the dynamic features and relationships among vehicles are captured through multiple sensors and information sources, accounting for data uncertainty.
Monitoring as a partially observable decision problem
Paul L. Fackler; Robert G. Haight
2014-01-01
Monitoring is an important and costly activity in resource management problems such as containing invasive species, protecting endangered species, preventing soil erosion, and regulating contracts for environmental services. Recent studies have viewed optimal monitoring as a Partially Observable Markov Decision Process (POMDP), which provides a framework for...
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
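The interval-based output described above can be illustrated with a small net-monetary-benefit calculation. This sketch does not use OpenMarkov or influence diagrams; the interventions, costs, and effectiveness values are invented for illustration.

```python
# For each willingness-to-pay (WTP) value, the optimal intervention maximizes
# net monetary benefit NMB = WTP * effectiveness - cost; scanning WTP recovers
# the intervals and thresholds described above. All inputs are invented.
interventions = {
    "no treatment": (0.0, 1.00),       # (cost, effectiveness in QALYs)
    "drug A":       (5_000.0, 1.20),
    "drug B":       (20_000.0, 1.45),
}

def optimal(wtp):
    return max(interventions,
               key=lambda k: wtp * interventions[k][1] - interventions[k][0])

current, start = optimal(0), 0
for wtp in range(0, 100_001, 100):                 # scan WTP in steps of 100
    if (best := optimal(wtp)) != current:
        print(f"WTP {start:6d}-{wtp:6d}: {current}")
        current, start = best, wtp
print(f"WTP {start:6d}+      : {current}")
```

With these numbers the scan reports "no treatment" up to a WTP of about 25,000, "drug A" up to about 60,000, and "drug B" beyond that, which is exactly the set of willingness-to-pay intervals and thresholds the evaluation algorithm returns.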
VAMPnets for deep learning of molecular kinetics.
Mardt, Andreas; Pasquali, Luca; Wu, Hao; Noé, Frank
2018-01-02
There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
ERIC Educational Resources Information Center
Yoda, Koji
1973-01-01
Develops models to systematically forecast the tendency of an educational administrator in charge of personnel selection processes to shift from one decision strategy to another under generally stable environmental conditions. Urges further research on these processes by educational planners. (JF)
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
The application of Markov decision process in restaurant delivery robot
NASA Astrophysics Data System (ADS)
Wang, Yong; Hu, Zhen; Wang, Ying
2017-05-01
The restaurant delivery robot often operates in a dynamic and complex environment, with chairs inadvertently moved into its path and customers coming and going, so traditional path planning algorithms perform poorly. To solve this problem, this paper proposes the Markov dynamic state immediate reward (MDR) path planning algorithm, based on the traditional Markov decision process. First, MDR plans a global path and the robot navigates along it. When the sensor detects no obstruction in the state ahead, the algorithm increases that state's immediate reward value; when the sensor detects an obstacle ahead, it plans a new global path that avoids the obstacle, taking the current position as the new starting point, and reduces that state's immediate reward value. This continues until the target is reached. After the robot has learned for a period of time, it avoids places where obstacles are frequently present when planning a path. Simulation experiments show that the algorithm achieves good results for global path planning in dynamic environments.
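The paper describes MDR only at a high level; the sketch below is one plausible reading of the idea: plan a grid path by value iteration, then replan after reducing the immediate reward of a cell where an obstacle is sensed. Grid size, rewards, and the discount factor are assumptions, not values from the paper.

```python
import numpy as np

# One plausible reading of the MDR idea: plan on a grid with value iteration,
# then lower a cell's immediate reward when an obstacle is sensed there and
# replan from the current position. All numbers are invented.
N, GAMMA = 8, 0.9
reward = np.full((N, N), -1.0)
reward[N - 1, N - 1] = 100.0                       # goal cell
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def plan(reward):
    """Deterministic value iteration; returns the value function."""
    V = np.zeros((N, N))
    for _ in range(200):
        newV = np.copy(V)
        for i in range(N):
            for j in range(N):
                nbrs = [V[i + di, j + dj] for di, dj in MOVES
                        if 0 <= i + di < N and 0 <= j + dj < N]
                newV[i, j] = reward[i, j] + GAMMA * max(nbrs)
        V = newV
    return V

def greedy_step(V, pos):
    i, j = pos
    nbrs = [(i + di, j + dj) for di, dj in MOVES
            if 0 <= i + di < N and 0 <= j + dj < N]
    return max(nbrs, key=lambda p: V[p])

pos = (0, 0)
reward[3, 3] -= 50.0        # a chair is sensed at (3, 3): cut its reward
V = plan(reward)            # replan with the penalized reward map
while pos != (N - 1, N - 1):
    pos = greedy_step(V, pos)
print("reached goal at", pos)
```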
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
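As a rough illustration of the multivariate idea, the sketch below solves a tiny two-state MDP by value iteration and estimates how often the base-case optimal action survives joint parameter sampling. The MDP and the parameter distributions are invented stand-ins, not the authors' case study or their exact method.

```python
import numpy as np

# Tiny illustration of the multivariate approach: solve a 2-state, 2-action
# MDP by value iteration, then sample parameters jointly and record how often
# the base-case optimal action stays optimal. All numbers are invented.
GAMMA = 0.95

def optimal_action(p_success, reward_treat):
    # States: 0 = sick, 1 = healthy (absorbing). Actions: 0 = wait, 1 = treat.
    P = np.array([[[0.9, 0.1], [1 - p_success, p_success]],
                  [[0.0, 1.0], [0.0, 1.0]]])         # P[state, action, next]
    R = np.array([[0.0, reward_treat], [1.0, 1.0]])  # R[state, action]
    V = np.zeros(2)
    for _ in range(500):
        Q = R + GAMMA * P @ V
        V = Q.max(axis=1)
    return int(Q[0].argmax())                        # best action when sick

rng = np.random.default_rng(0)
base = optimal_action(0.6, -0.2)                     # base-case parameters
same = [optimal_action(rng.beta(6, 4), rng.normal(-0.2, 0.15)) == base
        for _ in range(1_000)]
print(f"base-case action remains optimal in {np.mean(same):.0%} of samples")
```

The printed proportion is the kind of single-number confidence summary that, swept over stakeholders' willingness to accept the base-case policy, would trace out a policy acceptability curve.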
Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.
Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U
2015-05-01
The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
ERIC Educational Resources Information Center
Kapland, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.…
Nonlinear Markov Control Processes and Games
2012-11-15
The analysis of a new class of stochastic games, nonlinear Markov games, as they arise as a (competitive) controlled version of nonlinear Markov... competitive interests) a nonlinear Markov game that we are investigating... nonlinear Markov game, nonlinear Markov... corresponding stochastic game Γ+(T, h). In a slightly different setting one can assume that changes in a competitive control process occur as a
Intelligent data analysis to model and understand live cell time-lapse sequences.
Paterson, Allan; Ashtari, M; Ribé, D; Stenbeck, G; Tucker, A
2012-01-01
One important aspect of cellular function, which is at the basis of tissue homeostasis, is the delivery of proteins to their correct destinations. Significant advances in live cell microscopy have allowed tracking of these pathways by following the dynamics of fluorescently labelled proteins in living cells. This paper explores intelligent data analysis techniques to model the dynamic behaviour of proteins in living cells and to classify different experimental conditions. We use a combination of decision tree classification and hidden Markov models. In particular, we introduce a novel approach to "align" hidden Markov models so that hidden states from different models can be cross-compared. Our models capture the dynamics of two experimental conditions accurately, with a stable hidden state for control data and multiple (less stable) states for the experimental data, recapitulating the behaviour of particle trajectories within live cell time-lapse data. In addition to having successfully developed an automated framework for the classification of protein transport dynamics from live cell time-lapse data, our model allows us to understand the dynamics of a complex trafficking pathway in living cells in culture.
Enrollment Planning Using Computer Decision Model: A Case Study at Grambling State University.
ERIC Educational Resources Information Center
Ghosh, Kalyan; Lundy, Harold W.
Achieving enrollment goals continues to be a major administrative concern in higher education. Enrollment management can be assisted through the use of computerized planning and forecast models. Although commercially available Markov transition type curve fitting models have been developed and used, a microcomputer-based decision planning model…
A baker's dozen of new particle flows for nonlinear filters, Bayesian decisions and transport
NASA Astrophysics Data System (ADS)
Daum, Fred; Huang, Jim
2015-05-01
We describe a baker's dozen of new particle flows to compute Bayes' rule for nonlinear filters, Bayesian decisions and learning as well as transport. Several of these new flows were inspired by transport theory, but others were inspired by physics or statistics or Markov chain Monte Carlo methods.
Treatment strategies for pelvic organ prolapse: a cost-effectiveness analysis.
Hullfish, Kathie L; Trowbridge, Elisa R; Stukenborg, George J
2011-05-01
To compare the relative cost effectiveness of treatment decision alternatives for post-hysterectomy pelvic organ prolapse (POP). A Markov decision analysis model was used to assess and compare the relative cost effectiveness of expectant management, use of a pessary, and surgery for obtaining months of quality-adjusted life over 1 year. Sensitivity analysis was conducted to determine whether the results depended on specific estimates of patient utilities for pessary use, probabilities for complications and other events, and estimated costs. Only two treatment alternatives were found to be efficient choices: initial pessary use and vaginal reconstructive surgery (VRS). Pessary use (including patients that eventually transitioned to surgery) achieved 10.4 quality-adjusted months, at a cost of $10,000 per patient, while VRS obtained 11.4 quality-adjusted months, at $15,000 per patient. Sensitivity analysis demonstrated that these baseline results depended on several key estimates in the model. This analysis indicates that pessary use and VRS are the most cost-effective treatment alternatives for treating post-hysterectomy vaginal prolapse. Additional research is needed to standardize POP outcomes and complications, so that healthcare providers can best utilize cost information in balancing the risks and benefits of their treatment decisions.
Markov chains and semi-Markov models in time-to-event analysis.
Abner, Erin L; Charnigo, Richard J; Kryscio, Richard J
2013-10-25
A variety of statistical methods are available to investigators for analysis of time-to-event data, often referred to as survival analysis. Kaplan-Meier estimation and Cox proportional hazards regression are commonly employed tools but are not appropriate for all studies, particularly in the presence of competing risks and when multiple or recurrent outcomes are of interest. Markov chain models can accommodate censored data, competing risks (informative censoring), multiple outcomes, recurrent outcomes, frailty, and non-constant survival probabilities. Markov chain models, though often overlooked by investigators in time-to-event analysis, have long been used in clinical studies and have widespread application in other fields.
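A minimal example of the kind of multi-state calculation described above: a discrete-time chain with one event of interest and one competing risk, iterated to obtain state occupancy (and hence cumulative incidence) per cycle. The transition probabilities are invented.

```python
import numpy as np

# Discrete-time Markov chain with an event of interest and a competing risk:
# states are 0 = event-free, 1 = event, 2 = competing death (both absorbing).
# The per-cycle transition probabilities are invented.
P = np.array([
    [0.90, 0.06, 0.04],
    [0.00, 1.00, 0.00],
    [0.00, 0.00, 1.00],
])

occupancy = np.array([1.0, 0.0, 0.0])   # the whole cohort starts event-free
for cycle in range(1, 11):
    occupancy = occupancy @ P           # advance the chain one cycle
    print(f"cycle {cycle:2d}: event-free {occupancy[0]:.3f}, "
          f"event {occupancy[1]:.3f}, competing death {occupancy[2]:.3f}")
# occupancy[1] is the cumulative incidence of the event with the competing
# risk handled directly, unlike a naive Kaplan-Meier estimate.
```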
Accelerated decomposition techniques for large discounted Markov decision processes
NASA Astrophysics Data System (ADS)
Larach, Abdelhadi; Chafik, S.; Daoui, C.
2017-12-01
Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs) that can be classified into levels. In each level, smaller problems named restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorithm, a variant of Tarjan's algorithm, that simultaneously finds the SCCs and the levels to which they belong. Second, a new definition of the restricted MDPs is presented to improve some hierarchical solutions in discounted MDPs using the value iteration (VI) algorithm based on a list of state-action successors. Finally, a robotic motion-planning example and the experimental results are presented to illustrate the benefit of the proposed decomposition algorithms.
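The paper's simultaneous SCC-and-level variant of Tarjan's algorithm is not reproduced here; the sketch below shows the underlying decomposition idea using networkx, assigning each component a level so that the components a state can transition into are solved first. The edge list is an illustrative reachability structure, not the paper's example.

```python
import networkx as nx

# Decompose an MDP's state-transition graph into strongly connected
# components and assign levels so that the components a state can transition
# into are solved first (each one a "restricted MDP").
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2), (3, 4), (4, 4)]
G = nx.DiGraph(edges)

dag = nx.condensation(G)                    # SCCs collapsed into a DAG
level = {}
for comp in reversed(list(nx.topological_sort(dag))):
    succs = list(dag.successors(comp))      # components this one feeds into
    level[comp] = 0 if not succs else 1 + max(level[s] for s in succs)

for comp in sorted(dag.nodes, key=level.get):
    states = sorted(dag.nodes[comp]["members"])
    print(f"level {level[comp]}: solve restricted MDP over states {states}")
```

Solving in increasing level order works for discounted MDPs because a state's value depends only on the values of states it can reach, so downstream components can be solved once and treated as fixed boundary values upstream.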
Optimal Limited Contingency Planning
NASA Technical Reports Server (NTRS)
Meuleau, Nicolas; Smith, David E.
2003-01-01
For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.
NASA Astrophysics Data System (ADS)
Attaluri, Pavan K.; Chen, Zhengxin; Weerakoon, Aruna M.; Lu, Guoqing
Multiple criteria decision making (MCDM) has a significant impact in bioinformatics. In the research reported here, we explore the integration of decision tree (DT) and Hidden Markov Model (HMM) techniques for subtype prediction of human influenza A virus. Infection with influenza viruses continues to be an important public health problem. Viral strains of subtypes H3N2 and H1N1 circulate in humans at least twice annually. Subtype detection depends mainly on antigenic assays, which are time-consuming and not fully accurate. We have developed a Web system for accurate subtype detection of human influenza virus sequences. Preliminary experiments showed that this system is easy to use and powerful in identifying human influenza subtypes. Our next step is to examine the informative positions at the protein level and extend the current functionality to detect more subtypes. The web functions can be accessed at http://glee.ist.unomaha.edu/.
Cai, Y L; Zhang, S X; Yang, P C; Lin, Y
2016-06-01
To understand, through cost-benefit analysis (CBA), cost-effectiveness analysis (CEA) and quantitative optimization analysis, the economic benefits and outcomes of strategies for preventing mother-to-child transmission (PMTCT) of hepatitis B virus. Based on a decision-analytic Markov model for hepatitis B immunization, the PMTCT and universal vaccination strategies were compared. Parameters for Shenzhen were introduced into the model, and a 2013 birth cohort was set up as the study population. The net present value (NPV), benefit-cost ratio (BCR) and incremental cost-effectiveness ratio (ICER) were calculated, and the differences between CBA and CEA were compared. A decision tree was built as the decision analysis model for hepatitis B immunization, and three kinds of Markov models were used to simulate the outcomes after implementation of the vaccination programs. The PMTCT strategy in Shenzhen showed a net gain of 38,097.51 Yuan per person in 2013, with a BCR of 14.37; the universal vaccination strategy showed a net gain of 37,083.03 Yuan per person, with a BCR of 12.07. These data showed that the PMTCT strategy was better than universal vaccination and would gain more economic benefit. Compared with the universal vaccination program, the PMTCT strategy would save an additional 85,100.00 Yuan in QALY gains for every person, so the PMTCT strategy appeared more cost-effective than the universal vaccination program. In both the CBA and the CEA of the hepatitis B immunization programs, the immunization coverage rate and the costs of hepatitis B-related diseases were the most important influencing factors. Joint changes of all the parameters in the CEA confirmed that the PMTCT strategy was more cost-effective. The PMTCT strategy gained more economic benefit and health effects; however, its cost was higher than that of the universal vaccination program, so attention must be paid to the implementation of both the PMTCT strategy and the universal vaccination program. CBA seemed suitable for strategy optimization, while CEA was better for strategy evaluation; combining the two methods should facilitate the process of economic evaluation.
Dorazio, R.M.; Johnson, F.A.
2003-01-01
Bayesian inference and decision theory may be used in the solution of relatively complex problems of natural resource management, owing to recent advances in statistical theory and computing. In particular, Markov chain Monte Carlo algorithms provide a computational framework for fitting models of adequate complexity and for evaluating the expected consequences of alternative management actions. We illustrate these features using an example based on management of waterfowl habitat.
NASA Astrophysics Data System (ADS)
Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.
2014-07-01
In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
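As a rough illustration of the role of the maximum eigenvalue in such extinction conditions, the sketch below evaluates a generic spectral threshold (convincement probability times largest eigenvalue of the connectivity matrix below 1) on a scale-free graph. The paper's actual sufficient conditions involve both competing opinions, so this single-opinion form is a simplifying assumption.

```python
import networkx as nx
import numpy as np

# Generic spectral extinction check on a scale-free network: for many
# spreading processes a state dies out when (convincement probability) x
# (largest eigenvalue of the connectivity matrix) < 1. This simplified,
# single-opinion form is an illustrative assumption, not the paper's result.
G = nx.barabasi_albert_graph(n=500, m=3, seed=42)
A = nx.to_numpy_array(G)
lam_max = np.max(np.linalg.eigvals(A).real)

for p in (0.01, 0.05, 0.2):                 # candidate convincement probabilities
    verdict = "extinction expected" if p * lam_max < 1 else "may persist"
    print(f"p = {p}: p * lambda_max = {p * lam_max:.2f} -> {verdict}")
```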
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
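A minimal sketch of the approach the authors argue for: each row of the transition matrix is drawn from a Dirichlet distribution whose parameters are the observed transition counts plus a prior, so sampled probabilities always sum to 1 and zero-count rows remain well defined. The counts and the uniform prior are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Each row of the transition matrix is drawn from a Dirichlet distribution
# whose parameters are observed transition counts plus a prior, so every
# sampled row of mutually exclusive probabilities sums to 1 by construction,
# and rows with zero observed transitions stay well defined.
observed_counts = np.array([
    [80, 15, 5],
    [10, 85, 5],
    [0,  0,  0],       # no transitions ever observed out of state 3
])
prior = np.ones(3)     # uniform Dirichlet(1, 1, 1) prior

def sample_transition_matrix():
    return np.vstack([rng.dirichlet(row + prior) for row in observed_counts])

P = sample_transition_matrix()     # one draw for probabilistic sensitivity analysis
print(P)
print("row sums:", P.sum(axis=1))  # each row sums to 1
```

Repeating the draw inside a probabilistic sensitivity analysis loop propagates the joint uncertainty in all transition probabilities through the Markov model at once.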
Markov modeling and discrete event simulation in health care: a systematic comparison.
Standfield, Lachlan; Comans, Tracy; Scuffham, Paul
2014-04-01
The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
Chen, Z M; Ji, S B; Shi, X L; Zhao, Y Y; Zhang, X F; Jin, H
2017-02-10
Objective: To evaluate the cost-utility of different hepatitis E vaccination strategies in women aged 15 to 49 years. Methods: A Markov decision tree model was constructed to evaluate the cost-utility of three hepatitis E virus vaccination strategies. Parameters of the model were estimated on the basis of published studies and expert experience. Sensitivity and threshold analyses were used to evaluate the uncertainties of the model. Results: Compared with no vaccination, post-screening vaccination with a 100% vaccination rate could save 0.10 quality-adjusted life years per capita among these women from the societal perspective. After implementation of the screening program, and with the vaccination rate reaching 100%, the incremental cost-utility ratio (ICUR) of vaccination was 5,651.89 and 6,385.33 Yuan/QALY, respectively. Vaccinating after implementation of a screening program showed greater benefit than a 100% vaccination rate alone. Sensitivity analysis showed that both the cost of the hepatitis E vaccine and the inoculation compliance rate had significant effects: if the vaccine cost fell below 191.56 Yuan (RMB) or the inoculation compliance rate fell below 0.23, the 100% vaccination strategy was better than the post-screening vaccination strategy; otherwise, the post-screening vaccination strategy was optimal. Conclusion: From the societal perspective, post-screening vaccination for women aged 15 to 49 appeared to be the optimal strategy, but this depends on changes in the vaccine cost and the inoculation compliance rate.
Pavement maintenance optimization model using Markov Decision Processes
NASA Astrophysics Data System (ADS)
Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.
2017-09-01
This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the MDP formulation, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
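The paper's exact formulation is not reproduced here, but an average-cost MDP can be written as a linear program over occupation measures, which is the kind of dual-LP policy development the abstract mentions. The sketch below solves such an LP with scipy for an invented three-state pavement example.

```python
import numpy as np
from scipy.optimize import linprog

# Average-cost MDP as a linear program over occupation measures x(s, a):
# minimize expected cost subject to stationarity and normalization.
# States are pavement conditions (0=good, 1=fair, 2=poor); actions are
# 0=do nothing, 1=maintain. All transitions and costs are invented.
nS, nA = 3, 2
P = np.zeros((nS, nA, nS))
P[0, 0] = [0.8, 0.2, 0.0]; P[0, 1] = [0.95, 0.05, 0.0]
P[1, 0] = [0.0, 0.7, 0.3]; P[1, 1] = [0.6, 0.4, 0.0]
P[2, 0] = [0.0, 0.0, 1.0]; P[2, 1] = [0.1, 0.7, 0.2]
cost = np.array([[0.0, 2.0], [1.0, 3.0], [6.0, 8.0]])

# Stationarity: for each s', sum_a x(s',a) - sum_{s,a} P(s'|s,a) x(s,a) = 0,
# plus the normalization sum_{s,a} x(s,a) = 1.
A_eq = np.zeros((nS + 1, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (s == sp) - P[s, a, sp]
A_eq[nS, :] = 1.0
b_eq = np.append(np.zeros(nS), 1.0)

res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
policy = x.argmax(axis=1)   # states with zero occupancy get an arbitrary action
print("average cost per period:", round(res.fun, 3), "| action per state:", policy)
```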
Health economic evaluation: important principles and methodology.
Rudmik, Luke; Drummond, Michael
2013-06-01
To discuss health economic evaluation and improve the understanding of common methodology. This article discusses the methodology for the following types of economic evaluations: cost-minimization, cost-effectiveness, cost-utility, cost-benefit, and economic modeling. Topics include health-state utility measures, the quality-adjusted life year (QALY), uncertainty analysis, discounting, decision tree analysis, and Markov modeling. Economic evaluation is the comparative analysis of alternative courses of action in terms of both their costs and consequences. With increasing health care expenditure and limited resources, it is important for physicians to consider the economic impact of their interventions. Understanding common methodology involved in health economic evaluation will improve critical appraisal of the literature and optimize future economic evaluations. Copyright © 2012 The American Laryngological, Rhinological and Otological Society, Inc.
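As a small worked example of the QALY and discounting arithmetic mentioned above (with invented utilities, costs, and a 3% discount rate, not figures from the article):

```python
# Worked QALY/discounting arithmetic with invented inputs: QALYs are
# utility-weighted years discounted to present value, and the ICER is the
# ratio of incremental cost to incremental QALYs between two interventions.
DISCOUNT = 0.03   # 3% annual discount rate (an assumption, not from the text)

def present_value(stream, rate=DISCOUNT):
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

qaly_a = present_value([0.8] * 5)      # intervention A: 5 years at utility 0.8
qaly_b = present_value([0.6] * 5)      # intervention B: 5 years at utility 0.6
cost_a = present_value([4000] * 5)     # annual costs of A
cost_b = present_value([1000] * 5)     # annual costs of B

icer = (cost_a - cost_b) / (qaly_a - qaly_b)
print(f"incremental QALYs: {qaly_a - qaly_b:.2f}, ICER: {icer:,.0f} per QALY")
```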
Predicting explorative motor learning using decision-making and motor noise.
Chen, Xiuli; Mohr, Kieran; Galea, Joseph M
2017-04-01
A fundamental problem faced by humans is learning to select motor actions based on noisy sensory information and incomplete knowledge of the world. Recently, a number of authors have asked whether this type of motor learning problem might be very similar to a range of higher-level decision-making problems. If so, participant behaviour on a high-level decision-making task could be predictive of their performance during a motor learning task. To investigate this question, we studied performance during an explorative motor learning task and a decision-making task which had a similar underlying structure with the exception that it was not subject to motor (execution) noise. We also collected an independent measurement of each participant's level of motor noise. Our analysis showed that explorative motor learning and decision-making could be modelled as the (approximately) optimal solution to a Partially Observable Markov Decision Process bounded by noisy neural information processing. The model was able to predict participant performance in motor learning by using parameters estimated from the decision-making task and the separate motor noise measurement. This suggests that explorative motor learning can be formalised as a sequential decision-making process that is adjusted for motor noise, and raises interesting questions regarding the neural origin of explorative motor learning.
Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F
2017-05-01
This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results-namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptions to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
Koenig, Lane; Zhang, Qian; Austin, Matthew S; Demiralp, Berna; Fehring, Thomas K; Feng, Chaoling; Mather, Richard C; Nguyen, Jennifer T; Saavoss, Asha; Springer, Bryan D; Yates, Adolph J
2016-12-01
Demand for total hip arthroplasty (THA) is high and expected to continue to grow during the next decade. Although much of this growth includes working-aged patients, cost-effectiveness studies on THA have not fully incorporated the productivity effects from surgery. We asked: (1) What is the expected effect of THA on patients' employment and earnings? (2) How does accounting for these effects influence the cost-effectiveness of THA relative to nonsurgical treatment? Taking a societal perspective, we used a Markov model to assess the overall cost-effectiveness of THA compared with nonsurgical treatment. We estimated direct medical costs using Medicare claims data and indirect costs (employment status and worker earnings) using regression models and nonparametric simulations. For direct costs, we estimated average spending 1 year before and after surgery. Spending estimates included physician and related services, hospital inpatient and outpatient care, and postacute care. For indirect costs, we estimated the relationship between functional status and productivity, using data from the National Health Interview Survey and regression analysis. Using regression coefficients and patient survey data, we ran a nonparametric simulation to estimate productivity (probability of working multiplied by earnings if working minus the value of missed work days) before and after THA. We used the Australian Orthopaedic Association National Joint Replacement Registry to obtain revision rates because it contained osteoarthritis-specific THA revision rates by age and gender, which were unavailable in other registry reports. Other model assumptions were extracted from a previously published cost-effectiveness analysis that included a comprehensive literature review. We incorporated all parameter estimates into Markov models to assess THA effects on quality-adjusted life years and lifetime costs. We conducted threshold and sensitivity analyses on direct costs, indirect costs, and revision rates to assess the robustness of our Markov model results. Compared with nonsurgical treatments, THA increased average annual productivity of patients by USD 9503 (95% CI, USD 1446-USD 17,812). We found that THA increases average lifetime direct costs by USD 30,365, which were offset by USD 63,314 in lifetime savings from increased productivity. With net societal savings of USD 32,948 per patient, total lifetime societal savings were estimated at almost USD 10 billion from more than 300,000 THAs performed in the United States each year. Using a Markov model approach, we show that THA produces societal benefits that can offset the costs of THA. When comparing THA with other nonsurgical treatments, policymakers should consider the long-term benefits associated with increased productivity from surgery. Level III, economic and decision analysis.
Observation uncertainty in reversible Markov chains.
Metzner, Philipp; Weber, Marcus; Schütte, Christof
2010-09-01
In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
Markov Tracking for Agent Coordination
NASA Technical Reports Server (NTRS)
Washington, Richard; Lau, Sonie (Technical Monitor)
1998-01-01
Partially observable Markov decision processes (POMDPs) are an attractive representation for agent behavior, since they capture uncertainty in both the agent's state and its actions. However, finding an optimal policy for POMDPs in general is computationally difficult. In this paper we present Markov Tracking, a restricted problem of coordinating actions with an agent or process represented as a POMDP. Because the actions coordinate with the agent rather than influence its behavior, the optimal solution to this problem can be computed locally and quickly. We also demonstrate the use of the technique on sequential POMDPs, which can be used to model a behavior that follows a linear, acyclic trajectory through a series of states. By imposing a "windowing" restriction that limits the number of possible alternatives considered at any moment to a fixed size, a coordinating action can be calculated in constant time, making this approach amenable to coordination with complex agents.
Optimal management of colorectal liver metastases in older patients: a decision analysis
Yang, Simon; Alibhai, Shabbir MH; Kennedy, Erin D; El-Sedfy, Abraham; Dixon, Matthew; Coburn, Natalie; Kiss, Alex; Law, Calvin HL
2014-01-01
Background Comparative trials evaluating management strategies for colorectal cancer liver metastases (CLM) are lacking, especially for older patients. This study developed a decision-analytic model to quantify outcomes associated with treatment strategies for CLM in older patients. Methods A Markov-decision model was built to examine the effect on life expectancy (LE) and quality-adjusted life expectancy (QALE) for best supportive care (BSC), systemic chemotherapy (SC), radiofrequency ablation (RFA) and hepatic resection (HR). The baseline patient cohort assumptions included healthy 70-year-old CLM patients after a primary cancer resection. Event and transition probabilities and utilities were derived from a literature review. Deterministic and probabilistic sensitivity analyses were performed on all study parameters. Results In base case analysis, BSC, SC, RFA and HR yielded LEs of 11.9, 23.1, 34.8 and 37.0 months, and QALEs of 7.8, 13.2, 22.0 and 25.0 months, respectively. Model results were sensitive to age, comorbidity, length of model simulation and utility after HR. Probabilistic sensitivity analysis showed increasing preference for RFA over HR with increasing patient age. Conclusions HR may be optimal for healthy 70-year-old patients with CLM. In older patients with comorbidities, RFA may provide better LE and QALE. Treatment decisions in older cancer patients should account for patient age, comorbidities, local expertise and individual values. PMID:24961482
Moolenaar, Lobke M; Broekmans, Frank J M; van Disseldorp, Jeroen; Fauser, Bart C J M; Eijkemans, Marinus J C; Hompes, Peter G A; van der Veen, Fulco; Mol, Ben Willem J
2011-10-01
To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF. [1] No treatment, [2] up to three cycles of IVF limited to women under 41 years and no ovarian reserve testing, [3] up to three cycles of IVF with dose individualization of gonadotropins according to ovarian reserve, and [4] up to three cycles of IVF with ovarian reserve testing and exclusion of expected poor responders after the first cycle, with no treatment scenario as the reference scenario. Cumulative live birth over 1 year, total costs, and incremental cost-effectiveness ratios. The cumulative live birth was 9.0% in the no treatment scenario, 54.8% for scenario 2, 70.6% for scenario 3 and 51.9% for scenario 4. Absolute costs per woman for these scenarios were €0, €6,917, €6,678, and €5,892 for scenarios 1, 2, 3, and 4, respectively. Incremental cost-effectiveness ratios (ICER) for scenarios 2, 3, and 4 were €15,166, €10,837, and €13,743 per additional live birth. Sensitivity analysis showed the model to be robust over a wide range of values. Individualization of the follicle-stimulating hormone dose according to ovarian reserve is likely to be cost effective in women who are eligible for IVF, but this effectiveness needs to be confirmed in randomized clinical trials. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
The economic impact of pig-associated parasitic zoonosis in Northern Lao PDR.
Choudhury, Adnan Ali Khan; Conlan, James V; Racloz, Vanessa Nadine; Reid, Simon Andrew; Blacksell, Stuart D; Fenwick, Stanley G; Thompson, Andrew R C; Khamlome, Boualam; Vongxay, Khamphouth; Whittaker, Maxine
2013-03-01
The parasitic zoonoses human cysticercosis (Taenia solium), taeniasis (other Taenia species) and trichinellosis (Trichinella species) are endemic in the Lao People's Democratic Republic (Lao PDR). This study was designed to quantify the economic burden that pig-associated zoonotic diseases pose in Lao PDR. In particular, the analysis included estimation of losses in the pork industry as well as losses due to human illness and lost productivity. A Markov-probability-based decision-tree model was chosen to form the basis of the calculations to estimate the economic and public health impacts of taeniasis, trichinellosis and cysticercosis. Two different decision trees were run simultaneously on the model's human cohort, and a third decision tree simulated the potential impacts on pig production. The human capital method was used to estimate productivity loss. The results varied significantly depending on the rate of hospitalisation due to neurocysticercosis. This study is the first systematic estimate of the economic impact of pig-associated zoonotic diseases in Lao PDR and demonstrates the significance of these diseases in that country.
NASA Technical Reports Server (NTRS)
Al-Jaar, Robert Y.; Desrochers, Alan A.
1989-01-01
The main objective of this research is to develop a generic modeling methodology with a flexible and modular framework to aid in the design and performance evaluation of integrated manufacturing systems using a unified model. After a thorough examination of the available modeling methods, the Petri Net approach was adopted. The concurrent and asynchronous nature of manufacturing systems is easily captured by Petri Net models. Three basic modules were developed: machine, buffer, and Decision Making Unit. The machine and buffer modules are used for modeling transfer lines and production networks. The Decision Making Unit models the functions of a computer node in a complex Decision Making Unit Architecture. The underlying model is a Generalized Stochastic Petri Net (GSPN) that can be used for performance evaluation and structural analysis. GSPNs were chosen because they help manage the complexity of modeling large manufacturing systems. There is no need to enumerate all the possible states of the Markov Chain since they are automatically generated from the GSPN model.
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models of fault-handling processes.
Markov Random Fields, Stochastic Quantization and Image Analysis
1990-01-01
Markov random fields based on the lattice Z^2 have been extensively used in image analysis in a Bayesian framework as a priori models for the... of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold, and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostics tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
A Markovian state-space framework for integrating flexibility into space system design decisions
NASA Astrophysics Data System (ADS)
Lafleur, Jarret M.
The past decades have seen the state of the art in aerospace system design progress from a scope of simple optimization to one including robustness, with the objective of permitting a single system to perform well even in off-nominal future environments. Integrating flexibility, or the capability to easily modify a system after it has been fielded in response to changing environments, into system design represents a further step forward. One challenge in accomplishing this rests in that the decision-maker must consider not only the present system design decision, but also sequential future design and operation decisions. Despite extensive interest in the topic, the state of the art in designing flexibility into aerospace systems, and particularly space systems, tends to be limited to analyses that are qualitative, deterministic, single-objective, and/or limited to consider a single future time period. To address these gaps, this thesis develops a stochastic, multi-objective, and multi-period framework for integrating flexibility into space system design decisions. Central to the framework are five steps. First, system configuration options are identified and costs of switching from one configuration to another are compiled into a cost transition matrix. Second, probabilities that demand on the system will transition from one mission to another are compiled into a mission demand Markov chain. Third, one performance matrix for each design objective is populated to describe how well the identified system configurations perform in each of the identified mission demand environments. The fourth step employs multi-period decision analysis techniques, including Markov decision processes from the field of operations research, to find efficient paths and policies a decision-maker may follow. The final step examines the implications of these paths and policies for the primary goal of informing initial system selection. Overall, this thesis unifies state-centric concepts of flexibility from economics and engineering literature with sequential decision-making techniques from operations research. The end objective of this thesis’ framework and its supporting tools is to enable selection of the next-generation space systems today, tailored to decision-maker budget and performance preferences, that will be best able to adapt and perform in a future of changing environments and requirements. Following extensive theoretical development, the framework and its steps are applied to space system planning problems of (1) DARPA-motivated multiple- or distributed-payload satellite selection and (2) NASA human space exploration architecture selection.
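A toy rendering of the five-step framework described above, with an invented cost transition matrix, mission demand chain, and performance matrix, solved as a Markov decision process by value iteration:

```python
import numpy as np

# Toy rendering of the five steps: (1) a configuration switching-cost matrix,
# (2) a mission demand Markov chain, (3) a performance matrix, and (4)-(5) a
# Markov decision process over (configuration, mission) states solved by
# value iteration. Every number below is an invented placeholder.
C = np.array([[0.0, 8.0],
              [8.0, 0.0]])       # step 1: cost of switching config i -> j
M = np.array([[0.9, 0.1],
              [0.3, 0.7]])       # step 2: mission demand transition chain
perf = np.array([[5.0, 1.0],
                 [1.0, 5.0]])    # step 3: reward of config j under mission m
GAMMA = 0.95

V = np.zeros((2, 2))             # V[current config, current mission]
for _ in range(1_000):
    Q = np.zeros((2, 2, 2))      # Q[i, m, j]: switch from config i to j
    for i in range(2):
        for m in range(2):
            for j in range(2):
                Q[i, m, j] = (-C[i, j] + perf[j, m]
                              + GAMMA * M[m] @ V[j])
    V = Q.max(axis=2)

policy = Q.argmax(axis=2)
print("policy[config, mission] -> next config:\n", policy)
```

The resulting policy tells the decision-maker, for each fielded configuration and observed mission demand, whether the discounted performance gain of switching configurations justifies the switching cost.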
Kirsch, Florian
2016-12-01
Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. This review presents a systematic overview of the economic effects of DMPs evaluated with Markov models. The quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g. results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost-of-illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% with the Philips et al. appraisal and from 34% to 52% with the Caro et al. appraisal. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs, and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are poor reporting practice and the variation in the selection of input parameters. Eleven of the 14 studies reporting such results found cost-effectiveness of less than $30,000 per quality-adjusted life-year, and the remaining two studies reported less than $30,000 per life-year gained. Nevertheless, if the reporting and data-selection problems are addressed, Markov models should provide more reliable information for decision makers, because understanding under what circumstances a DMP is cost-effective is an important determinant of efficient resource allocation.
NASA Astrophysics Data System (ADS)
Chen, Junhua
2013-03-01
To cope with the large amounts of data in modern sensed environments, decision-aid tools should provide their understanding of situations in a time-efficient manner, so there is an increasing need for real-time network security situation awareness and threat assessment. In this study, a state transition model of network vulnerabilities based on a semi-Markov process is first proposed. Once events are triggered by an attacker's action or a system response, the current states of the vulnerabilities are known. We then calculate the transition probabilities of each vulnerability from its current state to the security failure state. Furthermore, to improve the accuracy of our algorithms, we adjust the probabilities of exploiting a vulnerability according to the attacker's skill level. In light of the preconditions and postconditions of the vulnerabilities in the network, an attack graph is built to visualize the security situation in real time. Subsequently, we predict attack paths, recognize attack intentions and estimate their impact through analysis of the attack graph. These results help administrators gain insight into intrusion steps, determine the security state and assess the threat. Finally, testing in a network shows that this method is reasonable and feasible, and can take on a tremendous analysis task to facilitate administrators' work.
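The transition probability from a current vulnerability state to the security failure state can be illustrated with the standard absorbing-chain calculation on the embedded jump chain of a semi-Markov process. A minimal sketch with hypothetical states and probabilities (the paper's actual states, sojourn distributions, and skill-level adjustments are not reproduced here):

```python
import numpy as np

# Hypothetical embedded jump chain over vulnerability states; state 2 is a
# patched/recovered state and state 3 is the absorbing "security failure".
P = np.array([
    [0.0, 0.6, 0.3, 0.1],   # 0: dormant
    [0.0, 0.0, 0.5, 0.5],   # 1: probed/exploited
    [0.0, 0.0, 1.0, 0.0],   # 2: patched (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # 3: security failure (absorbing)
])
transient = [0, 1]
Q = P[np.ix_(transient, transient)]    # transient-to-transient block
R = P[np.ix_(transient, [2, 3])]       # transient-to-absorbing block

# Fundamental matrix N = (I - Q)^-1; B[i, j] = P(absorbed in j | start in i).
N = np.linalg.inv(np.eye(len(transient)) - Q)
B = N @ R
print("P(security failure | dormant) =", B[0, 1])   # 0.1 + 0.6 * 0.5 = 0.4
```

A skill-level adjustment, as described above, would rescale the exploit entries of P before this calculation.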
Density Control of Multi-Agent Systems with Safety Constraints: A Markov Chain Approach
NASA Astrophysics Data System (ADS)
Demirer, Nazli
The control of systems with autonomous mobile agents has been a point of interest recently, with many applications like surveillance, coverage, searching over an area with probabilistic target locations, or exploring an area. In all of these applications, the main goal of the swarm is to distribute itself over an operational space to achieve mission objectives specified by the desired density of the swarm. This research focuses on the problem of controlling the distribution of multi-agent systems using a hierarchical control structure in which coordination of the whole swarm is achieved at the high level and individual vehicle/agent control is managed at the low level. High-level coordination algorithms use macroscopic models that describe the collective behavior of the whole swarm and specify the agent motion commands whose execution will lead to the desired swarm behavior. The low-level control laws execute the motion to follow these commands at the agent level. The main objective of this research is to develop high-level decision control policies and algorithms to achieve physically realizable commanding of the agents by imposing mission constraints on the distribution. We also make some connections with decentralized low-level motion control. This dissertation proposes a Markov chain based method to control the density distribution of the whole system, where the implementation can be achieved in a decentralized manner with no communication between agents, since establishing communication with a large number of agents is highly challenging. The ultimate goal is to guide the overall density distribution of the system to a prescribed steady-state desired distribution while satisfying desired transition and safety constraints. Here, the desired distribution is determined based on the mission requirements; for example, in the application of area search, the desired distribution should match closely with the probabilistic target locations. The proposed method is applicable both to systems with a single agent and to systems with a large number of agents due to its probabilistic nature, where the probability distribution of each agent's state evolves according to a finite-state, discrete-time Markov chain (MC). Hence, designing proper decision control policies requires numerically tractable solution methods for the synthesis of Markov chains. The synthesis problem takes the form of a Linear Matrix Inequality (LMI) problem, with LMI formulations of the constraints. To this end, we propose convex necessary and sufficient conditions for safety constraints in Markov chains, which is a novel result in the Markov chain literature. In addition to the LMI-based, offline Markov matrix synthesis method, we also propose a QP-based, online method to compute a time-varying Markov matrix based on real-time density feedback. Both problems are convex optimization problems that can be solved in a reliable and tractable way using existing tools in the literature. Low Earth Orbit (LEO) swarm simulations are presented to validate the effectiveness of the proposed algorithms. Another problem tackled as part of this research is the generalization of the density control problem to autonomous mobile agents with two control modes: ON and OFF. Here, each mode consists of a (possibly overlapping) finite set of actions, that is, there exists a set of actions for the ON mode and another set for the OFF mode.
We formulate a new Markov chain synthesis problem, with additional measurements of the state transitions, in which a policy is designed to ensure desired safety and convergence properties for the underlying Markov chain.
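The LMI- and QP-based syntheses are too involved for a short example, but the central object, a Markov matrix whose stationary distribution equals a desired swarm density while transitions respect a motion-constraint graph, can be illustrated with a Metropolis-Hastings construction. This is a simpler stand-in for the dissertation's methods, shown on a hypothetical 4-bin operational area:

```python
import numpy as np

def metropolis_chain(adj, pi):
    """Reversible Markov matrix with stationary distribution pi, moving only
    along edges of the (symmetric) adjacency matrix adj."""
    n = len(pi)
    deg = adj.sum(axis=1)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                prop = 1.0 / deg[i]                          # propose a neighbor
                acc = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))
                M[i, j] = prop * acc
        M[i, i] = 1.0 - M[i].sum()                           # stay put otherwise
    return M

adj = np.array([[0, 1, 0, 0],    # hypothetical 4-bin area: agents may only
                [1, 0, 1, 0],    # move between physically adjacent bins
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
pi = np.array([0.4, 0.3, 0.2, 0.1])   # desired steady-state swarm density
M = metropolis_chain(adj, pi)

x = np.full(4, 0.25)                  # initially uniform agent density
for _ in range(200):
    x = x @ M                         # each agent applies M independently
print(np.round(x, 3))                 # -> approximately [0.4 0.3 0.2 0.1]
```

Because each agent samples its next bin from M independently, no inter-agent communication is needed, which is the decentralization property emphasized above.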
Discrete event simulation: the preferred technique for health economic evaluations?
Caro, Jaime J; Möller, Jörgen; Getsios, Denis
2010-12-01
To argue that discrete event simulation should be preferred to cohort Markov models for economic evaluations in health care. The basis for the modeling techniques is reviewed. For many health-care decisions, existing data are insufficient to fully inform them, necessitating the use of modeling to estimate the consequences that are relevant to decision-makers. These models must reflect what is known about the problem at a level of detail sufficient to inform the questions. Oversimplification will result in estimates that are not only inaccurate, but potentially misleading. Markov cohort models, though currently popular, have so many limitations and inherent assumptions that they are inadequate to inform most health-care decisions. An event-based individual simulation offers an alternative much better suited to the problem. A properly designed discrete event simulation provides more accurate, relevant estimates without being computationally prohibitive. It does require more data and may be a challenge to convey transparently, but these are necessary trade-offs to provide meaningful and valid results. In our opinion, discrete event simulation should be the preferred technique for health economic evaluations today. © 2010, International Society for Pharmacoeconomics and Outcomes Research (ISPOR).
Bayesian analysis of non-homogeneous Markov chains: application to mental health data.
Sung, Minje; Soyer, Refik; Nhan, Nguyen
2007-07-10
In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analyses of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study.
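A minimal sketch of the core idea, a non-homogeneous chain whose transition probabilities follow a logistic regression in time and covariates; the coefficients are hypothetical illustrations, not the paper's hierarchical Bayesian estimates:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def transition_matrix(t, x, beta):
    """Two-state non-homogeneous chain: the probability of moving from
    state 0 ('symptomatic') to state 1 ('improved') follows a logistic
    regression in time t and a covariate x."""
    p01 = sigmoid(beta[0] + beta[1] * t + beta[2] * x)
    p10 = sigmoid(beta[3] + beta[4] * t)
    return np.array([[1 - p01, p01],
                     [p10, 1 - p10]])

beta = [-2.0, 0.3, 0.5, -1.0, -0.1]    # illustrative values only
rng = np.random.default_rng(1)
state, x = 0, 1.0                      # e.g. x = treatment-program indicator
path = [state]
for t in range(12):                    # simulate monthly assessments
    P = transition_matrix(t, x, beta)
    state = rng.choice(2, p=P[state])
    path.append(state)
print(path)
```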
Hatz, Maximilian H M; Leidl, Reiner; Yates, Nichola A; Stollenwerk, Björn
2014-04-01
Thrombosis inhibitors can be used to treat acute coronary syndromes (ACS). However, there are various alternative treatment strategies, of which some have been compared using health economic decision models. The objective was to assess the quality of health economic decision models comparing thrombosis inhibitors in patients with ACS undergoing percutaneous coronary intervention, and to identify areas for quality improvement. The literature databases MEDLINE, EMBASE, EconLit, National Health Service Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA) were searched. A review of the quality of health economic decision models was conducted by two independent reviewers, using the Philips checklist. Twenty-one relevant studies were identified. Differences were apparent regarding the model type (six decision trees, four Markov models, eight combinations, three undefined models), the model structure (types of events, Markov states) and the incorporation of data (efficacy, cost and utility data). Critical issues were the absence of particular events (e.g. thrombocytopenia, stroke) and the questionable usage of utility values within some studies. As we restricted our search to health economic decision models comparing thrombosis inhibitors, interesting aspects related to the quality of studies of adjacent medical areas that compared stents or procedures could have been missed. This review identified areas where recommendations are indicated regarding the quality of future ACS decision models. For example, all critical events and relevant treatment options should be included. Models also need to allow for changing event probabilities to correctly reflect ACS and to incorporate appropriate, age-specific utility values and decrements when conducting cost-utility analyses.
NASA Astrophysics Data System (ADS)
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the Soil Conservation Service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses; this implies that the processes of percolation and evaporation would impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating/predicting water resources.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Vaidya, Anil; Vaidya, Param; Both, Brigitte; Brew-Graves, Chris; Bulsara, Max; Vaidya, Jayant S
2017-08-17
The clinical effectiveness of targeted intraoperative radiotherapy (TARGIT-IORT) has been confirmed in the randomised TARGIT-A (targeted intraoperative radiotherapy-alone) trial to be similar to a several weeks' course of whole-breast external-beam radiation therapy (EBRT) in patients with early breast cancer. This study aims to determine the cost-effectiveness of TARGIT-IORT to inform policy decisions about its wider implementation. Data were taken from the TARGIT-A randomised clinical trial (ISRCTN34086741), which compared TARGIT with traditional EBRT and found similar breast cancer control, particularly when TARGIT was given simultaneously with lumpectomy. A cost-utility analysis was performed using decision-analytic modelling with a Markov model. A cost-effectiveness Markov model was developed using TreeAge Pro V.2015. The decision analytic model compared two strategies of radiotherapy for breast cancer in a hypothetical cohort of patients with early breast cancer based on the published health state transition probability data from the TARGIT-A trial. The analysis was performed for the UK setting and the National Health Service (NHS) healthcare payer's perspective using NHS cost data, and treatment outcomes were simulated for both strategies for a time horizon of 10 years. Model health state utilities were drawn from the published literature. Future costs and effects were discounted at the rate of 3.5%. To address uncertainty, one-way and probabilistic sensitivity analyses were performed. The outcome measure was quality-adjusted life-years (QALYs). In the base case analysis, TARGIT-IORT was a highly cost-effective strategy yielding health gain at a lower cost than its comparator EBRT. Discounted TARGIT-IORT and EBRT costs for the time horizon of 10 years were £12 455 and £13 280, respectively. TARGIT-IORT gained 0.18 incremental QALYs (8.15 vs 7.97 discounted QALYs), showing TARGIT-IORT to be a dominant strategy over EBRT. Model outputs were robust to one-way and probabilistic sensitivity analyses. TARGIT-IORT is a dominant strategy over EBRT, being less costly and producing higher QALY gain. ISRCTN34086741; post results.
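The mechanics of such a cohort Markov model with 3.5% discounting can be sketched in a few lines. The states, transition probabilities, utilities, and costs below are illustrative placeholders, not the TARGIT-A model inputs:

```python
import numpy as np

# Hypothetical annual transition matrix over states:
# 0 = disease-free, 1 = recurrence, 2 = dead.
P = np.array([[0.96, 0.03, 0.01],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
utility = np.array([0.90, 0.60, 0.00])        # QALY weight per state-year
annual_cost = np.array([500.0, 4000.0, 0.0])  # cost per state-year
r = 0.035                                     # discount rate, as in the study

cohort = np.array([1.0, 0.0, 0.0])            # everyone starts disease-free
qalys = costs = 0.0
for year in range(10):                        # 10-year horizon
    disc = 1.0 / (1.0 + r) ** year
    qalys += disc * cohort @ utility
    costs += disc * cohort @ annual_cost
    cohort = cohort @ P                       # advance the cohort one cycle
print(f"discounted QALYs: {qalys:.2f}, discounted cost: {costs:.0f}")
```

Running one such model per strategy and differencing costs and QALYs yields the incremental results reported above.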
Liu, Zengkai; Liu, Yonghong; Cai, Baoping
2014-01-01
Reliability analysis of the electrical control system of a subsea blowout preventer (BOP) stack is carried out based on the Markov method. For the subsea BOP electrical control system used in the current work, the 3-2-1-0 and 3-2-0 input voting schemes are available. The effects of the voting schemes on system performance are evaluated based on Markov models. In addition, the effects of the failure rates of the modules and of the repair time on system reliability indices are also investigated.
Using Markov Chain Analyses in Counselor Education Research
ERIC Educational Resources Information Center
Duys, David K.; Headrick, Todd C.
2004-01-01
This study examined the efficacy of an infrequently used statistical analysis in counselor education research. A Markov chain analysis was used to examine hypothesized differences between students' use of counseling skills in an introductory course. Thirty graduate students participated in the study. Independent raters identified the microskills…
Bettenbühl, Mario; Rusconi, Marco; Engbert, Ralf; Holschneider, Matthias
2012-01-01
Complex biological dynamics often generate sequences of discrete events which can be described as a Markov process. The order of the underlying Markovian stochastic process is fundamental for characterizing statistical dependencies within sequences. As an example for this class of biological systems, we investigate the Markov order of sequences of microsaccadic eye movements from human observers. We calculate the integrated likelihood of a given sequence for various orders of the Markov process and use this in a Bayesian framework for statistical inference on the Markov order. Our analysis shows that data from most participants are best explained by a first-order Markov process. This is compatible with recent findings of a statistical coupling of subsequent microsaccade orientations. Our method might prove to be useful for a broad class of biological systems.
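The paper computes integrated likelihoods in a Bayesian framework; as a simplified stand-in, the sketch below fits Markov chains of orders 0 through 2 by maximum likelihood and compares BIC scores on a simulated binary sequence (e.g. left/right microsaccade orientations):

```python
import numpy as np
from collections import Counter

def bic_markov(seq, order, n_symbols):
    """BIC for a Markov chain of the given order fitted by maximum likelihood.
    A simplified stand-in for the paper's integrated-likelihood approach."""
    counts = Counter(tuple(seq[i:i + order + 1])
                     for i in range(len(seq) - order))
    context_totals = Counter(tuple(seq[i:i + order])
                             for i in range(len(seq) - order))
    loglik = sum(c * np.log(c / context_totals[key[:-1]])
                 for key, c in counts.items())
    n_params = (n_symbols ** order) * (n_symbols - 1)
    return -2 * loglik + n_params * np.log(len(seq) - order)

rng = np.random.default_rng(0)
# Simulate a first-order binary sequence.
P = np.array([[0.8, 0.2], [0.4, 0.6]])
seq, s = [], 0
for _ in range(2000):
    s = rng.choice(2, p=P[s])
    seq.append(s)
for m in (0, 1, 2):
    print(f"order {m}: BIC = {bic_markov(seq, m, 2):.1f}")  # order 1 should win
```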
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
ERIC Educational Resources Information Center
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
Assessing the Value of Frost Forecasts to Orchardists: A Dynamic Decision-Making Approach.
NASA Astrophysics Data System (ADS)
Katz, Richard W.; Murphy, Allan H.; Winkler, Robert L.
1982-04-01
The methodology of decision analysis is used to investigate the economic value of frost (i.e., minimum temperature) forecasts to orchardists. First, the fruit-frost situation and previous studies of the value of minimum temperature forecasts in this context are described. Then, after a brief overview of decision analysis, a decision-making model for the fruit-frost problem is presented. The model involves identifying the relevant actions and events (or outcomes), specifying the effect of taking protective action, and describing the relationships among temperature, bud loss, and yield loss. A bivariate normal distribution is used to model the relationship between forecast and observed temperatures, thereby characterizing the quality of different types of information. Since the orchardist wants to minimize expenses (or maximize payoffs) over the entire frost-protection season and since current actions and outcomes at any point in the season are related to both previous and future actions and outcomes, the decision-making problem is inherently dynamic in nature. As a result, a class of dynamic models known as Markov decision processes is considered. A computational technique called dynamic programming is used in conjunction with these models to determine the optimal actions and to estimate the value of meteorological information. Some results concerning the value of frost forecasts to orchardists in the Yakima Valley of central Washington are presented for the cases of Red Delicious apples, Bartlett pears, and Elberta peaches. Estimates of the parameter values in the Markov decision process are obtained from relevant physical and economic data. Twenty years of National Weather Service forecast and observed temperatures for the Yakima key station are used to estimate the quality of different types of information, including perfect forecasts, current forecasts, and climatological information. The orchardist's optimal actions over the frost-protection season and the expected expenses associated with the use of such information are determined using a dynamic programming algorithm. The value of meteorological information is defined as the difference between the expected expense for the information of interest and the expected expense for climatological information. Over the entire frost-protection season, the value estimates (in 1977 dollars) for current forecasts were $808 per acre for Red Delicious apples, $492 per acre for Bartlett pears, and $270 per acre for Elberta peaches. These amounts account for 66, 63, and 47%, respectively, of the economic value associated with decisions based on perfect forecasts. Varying the quality of the minimum temperature forecasts reveals that the relationship between the accuracy and value of such forecasts is nonlinear and that improvements in current forecasts would not be as significant in terms of economic value as were comparable improvements in the past. Several possible extensions of this study of the value of frost forecasts to orchardists are briefly described. Finally, the application of the dynamic model formulated in this paper to other decision-making problems involving the use of meteorological information is mentioned.
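The backward-induction logic of such a dynamic program can be sketched on a deliberately simplified version of the problem: a single nightly frost probability, fully effective protection, and hypothetical costs and crop values, none taken from the study:

```python
import numpy as np

# Hypothetical nightly frost-protection problem. States track the fraction of
# buds still alive; each night the orchardist chooses protect / not protect.
N_NIGHTS = 30
p_frost = 0.15          # probability of a damaging frost on a given night
protect_cost = 20.0     # $/acre per protected night
frost_loss = 0.10       # fraction of remaining crop lost if frosted, unprotected
crop_value = 3000.0     # $/acre of a full harvest
bud_levels = np.linspace(0.0, 1.0, 21)   # discretized bud-survival states

V = crop_value * bud_levels              # terminal value: harvest payoff
for night in range(N_NIGHTS):
    V_new = np.empty_like(V)
    for i, b in enumerate(bud_levels):
        # Protecting keeps buds intact at a cost (assumed fully effective).
        v_protect = -protect_cost + V[i]
        # Not protecting risks frost damage to the remaining buds.
        j = np.abs(bud_levels - b * (1 - frost_loss)).argmin()
        v_no = p_frost * V[j] + (1 - p_frost) * V[i]
        V_new[i] = max(v_protect, v_no)
    V = V_new
print(f"expected season value with full buds: ${V[-1]:.0f}/acre")
```

In the full model, p_frost would vary night by night with the forecast, which is exactly where forecast quality enters the value calculation.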
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Špačková, Olga; Straub, Daniel
2017-04-01
Flood protection is often designed to safeguard people and property following regulations and standards, which specify a target design flood protection level, such as the 100-year flood level prescribed in Germany (DWA, 2011). In practice, the magnitude of such an event is only known within a range of uncertainty, which is caused by limited historic records and uncertain climate change impacts, among other factors (Hall & Solomatine, 2008). As more observations and improved climate projections become available in the future, the design flood estimate changes and the capacity of the flood protection may be deemed insufficient at a future point in time. This problem can be mitigated by the implementation of flexible flood protection systems (that can easily be adjusted in the future) and/or by adding an additional reserve to the flood protection, i.e. by applying a safety factor to the design. But how high should such a safety factor be? And how much should the decision maker be willing to pay to make the system flexible, i.e. what is the Value of Flexibility (Špačková & Straub, 2017)? We propose a decision model that identifies cost-optimal decisions on flood protection capacity in the face of uncertainty (Dittes et al. 2017). It considers sequential adjustments of the protection system during its lifetime, taking into account its flexibility. The proposed framework is based on pre-posterior Bayesian decision analysis, using Decision Trees and Markov Decision Processes, and is fully quantitative. It can include a wide range of uncertainty components such as uncertainty associated with limited historic record or uncertain climate or socio-economic change. It is shown that since flexible systems are less costly to adjust when flood estimates are changing, they justify initially lower safety factors. Investigation on the Value of Flexibility (VoF) demonstrates that VoF depends on the type and degree of uncertainty, on the learning effect (i.e. kind and quality of information that we will gather in the future) and on the formulation of the optimization problem (risk-based vs. rule-based approach). The application of the framework is demonstrated on catchments in Germany. References: DWA (Deutsche Vereinigung für Wasserwirtschaft Abwasser und Abfall eV.) 2011. Merkblatt DWA-M 507-1: Deiche an Fließgewässern. (A. Bieberstein, Ed.). Hennef: DWA Deutsche Vereinigung für Wasserwirtschaft, Abwasser und Abfall e. V. Hall, J., & Solomatine, D. 2008. A framework for uncertainty analysis in flood risk management decisions. International Journal of River Basin Management, 6(2), 85-98. http://doi.org/10.1080/15715124.2008.9635339 Špačková, O. & Straub, D. 2017. Long-term adaption decisions via fully and partially observable Markov decision processes. Sustainable and Resilient Infrastructure. In print.
Markov State Models of gene regulatory networks.
Chu, Brian K; Tse, Margaret J; Sato, Royce R; Read, Elizabeth L
2017-02-06
Gene regulatory networks with dynamics characterized by multiple stable states underlie cell fate-decisions. Quantitative models that can link molecular-level knowledge of gene regulation to a global understanding of network dynamics have the potential to guide cell-reprogramming strategies. Networks are often modeled by the stochastic Chemical Master Equation, but methods for systematic identification of key properties of the global dynamics are currently lacking. Here, a Markov State Model is constructed from the stochastic dynamics of the network. The method identifies the number, phenotypes, and lifetimes of long-lived states for a set of common gene regulatory network models. Application of transition path theory to the constructed Markov State Model decomposes global dynamics into a set of dominant transition paths and associated relative probabilities for stochastic state-switching. In this proof-of-concept study, we found that the Markov State Model provides a general framework for analyzing and visualizing stochastic multistability and state-transitions in gene networks. Our results suggest that this framework, adopted from the field of atomistic Molecular Dynamics, can be a useful tool for quantitative Systems Biology at the network scale.
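A minimal sketch of how an MSM's spectrum reveals long-lived states: the stationary distribution is the leading left eigenvector, implied timescales follow from the next eigenvalues, and the sign structure of the second eigenvector splits microstates into metastable phenotypes. The toy transition matrix is illustrative, not one of the paper's networks:

```python
import numpy as np

# Toy MSM transition matrix for a bistable gene network: microstates 0-1 form
# one phenotype ("off"), microstates 2-3 the other ("on"); values illustrative.
T = np.array([[0.90, 0.09, 0.01, 0.00],
              [0.09, 0.90, 0.01, 0.00],
              [0.00, 0.01, 0.90, 0.09],
              [0.00, 0.01, 0.09, 0.90]])

evals, evecs = np.linalg.eig(T.T)
order = np.argsort(-evals.real)
evals, evecs = evals.real[order], evecs[:, order].real

pi = evecs[:, 0] / evecs[:, 0].sum()         # stationary distribution
lag = 1.0                                    # lag time, arbitrary units
timescales = -lag / np.log(evals[1:])        # implied relaxation timescales
print("stationary:", np.round(pi, 3))
print("slowest implied timescale:", round(timescales[0], 1))
# A spectral gap after the second eigenvalue indicates two long-lived states;
# the sign structure of evecs[:, 1] assigns microstates to phenotypes.
print("metastable split:", np.sign(evecs[:, 1]))
```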
An optimal repartitioning decision policy
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Reynolds, P. F., Jr.
1986-01-01
A central problem to parallel processing is the determination of an effective partitioning of workload to processors. The effectiveness of any given partition is dependent on the stochastic nature of the workload. The problem of determining when and if the stochastic behavior of the workload has changed enough to warrant the calculation of a new partition is treated. The problem is modeled as a Markov decision process, and an optimal decision policy is derived. Quantification of this policy is usually intractable. A heuristic policy which performs nearly optimally is investigated empirically. The results suggest that the detection of change is the predominant issue in this problem.
Forecasting client transitions in British Columbia's Long-Term Care Program.
Lane, D; Uyeno, D; Stark, A; Gutman, G; McCashin, B
1987-01-01
This article presents a model for the annual transitions of clients through various home and facility placements in a long-term care program. The model, an application of Markov chain analysis, is developed, tested, and applied to over 9,000 clients (N = 9,483) in British Columbia's Long Term Care Program (LTC) over the period 1978-1983. Results show that the model gives accurate forecasts of the progress of groups of clients from state to state in the long-term care system from time of admission until eventual death. Statistical methods are used to test the modeling hypothesis that clients' year-over-year transitions occur in constant proportions from state to state within the long-term care system. Tests are carried out by examining actual year-over-year transitions of each year's new admission cohort (1978-1983). Various subsets of the available data are analyzed and, after accounting for clear differences among annual cohorts, the most acceptable model of the actual client transition data occurred when clients were separated into male and female groups, i.e., the transition behavior of each group is describable by a different Markov model. To validate the model, we develop model estimates for the numbers of existing clients in each state of the long-term care system for the period (1981-1983) for which actual data are available. When these estimates are compared with the actual data, total weighted absolute deviations do not exceed 10 percent of actuals. Finally, we use the properties of the Markov chain probability transition matrix and simulation methods to develop three-year forecasts with prediction intervals for the distribution of the existing total clients into each state of the system. The tests, forecasts, and Markov model supplemental information are contained in a mechanized procedure suitable for a microcomputer. The procedure provides a powerful, efficient tool for decision makers planning facilities and services in response to the needs of long-term care clients.
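The forecasting mechanics, repeated multiplication of the state distribution by the annual transition matrix, can be sketched directly. The matrix and census below are hypothetical, not the British Columbia estimates, and the article fits separate matrices for males and females:

```python
import numpy as np

# Hypothetical annual transition matrix over long-term care placements:
# 0 = home care, 1 = intermediate care facility, 2 = extended care, 3 = dead.
P = np.array([[0.70, 0.15, 0.05, 0.10],
              [0.05, 0.60, 0.20, 0.15],
              [0.00, 0.05, 0.70, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

cohort = np.array([6000, 2500, 1000, 0], dtype=float)  # admission-year census
for year in range(1, 4):                               # three-year forecast
    cohort = cohort @ P                                # Chapman-Kolmogorov step
    print(f"year {year}:", np.round(cohort).astype(int))
```

Prediction intervals, as in the article, would come from simulating individual client paths through P rather than propagating the expected counts.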
Automated Guidance from Physiological Sensing to Reduce Thermal-Work Strain Levels on a Novel Task
USDA-ARS?s Scientific Manuscript database
This experiment demonstrated that automated pace guidance generated from real-time physiological monitoring allowed the least stressful completion of a timed (60-minute limit) 5-mile treadmill exercise. An optimal pacing policy was estimated from a Markov decision process that balanced the goals of the...
Unsupervised MDP Value Selection for Automating ITS Capabilities
ERIC Educational Resources Information Center
Stamper, John; Barnes, Tiffany
2009-01-01
We seek to simplify the creation of intelligent tutors by using student data acquired from standard computer aided instruction (CAI) in conjunction with educational data mining methods to automatically generate adaptive hints. In our previous work, we have automatically generated hints for logic tutoring by constructing a Markov Decision Process…
Lee, Kyung-Eun; Park, Hyun-Seok
2015-01-01
Epigenetic computational analyses based on Markov chains can integrate dependencies between regions in the genome that are directly adjacent. In this paper, the BED files of fifteen chromatin states of the Broad Histone Track of the ENCODE project are parsed, and comparative nucleotide frequencies of regional chromatin blocks are thoroughly analyzed to detect the Markov property in them. We perform various tests in the frequency domain to check for the presence of the Markov property in the various chromatin states, applying these tests to each region of the fifteen chromatin states. The results of our simulation indicate that some of the chromatin states possess a stronger Markov property than others. We discuss the significance of our findings for statistical models of nucleotide sequences, which are necessary for the computational analysis of functional units in noncoding DNA.
A quantum probability explanation for violations of ‘rational’ decision theory
Pothos, Emmanuel M.; Busemeyer, Jerome R.
2009-01-01
Two experimental tasks in psychology, the two-stage gambling game and the Prisoner's Dilemma game, show that people violate the sure thing principle of decision theory. These paradoxical findings have resisted explanation by classical decision theory for over a decade. A quantum probability model, based on a Hilbert space representation and Schrödinger's equation, provides a simple and elegant explanation for this behaviour. The quantum model is compared with an equivalent Markov model and it is shown that the latter is unable to account for violations of the sure thing principle. Accordingly, it is argued that quantum probability provides a better framework for modelling human decision-making.
A model of interaction between anticorruption authority and corruption groups
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neverova, Elena G.; Malafeyef, Oleg A.
The paper provides a model of interaction between an anticorruption unit and corruption groups. The main policy functions of the anticorruption unit involve reducing corrupt practices in some entities through an optimal approach to resource allocation and effective anticorruption policy. We develop a model based on a Markov decision process and use Howard's policy-improvement algorithm to solve for an optimal decision strategy. We examine the assumption that corruption groups retaliate against the anticorruption authority to protect themselves. The model is implemented as a stochastic game.
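A generic implementation of Howard's policy-improvement algorithm for a discounted MDP; the states (corruption levels) and actions (enforcement intensities) below are hypothetical illustrations, not the paper's calibration:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's policy-improvement algorithm for a discounted MDP.
    P[a][s, s'] are transition probabilities, R[a][s] expected rewards."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        R_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V.
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

# Hypothetical 3-state corruption-level chain, 2 enforcement intensities.
P = [np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]),  # light
     np.array([[0.8, 0.2, 0.0], [0.5, 0.4, 0.1], [0.3, 0.4, 0.3]])]  # heavy
R = [np.array([0.0, -2.0, -5.0]),          # social cost of corruption level
     np.array([-1.0, -3.0, -6.0])]         # plus enforcement expense
policy, V = policy_iteration(P, R)
print("optimal enforcement per state:", policy, "values:", np.round(V, 2))
```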
Optimal inventories for overhaul of repairable redundant systems - A Markov decision model
NASA Technical Reports Server (NTRS)
Schaefer, M. K.
1984-01-01
A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.
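The linear programming route to an MDP solution can be sketched with scipy: minimize the summed state values subject to the Bellman inequalities, then read the policy off the solution. States, costs, and probabilities below are hypothetical, not the avionics data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-state spare-parts system (0 = all parts up, 2 = system down),
# 2 actions (0 = hold current inventory, 1 = stock an extra spare).
P = [np.array([[0.90, 0.08, 0.02], [0.00, 0.80, 0.20], [0.50, 0.00, 0.50]]),
     np.array([[0.95, 0.04, 0.01], [0.30, 0.60, 0.10], [0.70, 0.00, 0.30]])]
R = [np.array([0.0, -10.0, -100.0]),     # shortage/repair costs, no holding
     np.array([-5.0, -12.0, -102.0])]    # same plus holding cost of a spare
gamma, n = 0.95, 3

# LP: minimize sum(V) subject to V[s] >= R[a][s] + gamma * P[a][s] @ V.
A_ub, b_ub = [], []
for a in range(len(P)):
    for s in range(n):
        A_ub.append(gamma * P[a][s] - np.eye(n)[s])
        b_ub.append(-R[a][s])
res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n)
V = res.x
policy = [max(range(len(P)), key=lambda a: R[a][s] + gamma * P[a][s] @ V)
          for s in range(n)]
print("optimal values:", np.round(V, 1), "policy:", policy)
```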
Optimal dynamic control of resources in a distributed system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang
1989-01-01
The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
Karmarkar, Taruja D; Maurer, Anne; Parks, Michael L; Mason, Thomas; Bejinez-Eastman, Ana; Harrington, Melvyn; Morgan, Randall; O'Connor, Mary I; Wood, James E; Gaskin, Darrell J
2017-12-01
Disparities in the presentation of knee osteoarthritis (OA) and in the utilization of treatment across sex, racial, and ethnic groups in the United States are well documented. We used a Markov model to calculate lifetime costs of knee OA treatment. We then used the model results to compute costs of disparities in treatment by race, ethnicity, sex, and socioeconomic status. We used the literature to construct a Markov Model of knee OA and publicly available data to create the model parameters and patient populations of interest. An expert panel of physicians, who treated a large number of patients with knee OA, constructed treatment pathways. Direct costs were based on the literature and indirect costs were derived from the Medical Expenditure Panel Survey. We found that failing to obtain effective treatment increased costs and limited benefits for all groups. Delaying treatment imposed a greater cost across all groups and decreased benefits. Lost income because of lower labor market productivity comprised a substantial proportion of the lifetime costs of knee OA. Population simulations demonstrated that as the diversity of the US population increases, the societal costs of racial and ethnic disparities in treatment utilization for knee OA will increase. Our results show that disparities in treatment of knee OA are costly. All stakeholders involved in treatment decisions for knee OA patients should consider costs associated with delaying and forgoing treatment, especially for disadvantaged populations. Such decisions may lead to higher costs and worse health outcomes.
Markov models of genome segmentation
NASA Astrophysics Data System (ADS)
Thakur, Vivek; Azad, Rajeev K.; Ramaswamy, Ram
2007-01-01
We introduce Markov models for segmentation of symbolic sequences, extending a segmentation procedure based on the Jensen-Shannon divergence that has been introduced earlier. Higher-order Markov models are more sensitive to the details of local patterns and in application to genome analysis, this makes it possible to segment a sequence at positions that are biologically meaningful. We show the advantage of higher-order Markov-model-based segmentation procedures in detecting compositional inhomogeneity in chimeric DNA sequences constructed from genomes of diverse species, and in application to the E. coli K12 genome, boundaries of genomic islands, cryptic prophages, and horizontally acquired regions are accurately identified.
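The underlying split criterion, the Jensen-Shannon divergence between the compositions of candidate segments, can be sketched in its order-0 form (higher-order Markov versions replace symbol counts with k-mer counts). The chimeric test sequence is synthetic:

```python
import numpy as np
from collections import Counter

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def js_divergence(left, right):
    """Weighted Jensen-Shannon divergence between the symbol compositions
    of two sequence segments."""
    n_l, n_r = len(left), len(right)
    whole = Counter(left) + Counter(right)
    h_parts = (n_l * entropy(Counter(left)) +
               n_r * entropy(Counter(right))) / (n_l + n_r)
    return entropy(whole) - h_parts

def best_split(seq, min_len=50):
    scores = [(js_divergence(seq[:i], seq[i:]), i)
              for i in range(min_len, len(seq) - min_len)]
    return max(scores)

# Chimeric toy sequence: a GC-rich half glued to an AT-rich half.
rng = np.random.default_rng(2)
gc = rng.choice(list("ACGT"), 500, p=[0.1, 0.4, 0.4, 0.1])
at = rng.choice(list("ACGT"), 500, p=[0.4, 0.1, 0.1, 0.4])
seq = "".join(gc) + "".join(at)
score, cut = best_split(seq)
print(f"maximum JSD {score:.4f} at position {cut}")  # expected near 500
```

Recursive application of the split, with a significance threshold on the divergence, yields the full segmentation.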
Jones, Edmund; Epstein, David; García-Mochón, Leticia
2017-10-01
For health-economic analyses that use multistate Markov models, it is often necessary to convert from transition rates to transition probabilities, and for probabilistic sensitivity analysis and other purposes it is useful to have explicit algebraic formulas for these conversions, to avoid having to resort to numerical methods. However, if there are four or more states then the formulas can be extremely complicated. These calculations can be made using packages such as R, but many analysts and other stakeholders still prefer to use spreadsheets for these decision models. We describe a procedure for deriving formulas that use intermediate variables so that each individual formula is reasonably simple. Once the formulas have been derived, the calculations can be performed in Excel or similar software. The procedure is illustrated by several examples and we discuss how to use a computer algebra system to assist with it. The procedure works in a wide variety of scenarios but cannot be employed when there are several backward transitions and the characteristic equation has no algebraic solution, or when the eigenvalues of the transition rate matrix are very close to each other.
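The numerical counterparts of these conversions are a one-liner with scipy, and the eigendecomposition below is the same device that yields the paper's closed-form expressions when the rate matrix has distinct eigenvalues. The generator is hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state transition-rate (generator) matrix Q; rows sum to zero.
Q = np.array([[-0.15, 0.10, 0.05],
              [0.00, -0.20, 0.20],
              [0.00, 0.00, 0.00]])
t = 1.0  # cycle length in years

# Numerical reference: transition probabilities P(t) = expm(Q t).
P_expm = expm(Q * t)

# Eigendecomposition route, the basis of the algebraic formulas
# (valid here because Q has distinct eigenvalues):
evals, V = np.linalg.eig(Q)
P_eig = (V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V)).real

print(np.allclose(P_expm, P_eig))   # True
print(np.round(P_expm, 4))
```

As the article notes, when several backward transitions make the characteristic equation algebraically unsolvable, only the numerical route remains.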
Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains
Meyer, Denny; Forbes, Don; Clarke, Stephen R.
2006-01-01
Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key points: A comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes.
Free energies from dynamic weighted histogram analysis using unbiased Markov state model.
Rosta, Edina; Hummer, Gerhard
2015-01-13
The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
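In the unbiased, single-trajectory special case, DHAM's estimate reduces to row-normalized transition counts whose stationary distribution gives the free-energy profile; the full method jointly unbiases counts from multiple umbrella windows. A toy sketch on a synthetic double-well trajectory:

```python
import numpy as np

def mle_transition_matrix(traj, n_bins):
    """Row-normalized transition counts: the maximum likelihood Markov matrix
    for an unbiased trajectory over discretized reaction-coordinate bins."""
    C = np.zeros((n_bins, n_bins))
    for i, j in zip(traj[:-1], traj[1:]):
        C[i, j] += 1
    return C / C.sum(axis=1, keepdims=True)

def free_energy_profile(M, kT=1.0):
    evals, evecs = np.linalg.eig(M.T)
    pi = np.abs(evecs[:, np.argmax(evals.real)].real)
    pi /= pi.sum()                       # stationary distribution
    return -kT * np.log(pi)              # F_i = -kT ln pi_i

# Toy double-well trajectory over 20 bins (Metropolis dynamics).
rng = np.random.default_rng(3)
x_grid = np.linspace(-1.5, 1.5, 20)
U = (x_grid ** 2 - 1) ** 2               # double-well potential
traj, s = [], 0
for _ in range(200_000):
    s_new = min(max(s + rng.choice([-1, 1]), 0), 19)
    if rng.random() < np.exp(-(U[s_new] - U[s])):
        s = s_new
    traj.append(s)
M = mle_transition_matrix(traj, 20)
F = free_energy_profile(M)
print(np.round(F - F.min(), 2))          # barrier near the middle bins
```

The kinetic information mentioned above comes from the same matrix M, e.g. its implied relaxation timescales per window.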
Hoomans, Ties; Abrams, Keith R; Ament, Andre J H A; Evers, Silvia M A A; Severens, Johan L
2009-10-01
Decision making about resource allocation for guideline implementation to change clinical practice is inevitably undertaken in a context of uncertainty surrounding the cost-effectiveness of both clinical guidelines and implementation strategies. Adopting a total net benefit approach, a model was recently developed to overcome problems with the use of combined ratio statistics when analyzing decision uncertainty. The objective was to demonstrate the stochastic application of the model for informing decision making about the adoption of an audit-and-feedback strategy for implementing a guideline recommending intensive blood glucose control in type 2 diabetes in primary care in the Netherlands. An integrated Bayesian approach to decision modeling and evidence synthesis is adopted, using Markov chain Monte Carlo simulation in WinBUGS. Data on model parameters are gathered from various sources, with the effectiveness of implementation being estimated using pooled, random-effects meta-analysis. Decision uncertainty is illustrated using cost-effectiveness acceptability curves and the cost-effectiveness acceptability frontier. Decisions about whether to adopt intensified glycemic control and whether to adopt audit and feedback alter with the maximum value that decision makers are willing to pay for health gain. By simultaneously incorporating uncertain economic evidence on both the guideline and the implementation strategy, the cost-effectiveness acceptability curves and frontier show an increase in decision uncertainty concerning guideline implementation. The stochastic application in diabetes care demonstrates that the model provides a simple and useful tool for quantifying and exploring the (combined) uncertainty associated with decision making about adopting guidelines and implementation strategies and, therefore, for informing decisions about efficient resource allocation to change clinical practice.
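The mechanics of a cost-effectiveness acceptability curve are simple once simulated incremental costs and effects are available: for each willingness-to-pay value, it is the fraction of draws with positive net monetary benefit. A sketch with hypothetical draws, not the study's posteriors:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000   # posterior / probabilistic-sensitivity-analysis draws

# Hypothetical simulated incremental effects (QALYs) and costs (euros) of
# guideline adoption with audit-and-feedback versus current practice.
d_effect = rng.normal(0.05, 0.04, n)
d_cost = rng.normal(250.0, 150.0, n)

for wtp in (1000, 5000, 10000, 20000):   # willingness to pay per QALY
    # Incremental net monetary benefit: NB = wtp * dE - dC.
    nb = wtp * d_effect - d_cost
    print(f"WTP {wtp:>6}: P(cost-effective) = {(nb > 0).mean():.2f}")
```

Plotting these probabilities against willingness to pay traces the acceptability curve; the frontier tracks the option with the highest expected net benefit at each value.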
Gedik, Ridvan; Zhang, Shengfan; Rainwater, Chase
2017-06-01
A relatively new consideration in proton therapy planning is the requirement that the mix of patients treated from different categories satisfy desired mix percentages. Deviations from these percentages and their impacts on operational capabilities are of particular interest to healthcare planners. In this study, we investigate intelligent ways of admitting patients to a proton therapy facility that maximize the total expected number of treatment sessions (fractions) delivered to patients in a planning period with stochastic patient arrivals and penalize the deviation from the patient mix restrictions. We propose a Markov Decision Process (MDP) model that provides very useful insights in determining the best patient admission policies in the case of an unexpected opening in the facility (i.e., no-shows, appointment cancellations, etc.). In order to overcome the curse of dimensionality for larger and more realistic instances, we propose an aggregate MDP model that is able to approximate optimal patient admission policies using the worded weight aggregation technique. Our models are applicable to healthcare treatment facilities throughout the United States, but are motivated by collaboration with the University of Florida Proton Therapy Institute (UFPTI).
Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model Algorithms
ERIC Educational Resources Information Center
Anderson, John R.
2012-01-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application…
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.
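The spectral bound mentioned above rests on the relaxation time 1/(1 - λ*), where λ* is the second-largest eigenvalue modulus; for reversible chains it sandwiches the mixing time. A sketch on a small chain (a lazy walk on a cycle, not one of marathon's benchmark chains):

```python
import numpy as np

# Hypothetical reversible chain: lazy random walk on a cycle of 12 states.
n = 12
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam = evals[1]                       # second-largest eigenvalue modulus
t_relax = 1.0 / (1.0 - lam)
pi_min = 1.0 / n                     # uniform stationary distribution here
eps = 0.25

# Standard reversible-chain bounds on the mixing time t_mix(eps).
lower = (lam / (1 - lam)) * np.log(1 / (2 * eps))
upper = t_relax * np.log(1 / (eps * pi_min))
print(f"relaxation {t_relax:.1f}; {lower:.1f} <= t_mix(0.25) <= {upper:.1f}")
```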
Emotion and decision-making: affect-driven belief systems in anxiety and depression.
Paulus, Martin P; Yu, Angela J
2012-09-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches that link emotion and decision-making, and focus on research with anxious or depressed individuals to show how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression, and outline the broader implications of this approach.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was twofold. First, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating times to failure. Using a generalized semi-Markov model, we studied a four-element reliability model and were able to demonstrate such sojourn time dependencies. Second, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
Protocol and practice in the adaptive management of waterfowl harvests
Johnson, F.; Williams, K.
1999-01-01
Waterfowl harvest management in North America, for all its success, historically has had several shortcomings, including a lack of well-defined objectives, a failure to account for uncertain management outcomes, and inefficient use of harvest regulations to understand the effects of management. To address these and other concerns, the U.S. Fish and Wildlife Service began implementation of adaptive harvest management in 1995. Harvest policies are now developed using a Markov decision process in which there is an explicit accounting for uncontrolled environmental variation, partial controllability of harvest, and structural uncertainty in waterfowl population dynamics. Current policies are passively adaptive, in the sense that any reduction in structural uncertainty is an unplanned by-product of the regulatory process. A generalization of the Markov decision process permits the calculation of optimal actively adaptive policies, but it is not yet clear how state-specific harvest actions differ between passive and active approaches. The Markov decision process also provides managers the ability to explore optimal levels of aggregation or "management scale" for regulating harvests in a system that exhibits high temporal, spatial, and organizational variability. Progress in institutionalizing adaptive harvest management has been remarkable, but some managers still perceive the process as a panacea, while failing to appreciate the challenges presented by this more explicit and methodical approach to harvest regulation. Technical hurdles include the need to develop better linkages between population processes and the dynamics of landscapes, and to model the dynamics of structural uncertainty in a more comprehensive fashion. From an institutional perspective, agreement on how to value and allocate harvests continues to be elusive, and there is some evidence that waterfowl managers have overestimated the importance of achievement-oriented factors in setting hunting regulations. Indeed, it is these unresolved value judgements, and the lack of an effective structure for organizing debate, that present the greatest threat to adaptive harvest management as a viable means for coping with management uncertainty. Copyright © 1999 by The Resilience Alliance.
Sebastian, Tunny; Jeyaseelan, Visalakshi; Jeyaseelan, Lakshmanan; Anandan, Shalini; George, Sebastian; Bangdiwala, Shrikant I
2018-01-01
Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. The issues related to the estimation of Poisson-hidden Markov models in which the observations are coming from mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix are explained here. These methods were applied to the data on Vibrio cholerae counts reported every month for 11-year span at Christian Medical College, Vellore, India. Using Viterbi algorithm, the best estimate of the state sequence was obtained and hence the transition probability matrix. The mean passage time between the states were estimated. The 95% confidence interval for the mean passage time was estimated via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled as 'Low', 'Moderate' and 'High' with the mean counts of 1.4, 6.6 and 20.2 and the estimated average duration of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
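A log-space Viterbi decoder for a Poisson-HMM can be written compactly; the transition matrix and observation series below are hypothetical, though the state means echo the 'Low'/'Moderate'/'High' estimates reported above:

```python
import numpy as np
from scipy.stats import poisson

def viterbi_poisson(obs, log_pi, log_A, lambdas):
    """Most likely hidden state sequence for a Poisson-HMM (log-space Viterbi)."""
    m, T = len(lambdas), len(obs)
    log_b = poisson.logpmf(np.tile(obs, (m, 1)), np.array(lambdas)[:, None])
    delta = log_pi + log_b[:, 0]
    back = np.zeros((T, m), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + log_A       # trans[i, j]: best path into j via i
        back[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_b[:, t]
    states = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # trace back the best path
        states.append(back[t, states[-1]])
    return states[::-1]

# Hypothetical 3-state chain with 'Low', 'Moderate', 'High' mean counts.
lambdas = [1.4, 6.6, 20.2]
A = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]])
obs = np.array([0, 2, 1, 5, 8, 7, 22, 18, 25, 6, 3, 1])
path = viterbi_poisson(obs, np.log([1/3, 1/3, 1/3]), np.log(A), lambdas)
print(path)   # e.g. mostly 0s, then 1s, then 2s, then back down
```

Mean passage times between the decoded states then follow from the estimated transition matrix, as described above.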
NASA Astrophysics Data System (ADS)
Turner, Sean; Galelli, Stefano; Wilcox, Karen
2015-04-01
Water reservoir systems are often affected by recurring large-scale ocean-atmospheric anomalies, known as teleconnections, that cause prolonged periods of climatological drought. Accurate forecasts of these events -- at lead times in the order of weeks and months -- may enable reservoir operators to take more effective release decisions to improve the performance of their systems. In practice this might mean a more reliable water supply system, a more profitable hydropower plant or a more sustainable environmental release policy. To this end, climate indices, which represent the oscillation of the ocean-atmospheric system, might be gainfully employed within reservoir operating models that adapt the reservoir operation as a function of the climate condition. This study develops a Stochastic Dynamic Programming (SDP) approach that can incorporate climate indices using a Hidden Markov Model. The model simulates the climatic regime as a hidden state following a Markov chain, with the state transitions driven by variation in climatic indices, such as the Southern Oscillation Index. Time series analysis of recorded streamflow data reveals the parameters of separate autoregressive models that describe the inflow to the reservoir under three representative climate states ("normal", "wet", "dry"). These models then define inflow transition probabilities for use in a classic SDP approach. The key advantage of the Hidden Markov Model is that it allows conditioning the operating policy not only on the reservoir storage and the antecedent inflow, but also on the climate condition, thus potentially allowing adaptability to a broader range of climate conditions. In practice, the reservoir operator would effect a water release tailored to a specific climate state based on available teleconnection data and forecasts. The approach is demonstrated on the operation of a realistic, stylised water reservoir with carry-over capacity in South-East Australia. Here teleconnections relating to both the El Niño Southern Oscillation and the Indian Ocean Dipole influence local hydro-meteorological processes; statistically significant lag correlations have already been established. Simulation of the derived operating policies, which are benchmarked against standard policies conditioned on the reservoir storage and the antecedent inflow, demonstrates the potential of the proposed approach. Future research will further develop the model for sensitivity analysis and regional studies examining the economic value of incorporating long range forecasts into reservoir operation.
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods, especially if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model, yet does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
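The expected-time-in-state result the authors describe can be illustrated in a few lines of Python. The 3-state generator, horizon and discount rate below are hypothetical, and the block-matrix trick is a standard matrix-exponential identity rather than the article's own derivation:

    import numpy as np
    from scipy.linalg import expm

    def expected_occupancy(Q, horizon, discount_rate=0.0):
        """Expected (discounted) time in each state; rows index the starting state."""
        n = Q.shape[0]
        Qd = Q - discount_rate * np.eye(n)  # e^{Qd t} = e^{-rt} e^{Qt}
        M = np.zeros((2 * n, 2 * n))
        M[:n, :n] = Qd
        M[:n, n:] = np.eye(n)
        # The upper-right block of expm(M * T) is the integral of e^{Qd s} ds over [0, T].
        return expm(M * horizon)[:n, n:]

    # Healthy -> Chronic -> Dead, with hypothetical transition rates per year.
    Q = np.array([[-0.20,  0.15, 0.05],
                  [ 0.00, -0.30, 0.30],
                  [ 0.00,  0.00, 0.00]])
    print(expected_occupancy(Q, horizon=10.0, discount_rate=0.03))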
Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling
Lucchese, Marcello; Borisenko, Oleg; Mantovani, Lorenzo Giovanni; Cortesi, Paolo Angelo; Cesana, Giancarlo; Adam, Daniel; Burdukova, Elisabeth; Lukyanov, Vasily; Di Lorenzo, Nicola
2017-01-01
Objective To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. Methods A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. Results In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life-years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life-years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. Conclusion In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis. PMID:28601866
Kennedy, Joshua L; Robinson, Derek; Christophel, Jared; Borish, Larry; Payne, Spencer
2014-01-01
The purpose of the study was to determine the age at which initiation of specific subcutaneous immunotherapy (SCIT) becomes more cost-effective than continued lifetime intranasal steroid (NS) therapy in the treatment of allergic rhinitis, with the use of a decision analysis model. A Markov decision analysis model was created for this study. Economic analyses were performed to identify "break-even" points in the treatment of allergic rhinitis with the use of SCIT and NS. Efficacy rates for therapy and cost data were collected from the published literature. Models in which there was only incomplete improvement while receiving SCIT were also evaluated for economic break-even points. The primary perspective of the study was societal. Multiple break-even point curves were obtained corresponding to various clinical scenarios. For patients with seasonal allergic rhinitis requiring NS (i.e., fluticasone) 6 months per year, the age at which initiation of SCIT provides long-term direct cost advantage is less than 41 years. For patients with perennial rhinitis symptoms requiring year-round NS, the cut-off age for SCIT cost-effectiveness increases to 60 years. Hypothetical subjects who require continued NS treatment (50% reduction of previous dosage) while receiving SCIT also display break-even points, whereby it is economically advantageous to consider allergy referral and SCIT, dependent on the cost of the NS prescribed. The age at which SCIT provides economic advantages over NS in the treatment of allergic rhinitis depends on multiple clinical factors. Decision analysis models can assist the physician in accounting for these factors and customize patient counseling with regard to treatment options.
Markov Chain Ontology Analysis (MCOA).
Frost, H Robert; McCray, Alexa T
2012-02-03
Biomedical ontologies have become an increasingly critical lens through which researchers analyze the genomic, clinical and bibliographic data that fuels scientific research. Of particular relevance are methods, such as enrichment analysis, that quantify the importance of ontology classes relative to a collection of domain data. Current analytical techniques, however, remain limited in their ability to handle many important types of structural complexity encountered in real biological systems including class overlaps, continuously valued data, inter-instance relationships, non-hierarchical relationships between classes, semantic distance and sparse data. In this paper, we describe a methodology called Markov Chain Ontology Analysis (MCOA) and illustrate its use through a MCOA-based enrichment analysis application based on a generative model of gene activation. MCOA models the classes in an ontology, the instances from an associated dataset and all directional inter-class, class-to-instance and inter-instance relationships as a single finite ergodic Markov chain. The adjusted transition probability matrix for this Markov chain enables the calculation of eigenvector values that quantify the importance of each ontology class relative to other classes and the associated data set members. On both controlled Gene Ontology (GO) data sets created with Escherichia coli, Drosophila melanogaster and Homo sapiens annotations and real gene expression data extracted from the Gene Expression Omnibus (GEO), the MCOA enrichment analysis approach provides the best performance of comparable state-of-the-art methods. A methodology based on Markov chain models and network analytic metrics can help detect the relevant signal within large, highly interdependent and noisy data sets and, for applications such as enrichment analysis, has been shown to generate superior performance on both real and simulated data relative to existing state-of-the-art approaches.
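At its core, MCOA scores nodes by the stationary distribution of a finite ergodic Markov chain. The sketch below shows that computation on a toy graph; the four-node "ontology", its transition matrix and the damping term are invented for illustration and are not the MCOA software:

    import numpy as np

    def stationary_distribution(P, tol=1e-12, max_iter=10_000):
        """Power iteration for the left eigenvector pi satisfying pi @ P = pi."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])
        for _ in range(max_iter):
            nxt = pi @ P
            if np.abs(nxt - pi).max() < tol:
                return nxt
            pi = nxt
        return pi

    # Toy chain over {classA, classB, instance1, instance2}; a small uniform
    # teleportation term keeps the chain ergodic, PageRank-style.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)
    P = 0.9 * P + 0.1 / 4
    print(stationary_distribution(P))  # one importance score per node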
Bennett, Casey C; Hauser, Kris
2013-01-01
In the modern healthcare system, rapidly expanding costs and complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general-purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths, while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status, and it functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 for AI versus $497 for TAU (where lower is better), while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal decisions even in complex and uncertain environments. Future work is described that outlines potential lines of research and integration of machine learning algorithms for personalized medicine. Copyright © 2012 Elsevier B.V. All rights reserved.
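The belief-state bookkeeping mentioned above can be sketched in a few lines; the two hidden health states, the transition and observation matrices, and the lab-result observations are toy assumptions, not the authors' clinical model:

    import numpy as np

    def belief_update(belief, T, O, obs):
        """Bayes filter step: b'(s') is proportional to O[s', obs] * sum_s T[s, s'] b(s)."""
        predicted = belief @ T           # push the belief through the dynamics
        updated = predicted * O[:, obs]  # weight by the observation likelihood
        return updated / updated.sum()

    T = np.array([[0.8, 0.2],   # hidden states {stable, declining}; rows s -> columns s'
                  [0.1, 0.9]])
    O = np.array([[0.7, 0.3],   # P(observation | hidden state)
                  [0.2, 0.8]])  # observations {0: good lab result, 1: bad lab result}

    b = np.array([0.5, 0.5])
    for obs in [1, 1, 0]:        # two bad lab results, then a good one
        b = belief_update(b, T, O, obs)
        print(b)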
Quantitative risk stratification in Markov chains with limiting conditional distributions.
Chan, David C; Pollett, Philip K; Weinstein, Milton C
2009-01-01
Many clinical decisions require patient risk stratification. The authors introduce the concept of limiting conditional distributions, which describe the equilibrium proportion of surviving patients occupying each disease state in a Markov chain with death. Such distributions can quantitatively describe risk stratification. The authors first establish conditions for the existence of a positive limiting conditional distribution in a general Markov chain and describe a framework for risk stratification using the limiting conditional distribution. They then apply their framework to a clinical example of a treatment indicated for high-risk patients, first to infer the risk of patients selected for treatment in clinical trials and then to predict the outcomes of expanding treatment to other populations of risk. For the general chain, a positive limiting conditional distribution exists only if patients in the earliest state have the lowest combined risk of progression or death. The authors show that in their general framework, outcomes and population risk are interchangeable. For the clinical example, they estimate that previous clinical trials have selected the upper quintile of patient risk for this treatment, but they also show that expanded treatment would weakly dominate this degree of targeted treatment, and universal treatment may be cost-effective. Limiting conditional distributions exist in most Markov models of progressive diseases and are well suited to represent risk stratification quantitatively. This framework can characterize patient risk in clinical trials and predict outcomes for other populations of risk.
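As a rough numerical illustration of the central object here (not the authors' model), the limiting conditional distribution of a chain with death can be computed as the dominant left eigenvector of the transient-to-transient block; the 3-state disease example below is hypothetical:

    import numpy as np

    def limiting_conditional(P_transient):
        """Left Perron eigenvector of the substochastic transient block, normalized."""
        vals, vecs = np.linalg.eig(P_transient.T)
        k = np.argmax(vals.real)
        v = np.abs(vecs[:, k].real)
        return v / v.sum()

    # Mild -> Moderate -> Severe; rows need not sum to 1, and the deficit
    # is the per-cycle probability of death from that state.
    P = np.array([[0.90, 0.08, 0.01],
                  [0.02, 0.88, 0.07],
                  [0.00, 0.05, 0.85]])
    print(limiting_conditional(P))  # equilibrium mix of survivors across states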
Music and Video Gaming during Breaks: Influence on Habitual versus Goal-Directed Decision Making.
Liu, Shuyan; Schad, Daniel J; Kuschpel, Maxim S; Rapp, Michael A; Heinz, Andreas
2016-01-01
Different systems for habitual versus goal-directed control are thought to underlie human decision-making. Working memory is known to shape these decision-making systems and their interplay, and is known to support goal-directed decision making even under stress. Here, we investigated if and how decision systems are differentially influenced by breaks filled with diverse everyday life activities known to modulate working memory performance. We used a within-subject design where young adults listened to music and played a video game during breaks interleaved with trials of a sequential two-step Markov decision task, designed to assess habitual as well as goal-directed decision making. Based on a neurocomputational model of task performance, we observed that for individuals with a rather limited working memory capacity video gaming as compared to music reduced reliance on the goal-directed decision-making system, while a rather large working memory capacity prevented such a decline. Our findings suggest differential effects of everyday activities on key decision-making processes.
Markov chain model for demersal fish catch analysis in Indonesia
NASA Astrophysics Data System (ADS)
Firdaniza; Gusriani, N.
2018-03-01
As an archipelagic country, Indonesia has considerable potential fishery resources. One of the fish resources with high economic value is demersal fish, which live on or near the muddy seabed and are found throughout the Indonesian seas. Demersal fish production in each of Indonesia's Fisheries Management Areas (FMAs) varies each year. In this paper we discuss a Markov chain model for demersal fish yield analysis across all of Indonesia's FMAs. Data on demersal fish catches in every FMA in 2005-2014 were obtained from the Directorate of Capture Fisheries. From these data, a transition probability matrix was determined by counting transitions between catches below and above the median. The resulting Markov chain was ergodic, so its limiting probabilities could be determined. Predicted yields were obtained by combining the limiting probabilities with the average catches below and above the median. The results showed that for 2018 and in the long term, demersal catches in most FMAs were below the median value.
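The paper's recipe is simple enough to reconstruct. The sketch below uses made-up catch figures (tonnes) in place of the Directorate of Capture Fisheries data to show the below/above-median chain, its limiting distribution, and the resulting long-run forecast:

    import numpy as np

    catch = np.array([410, 455, 430, 520, 560, 505, 470, 590, 610, 480])  # hypothetical
    states = (catch > np.median(catch)).astype(int)  # 0: below median, 1: above

    C = np.zeros((2, 2))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1
    P = C / C.sum(axis=1, keepdims=True)  # estimated transition matrix

    # Limiting distribution of a 2-state ergodic chain in closed form.
    p01, p10 = P[0, 1], P[1, 0]
    pi = np.array([p10, p01]) / (p01 + p10)

    means = np.array([catch[states == 0].mean(), catch[states == 1].mean()])
    print("P =", P)
    print("pi =", pi, " long-run forecast =", pi @ means)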
ERIC Educational Resources Information Center
Almond, Russell G.
2007-01-01
Over the course of instruction, instructors generally collect a great deal of information about each student. Integrating that information intelligently requires models for how a student's proficiency changes over time. Armed with such models, instructors can "filter" the data--more accurately estimate the student's current proficiency…
Integrated Thermal Response Modeling System For Hypersonic Entry Vehicles
NASA Technical Reports Server (NTRS)
Chen, Y.-K.; Milos, F. S.; Partridge, Harry (Technical Monitor)
2000-01-01
Exact Solutions to Time-dependent MDPs
NASA Technical Reports Server (NTRS)
Boyan, Justin A.; Littman, Michael L.
2000-01-01
We describe an extension of the Markov decision process model in which a continuous time dimension is included in the state space. This allows for the representation and exact solution of a wide range of problems in which transitions or rewards vary over time. We examine problems based on route planning with public transportation and telescope observation scheduling.
Testing the Efficiency of Markov Chain Monte Carlo with People Using Facial Affect Categories
ERIC Educational Resources Information Center
Martin, Jay B.; Griffiths, Thomas L.; Sanborn, Adam N.
2012-01-01
Exploring how people represent natural categories is a key step toward developing a better understanding of how people learn, form memories, and make decisions. Much research on categorization has focused on artificial categories that are created in the laboratory, since studying natural categories defined on high-dimensional stimuli such as…
[Decision modeling for economic evaluation of health technologies].
de Soárez, Patrícia Coelho; Soares, Marta Oliveira; Novaes, Hillegonda Maria Dutilh
2014-10-01
Most economic evaluations that inform decision-making on the incorporation and financing of health technologies use decision models to assess the costs and benefits of the compared strategies. Despite the large number of economic evaluations conducted in Brazil, there is a pressing need for an in-depth methodological study of the types of decision models and their applicability in our setting. The objective of this literature review is to contribute to the knowledge and use of decision models in the national context of economic evaluations of health technologies. This article presents general definitions about models and concerns with their use; it describes the main models: decision trees, Markov chains, micro-simulation, and discrete-event and dynamic simulation; it discusses the elements involved in the choice of model; and it exemplifies the models addressed in national economic evaluation studies of diagnostic, therapeutic and preventive technologies and health programs.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximate policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions, and the learning control performance can be improved for a variety of parameter settings.
Timing of testing and treatment for asymptomatic diseases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kırkızlar, Eser; Faissol, Daniel M.; Griffin, Paul M.
2010-07-01
Many papers in the medical literature analyze the cost-effectiveness of screening for diseases by comparing a limited number of a priori testing policies under estimated problem parameters. However, this may be insufficient to determine the best timing of the tests or incorporate changes over time. In this paper, we develop and solve a Markov Decision Process (MDP) model for a simple class of asymptomatic diseases in order to provide the building blocks for analysis of a more general class of diseases. We provide a computationally efficient method for determining a cost-effective dynamic intervention strategy that takes into account (i) the results of the previous test for each individual and (ii) the change in the individual's behavior based on awareness of the disease. We demonstrate the usefulness of the approach by applying the results to screening decisions for Hepatitis C (HCV) using medical data, and compare our findings to current HCV screening recommendations.
Decision-Making in Critical Limb Ischemia: A Markov Simulation.
Deutsch, Aaron J; Jain, C Charles; Blumenthal, Kimberly G; Dickinson, Mark W; Neilan, Anne M
2017-11-01
Critical limb ischemia (CLI) is a feared complication of peripheral vascular disease that often requires surgical management and may require amputation of the affected limb. We developed a decision model to inform clinical management for a 63-year-old woman with CLI and multiple medical comorbidities, including advanced heart failure and diabetes. We developed a Markov decision model to evaluate 4 strategies: amputation, surgical bypass, endovascular therapy (e.g. stent or revascularization), and medical management. We measured the impact of parameter uncertainty using 1-way, 2-way, and multiway sensitivity analyses. In the base case, endovascular therapy yielded similar discounted quality-adjusted life months (26.50 QALMs) compared with surgical bypass (26.34 QALMs). Both endovascular and surgical therapies were superior to amputation (18.83 QALMs) and medical management (11.08 QALMs). This finding was robust to a wide range of periprocedural mortality weights and was most sensitive to long-term mortality associated with endovascular and surgical therapies. Utility weights were not stratified by patient comorbidities; nonetheless, our conclusion was robust to a range of utility weight values. For a patient with CLI, endovascular therapy and surgical bypass provided comparable clinical outcomes. However, this finding was sensitive to long-term mortality rates associated with each procedure. Both endovascular and surgical therapies were superior to amputation or medical management in a range of scenarios. Copyright © 2017 Elsevier Inc. All rights reserved.
Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.
Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng
2018-01-01
In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovation, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model that identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of the Markov chain process in probabilistic life-cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
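The life-cycle costing idea condenses to a linear system over the chain; the three vehicle states, per-flight costs and discount factor below are invented numbers, not the paper's data:

    import numpy as np

    P = np.array([[0.85, 0.12, 0.03],   # per-flight transitions between
                  [0.90, 0.08, 0.02],   # {operational, minor repair, major overhaul}
                  [0.95, 0.03, 0.02]])
    c = np.array([1.0, 4.0, 25.0])      # support cost per flight in each state ($M)
    beta = 0.98                          # per-flight discount factor

    # v = c + beta * P @ v  =>  v = (I - beta * P)^{-1} c
    v = np.linalg.solve(np.eye(3) - beta * P, c)
    print(v)  # expected discounted operations-and-support cost from each starting state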
AN OPTIMAL MAINTENANCE MANAGEMENT MODEL FOR AIRPORT CONCRETE PAVEMENT
NASA Astrophysics Data System (ADS)
Shimomura, Taizo; Fujimori, Yuji; Kaito, Kiyoyuki; Obama, Kengo; Kobayashi, Kiyoshi
In this paper, an optimal management model is formulated for the performance-based rehabilitation/maintenance contract for airport concrete pavement, whereby two types of life-cycle cost risk, i.e., ground consolidation risk and concrete depreciation risk, are explicitly considered. A non-homogeneous Markov chain model is formulated to represent the deterioration processes of concrete pavement, which are conditional upon the ground consolidation processes. An optimal non-homogeneous Markov decision model with multiple types of risk is presented to design the optimal rehabilitation/maintenance plans, together with a methodology to revise those plans based upon monitoring data via Bayesian updating rules. The validity of the methodology presented in this paper is examined through case studies carried out for the H airport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m
2010-04-15
This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
Standfield, L B; Comans, T A; Scuffham, P A
2017-01-01
To empirically compare Markov cohort modeling (MM) and discrete event simulation (DES) with and without dynamic queuing (DQ) for cost-effectiveness (CE) analysis of a novel method of health services delivery where capacity constraints predominate. A common dataset comparing usual orthopedic care (UC) to an orthopedic physiotherapy screening clinic and multidisciplinary treatment service (OPSC) was used to develop an MM and a DES without (DES-no-DQ) and with DQ (DES-DQ). Model results were then compared in detail. The MM predicted an incremental CE ratio (ICER) of $495 per additional quality-adjusted life-year (QALY) for OPSC over UC. The DES-no-DQ showed OPSC dominating UC; the DES-DQ generated an ICER of $2342 per QALY. The MM and DES-no-DQ ICER estimates differed because the MM has implicit delays built into its structure as a result of fixed cycle lengths, which are not a feature of DES. The non-DQ models assume that queues are at a steady state. Conversely, queues in the DES-DQ develop flexibly with supply and demand for resources, in this case leading to different estimates of resource use and CE. The choice of MM or DES (with or without DQ) would not alter the reimbursement of OPSC, as it was highly cost-effective compared to UC in all analyses. However, the modeling method may influence decisions where ICERs are closer to the CE acceptability threshold, or where capacity constraints and DQ are important features of the system. In these cases, DES-DQ would be the preferred modeling technique to avoid incorrect resource allocation decisions.
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst-case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
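For reference, the quantity zipHMM accelerates is the standard scaled forward recursion, shown below in plain Python with toy parameters (this is the textbook algorithm, not the zipHMM implementation):

    import numpy as np

    def forward_loglik(obs, init, trans, emit):
        """Scaled forward algorithm; returns log P(obs | model) in O(n m^2) time."""
        alpha = init * emit[:, obs[0]]
        loglik = np.log(alpha.sum())
        alpha = alpha / alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            s = alpha.sum()
            loglik += np.log(s)
            alpha = alpha / s
        return loglik

    init = np.array([0.6, 0.4])
    trans = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
    emit = np.array([[0.7, 0.2, 0.1],   # P(symbol | state), 3 symbols
                     [0.1, 0.3, 0.6]])
    print(forward_loglik([0, 1, 2, 2, 0], init, trans, emit))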
Integration of gene normalization stages and co-reference resolution using a Markov logic network.
Dai, Hong-Jie; Chang, Yen-Ching; Tsai, Richard Tzong-Han; Hsu, Wen-Lian
2011-09-15
Gene normalization (GN) is the task of normalizing a textual gene mention to a unique gene database ID. Traditional top-performing GN systems usually need to consider several constraints to make decisions in the normalization process, including filtering out false positives or disambiguating an ambiguous gene mention, to improve system performance. However, these constraints are usually executed in several separate stages and cannot use each other's input/output interactively. In this article, we propose a novel approach that employs a Markov logic network (MLN) to model the constraints used in the GN task. Firstly, we show how various constraints can be formulated and combined in an MLN. Secondly, we are the first to apply two main concepts of co-reference resolution, discourse salience in centering theory and transitivity, to GN models. Furthermore, to make our results more relevant to developers of information extraction applications, we adopt the instance-based precision/recall/F-measure (PRF) in addition to the article-wide PRF to assess system performance. Experimental results show that our system outperforms baseline and state-of-the-art systems under two evaluation schemes. Through further analysis, we have found several unexplored challenges in the GN task. hongjie@iis.sinica.edu.tw Supplementary data are available at Bioinformatics online.
Markov and non-Markov processes in complex systems by the dynamical information entropy
NASA Astrophysics Data System (ADS)
Yulmetyev, R. M.; Gafarov, F. M.
1999-12-01
We consider Markov and non-Markov processes in complex systems by the dynamical information Shannon entropy (DISE) method. The influence and important role of the two mutually dependent channels of entropy, alternation (creation or generation of correlation) and anti-correlation (destruction or annihilation of correlation), are discussed. The developed method has been used for the analysis of complex systems of various natures: slow neutron scattering in liquid cesium; psychology (short-term numeral and pattern human memory, and the effect of stress on the dynamical tapping test); the random dynamics of RR intervals in the human ECG (the problem of diagnosing various diseases of the human cardiovascular system); and the chaotic dynamics of the parameters of financial markets and ecological systems.
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
Nagase, Satoshi; Iyoda, Tomokazu; Kanno, Hiroshi; Akase, Tomohide; Arakawa, Ichiro; Inoue, Tadao; Uetsuka, Yoshio
2016-10-01
Phase III clinical trials have confirmed that the S-1 plus oxaliplatin (SOX) regimen is non-inferior to the capecitabine plus oxaliplatin (COX) regimen in the treatment of metastatic colorectal cancer. On the basis of these findings, we compared, using a clinical decision analysis-based approach, the cost-effectiveness of the SOX and COX regimens. Herein, we simulated the expected effects and costs of the SOX and COX regimens using a Markov model. Clinical data were obtained from Hong's 2012 report. The cost data comprised the costs for pharmacist labor, material, inspection, and treatment of adverse events, as well as the total cost of care at the advanced stage. The results showed that the expected costs of the SOX and COX regimens were 1,538,330 yen and 1,429,596 yen, respectively, with expected survival of 29.18 months and 28.63 months, respectively. The incremental cost-effectiveness ratio of the SOX regimen was 197,698 yen/month; thus, the SOX regimen was found to be more cost-effective than the COX regimen.
ERIC Educational Resources Information Center
Towne, Douglas M.; And Others
This final report reviews research performed in two major areas--instructional theory, and development of a generalized maintenance trainer simulator. Five related research projects were carried out in the domain of instructional theory: (1) the effects of visual analogies of abstract concepts, (2) Markov decision models for instructional sequence…
Gariepy, Aileen M; Creinin, Mitchell D; Schwarz, Eleanor B; Smith, Kenneth J
2011-08-01
To estimate the probability of successful sterilization after a hysteroscopic or laparoscopic sterilization procedure. An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. In the base case analysis, the proportion of women having a successful sterilization procedure on the first attempt is 99% for laparoscopic sterilization, 88% for hysteroscopic sterilization in the operating room, and 87% for hysteroscopic sterilization in the office. The probability of having a successful sterilization procedure within 1 year is 99% with laparoscopic sterilization, 95% for hysteroscopic sterilization in the operating room, and 94% for hysteroscopic sterilization in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopically but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within 1 year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization.
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
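A compact sketch of the dynamic-programming machinery named above (value iteration on a small finite MDP) follows; the three-state "composition" MDP, its transitions and costs are invented for illustration and are not the paper's benchmark:

    import numpy as np

    def value_iteration(P, R, gamma=0.95, tol=1e-8):
        """P[a, s, s'] transition tensor, R[s, a] rewards; returns values and a greedy policy."""
        V = np.zeros(P.shape[1])
        while True:
            Q = R + gamma * np.einsum("ast,t->sa", P, V)  # Q[s, a]
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return V_new, Q.argmax(axis=1)
            V = V_new

    # States: {request, partial plan, done}; action 0 invokes a cheap service,
    # action 1 a costlier but faster one; state 2 (done) is absorbing.
    P = np.array([
        [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
        [[0.2, 0.8, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],
    ])
    R = np.array([[-1.0, -2.5],
                  [-1.0, -2.5],
                  [ 0.0,  0.0]])  # per-call costs; the goal is to finish cheaply
    V, policy = value_iteration(P, R)
    print(V, policy)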
Robust path planning for flexible needle insertion using Markov decision processes.
Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong
2018-05-11
Flexible needles have the potential to navigate accurately to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and a higher probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer flexible needles within soft phantom tissues and achieves high adaptability in computer simulation.
Hosaka, Hiromi; Aoyagi, Kakuro; Kaga, Yoshimi; Kanemura, Hideaki; Sugita, Kanji; Aihara, Masao
2017-08-01
Autonomic nervous system activity is recognized as a major component of emotional responses. Future reward/punishment expectations depend upon the process of decision making in the frontal lobe, which is considered to play an important role in executive function. The aim of this study was to investigate the relationship between autonomic responses and decision making during reinforcement tasks using sympathetic skin responses (SSRs). Nine adult and 9 juvenile (mean age, 10.2 years) volunteers were enrolled in this study. SSRs were measured during the Markov decision task (MDT), a reinforcement task in which subjects must endure a small immediate loss to ultimately get a large reward. The subjects completed three sets of the task, and their scores were assessed and evaluated. All adults showed gradually increasing scores for the MDT from the first to the third set. As the trial progressed from the first to the second set in adults, SSR appearance ratios increased markedly for both punishment and reward expectations. In comparison with adults, children showed decreasing scores from the first to the second set. There were no significant inter-target differences in the SSR appearance ratio in the first and second sets in children. In the third set, the SSR appearance ratio for reward expectations was higher than that in the neutral condition. In reinforcement tasks such as the MDT, autonomic responses play an important role in decision making. We assume that SSRs are elicited during efficient decision-making tasks associated with future reward/punishment expectations, which demonstrates the importance of autonomic function. In contrast, in children around the age of 10 years, the autonomic system does not react as an organized response specific to reward/punishment expectations. This suggests the immaturity of the future reward/punishment expectation process in children. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all the ideas for sequence comparison try to use the information in the k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our achievements with those based on alignment or alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures are suited for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
Modular techniques for dynamic fault-tree analysis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
Carnero, María Carmen; Gómez, Andrés
2016-04-23
Healthcare organizations have far greater maintenance needs for their medical equipment than other organizations, as much of this equipment is used directly with patients. However, the literature on asset management in healthcare organizations is very limited. The aim of this research is to provide a more rational application of maintenance policies, leading to an increase in quality of care. This article describes a multicriteria decision-making approach which integrates Markov chains with the multicriteria Measuring Attractiveness by a Categorical Based Evaluation Technique (MACBETH), to facilitate the best choice of combination of maintenance policies by using the judgements of a multi-disciplinary decision group. The proposed approach takes into account the level of acceptance that a given alternative would have among professionals. It also takes into account criteria related to cost, quality of care and impact of care cover. This multicriteria approach is applied to four dialysis subsystems: patients infected with hepatitis C, patients infected with hepatitis B, acute patients and chronic patients; in all cases, the maintenance strategy obtained consists of applying corrective and preventive maintenance plus two reserve machines. The added value in decision-making practices from this research comes from: (i) integrating the use of Markov chains to obtain the alternatives to be assessed by a multicriteria methodology; (ii) proposing the use of MACBETH to make rational decisions on asset management in healthcare organizations; (iii) applying the multicriteria approach to select a set or combination of maintenance policies in four dialysis subsystems of a healthcare organization. In the multicriteria decision-making approach proposed, economic criteria have been used together with criteria related to the quality of care desired for patients (availability) and the acceptance that each alternative would have given the maintenance and healthcare resources of the organization, with the inclusion of a decision-making group. This approach is better suited to actual healthcare organization practice and, depending on the subsystem analysed, introduces improvements that are not included in normal maintenance policies; in this way, not only have different maintenance policies been suggested, but also alternatives that, in each case and according to viability, provide a more complete decision tool for the maintenance manager.
Analysis and design of a second-order digital phase-locked loop
NASA Technical Reports Server (NTRS)
Blasche, P. R.
1979-01-01
A specific second-order digital phase-locked loop (DPLL) was modeled as a first-order Markov chain with alternatives. From the matrix of transition probabilities of the Markov chain, the steady-state phase error of the DPLL was determined. In a similar manner the loop's response was calculated for a fading input. Additionally, a hardware DPLL was constructed and tested to provide a comparison to the results obtained from the Markov chain model. In all cases tested, good agreement was found between the theoretical predictions and the experimental data.
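The steady-state computation referred to above is a small linear-algebra exercise; the sketch below (with a hypothetical 5-level quantized phase error, not the paper's loop parameters) solves pi P = pi with the normalization constraint appended:

    import numpy as np

    def steady_state(P):
        """Stationary distribution from the overdetermined system [P^T - I; 1] pi = [0; 1]."""
        n = P.shape[0]
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # Phase error quantized to 5 levels, drifting toward lock at the center.
    P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
                  [0.1, 0.5, 0.4, 0.0, 0.0],
                  [0.0, 0.2, 0.6, 0.2, 0.0],
                  [0.0, 0.0, 0.4, 0.5, 0.1],
                  [0.0, 0.0, 0.0, 0.5, 0.5]])
    pi = steady_state(P)
    phase = np.array([-2, -1, 0, 1, 2])  # error in quantization steps
    print(pi, "mean steady-state phase error:", pi @ phase)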
Quantum decision-maker theory and simulation
NASA Astrophysics Data System (ADS)
Zak, Michail; Meyers, Ronald E.; Deacon, Keith S.
2000-07-01
A quantum device simulating the human decision making process is introduced. It consists of quantum recurrent nets generating stochastic processes which represent the motor dynamics, and of classical neural nets describing the evolution of probabilities of these processes which represent the mental dynamics. The autonomy of the decision making process is achieved by a feedback from the mental to motor dynamics which changes the stochastic matrix based upon the probability distribution. This feedback replaces unavailable external information by an internal knowledge base stored in the mental model in the form of probability distributions. As a result, the coupled motor-mental dynamics is described by a nonlinear version of Markov chains which can decrease entropy without an external source of information. Applications to common sense based decisions as well as to evolutionary games are discussed. An example exhibiting self-organization is computed using quantum computer simulation. Force-on-force and mutual aircraft engagements using the quantum decision maker dynamics are considered.
ERIC Educational Resources Information Center
Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)
Improving Markov Chain Models for Road Profiles Simulation via Definition of States
2012-04-01
Guédon, Yann; d'Aubenton-Carafa, Yves; Thermes, Claude
2006-03-01
The most commonly used models for analysing local dependencies in DNA sequences are (high-order) Markov chains. Incorporating knowledge about the possible grouping of the nucleotides makes it possible to define dedicated sub-classes of Markov chains. The problem of formulating lumpability hypotheses for a Markov chain is therefore addressed. In the classical approach to lumpability, this problem can be formulated as the determination of an appropriate state space (smaller than the original state space) such that the lumped chain defined on this state space retains the Markov property. We propose a different perspective on lumpability where the state space is fixed and the partitioning of this state space is represented by a one-to-many probabilistic function within a two-level stochastic process. Three nested classes of lumped processes can be defined in this way as sub-classes of first-order Markov chains. These lumped processes enable parsimonious reparameterizations of Markov chains that help to reveal relevant partitions of the state space. Characterizations of the lumped processes on the original transition probability matrix are derived. Different model selection methods relying either on hypothesis testing or on penalized log-likelihood criteria are presented, as well as extensions to lumped processes constructed from high-order Markov chains. The relevance of the proposed approach to lumpability is illustrated by the analysis of DNA sequences. In particular, the use of lumped processes makes it possible to highlight differences between intronic sequences and gene untranslated region sequences.
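The strong-lumpability condition underlying the classical approach (due to Kemeny and Snell) is easy to state in code: within each block of the partition, every state must have the same total transition probability into every block. A toy check on a 4-letter chain lumped into purines/pyrimidines follows, with invented transition probabilities constructed to satisfy the condition:

    import numpy as np

    def is_strongly_lumpable(P, partition, tol=1e-12):
        for block in partition:
            for target in partition:
                row_sums = P[np.ix_(block, target)].sum(axis=1)
                if np.ptp(row_sums) > tol:  # rows within a block must agree
                    return False
        return True

    def lumped_matrix(P, partition):
        """Transition matrix of the lumped chain (valid when lumpable)."""
        return np.array([[P[np.ix_(a, b)].sum(axis=1)[0] for b in partition]
                         for a in partition])

    # States {A, G, C, T}; rows constructed so that {A, G} vs {C, T} lumps.
    P = np.array([[0.40, 0.20, 0.30, 0.10],
                  [0.30, 0.30, 0.10, 0.30],
                  [0.25, 0.35, 0.20, 0.20],
                  [0.50, 0.10, 0.15, 0.25]])
    partition = [[0, 1], [2, 3]]
    print(is_strongly_lumpable(P, partition))
    print(lumped_matrix(P, partition))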
NASA Astrophysics Data System (ADS)
Mitsutake, Ayori; Takano, Hiroshi
2015-09-01
It is important to extract reaction coordinates or order parameters from protein simulations in order to investigate the local minimum-energy states and the transitions between them. The most popular method to obtain such data is principal component analysis, which extracts modes of large conformational fluctuations around an average structure. We recently applied relaxation mode analysis for protein systems, which approximately estimates the slow relaxation modes and times from a simulation and enables investigations of the dynamic properties underlying the structural fluctuations of proteins. In this study, we apply this relaxation mode analysis to extract reaction coordinates for a system in which there are large conformational changes such as those commonly observed in protein folding/unfolding. We performed a 750-ns simulation of chignolin protein near its folding transition temperature and observed many transitions between the most stable, misfolded, intermediate, and unfolded states. We then applied principal component analysis and relaxation mode analysis to the system. In the relaxation mode analysis, we could automatically extract good reaction coordinates. The free-energy surfaces provide a clearer understanding of the transitions not only between local minimum-energy states but also between the folded and unfolded states, even though the simulation involved large conformational changes. Moreover, we propose a new analysis method called Markov state relaxation mode analysis. We applied the new method to states with slow relaxation, which are defined by the free-energy surface obtained in the relaxation mode analysis. Finally, the relaxation times of the states obtained with a simple Markov state model and the proposed Markov state relaxation mode analysis are compared and discussed.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with bathtub-shaped hazard function have been widely accepted in the field of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, which is capable of exhibiting an increasing as well as a bathtub-shaped hazard, is studied. This article makes a Bayesian study of the same model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inferential interest focuses on the posterior distribution of non-linear functions of the parameters. The model has also been extended to include continuous explanatory variables, and the R codes are well illustrated. Two real data sets are considered for illustrative purposes.
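A minimal sketch of how such posterior simulation can look, assuming the Smith-Bain form of the exponential power density f(t) = a l^a t^(a-1) exp((lt)^a) exp(1 - exp((lt)^a)), a flat prior on (a, l) sampled on the log scale, and synthetic complete lifetimes; this is a plain random-walk Metropolis sampler in Python rather than the paper's R code.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_lik(a, l, t):
    # Exponential power log-density, summed over the (complete) sample.
    z = (l * t) ** a
    return np.sum(np.log(a) + a * np.log(l) + (a - 1) * np.log(t)
                  + z + 1 - np.exp(z))

def log_post(theta, t):
    a, l = np.exp(theta)                      # sample on the log scale
    return log_lik(a, l, t) + theta.sum()     # Jacobian of the transform

t = rng.weibull(0.8, size=50) * 2.0           # synthetic lifetimes
theta = np.zeros(2)
chain = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_post(prop, t) - log_post(theta, t):
        theta = prop
    chain.append(np.exp(theta))
print(np.mean(chain[1000:], axis=0))          # posterior means of (a, l)
```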
Cabrera, V E
2012-08-01
This study contributes to the research literature by providing a new formulation for the cow replacement problem, and it also contributes to the Extension deliverables by providing a user-friendly decision support system tool that would more likely be adopted and applied for practical decision making. The cow value, its related values of a new pregnancy and a pregnancy loss, and their associated replacement policies determine profitability in dairy farming. One objective of this study was to present a simple, interactive, dynamic, and robust formulation of the cow value and the replacement problem, including expectancy of the future production of the cow and the genetic gain of the replacement. The hypothesis of this study, which was confirmed, was that all the above requirements could be achieved by using a Markov chain algorithm. The Markov chain model allowed (1) calculation of a forward expected value of a studied cow and its replacement; (2) use of a single model (the Markov chain) to calculate both the replacement policies and the herd statistics; (3) use of a predefined, preestablished farm reproductive replacement policy; (4) inclusion of a farmer's assessment of the expected future performance of a cow; (5) inclusion of a farmer's assessment of genetic gain with a replacement; and (6) use of a simple spreadsheet or an online system to implement the decision support system. Results clearly demonstrated that the decision policies found with the Markov chain model were consistent with more complex dynamic programming models. The final user-friendly decision support tool is available at http://dairymgt.info/ → Tools → The Economic Value of a Dairy Cow. This tool calculates the cow value instantaneously and is highly interactive, dynamic, and robust. When a Wisconsin dairy farm was studied using the model, the solution policy called for replacing nonpregnant cows 11 mo after calving or months in milk (MIM) if in the first lactation and 9 MIM if in later lactations. The cow value for an average second-lactation cow was as follows: (1) when nonpregnant, (a) $897 in MIM = 1 and (b) $68 in MIM = 8; (2) when the cow just became pregnant, (a) $889 for a pregnancy in MIM = 3 and (b) $298 for a pregnancy in MIM = 8; and (3) the value of a pregnancy loss when a cow became pregnant in MIM = 5 was (a) $221 when the loss was in the first month of pregnancy and (b) $897 when the loss was in the ninth month of pregnancy. The cow value indicated pregnant cows should be kept. The expected future production of a cow with respect to a similar average cow was an important determinant in the cow replacement decision. The expected production in the rest of the lactation was more important for nonpregnant cows, and the expected production in successive lactations was more important for pregnant cows. For a cow with MIM = 16 that was 6 mo pregnant, a 120% expected milk production in the present lactation or in successive lactations resulted in 1.52 and 6.48 times, respectively, the value of an average-production cow. The cow value decreased by $211 for every 1 percentage point of expected genetic gain of the replacement.
A break-even analysis of the cow value with respect to expected milk production of an average second-parity cow indicated that (1) nonpregnant cows in MIM = 1 and 8 could still remain in the herd if they produced at least 84 and 98% in the present lactation or if they produced at least 78 and 97% in future lactations, respectively; and (2) cows becoming pregnant in MIM = 5 would require at least 64% of milk production in the rest of the lactation or 93% in successive lactations to remain in the herd. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
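The forward expected-value idea can be sketched in a few lines. The three-state chain, rewards, and discount factor below are invented stand-ins, not the paper's Wisconsin parameterization: each state's value is the expected discounted net revenue generated under a fixed replacement policy, obtained by solving the linear system v = r + beta * P v.

```python
import numpy as np

# States: 0 = early lactation, 1 = late lactation, 2 = replaced (heifer enters).
P = np.array([[0.0, 0.9, 0.1],    # early -> late, small forced-replacement risk
              [0.0, 0.6, 0.4],    # late  -> stays late or is replaced
              [0.8, 0.0, 0.2]])   # a replacement restarts the cycle
r = np.array([250.0, 120.0, -1300.0])  # monthly net revenue; replacement cost
beta = 0.995                           # monthly discount factor

# Expected discounted value: v = r + beta * P v  =>  (I - beta P) v = r
v = np.linalg.solve(np.eye(3) - beta * P, r)
print(v)  # forward expected value of starting in each state
```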
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H
2015-01-01
With intensively collected longitudinal data, recent advances in the experience-sampling method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contains feedback loops, this paper proposes the utility of an extended hidden Markov model to model the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet, & Dube, 2011) that observed 160 participants' food consumption and momentary emotions 6 times per day over 10 days. Focusing on the analysis of the feedback loop between mood and meal-healthiness decisions, the proposed reciprocal Markov model (RMM) can accommodate both hidden ("general" emotional states: positive vs. negative state) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrated the reciprocal chains of meal consumption and mood as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters.
Decision-Theoretic Control of Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Washington, Richard; Bernstein, Daniel S.; Mouaddib, Abdel-Illah; Morris, Robert (Technical Monitor)
2003-01-01
Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We describe two decision-theoretic approaches to maximize the productivity of planetary rovers: one based on adaptive planning and the other on hierarchical reinforcement learning. Both approaches map the problem into a Markov decision problem and attempt to solve a large part of the problem off-line, exploiting the structure of the plan and independence between plan components. We examine the advantages and limitations of these techniques and their scalability.
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
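A minimal sketch of the kind of model being compared, assuming a discrete-time, discrete-state stochastic SIR formulation with binomial process error and binomial reporting (observation) error; all parameter values are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(beta, gamma, rho, S0, I0, steps):
    S, I, N = S0, I0, S0 + I0
    reported = []
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * I / N)          # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)             # process error
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        reported.append(rng.binomial(new_inf, rho))  # observation (reporting) error
    return np.array(reported)

print(simulate(beta=0.5, gamma=0.2, rho=0.6, S0=990, I0=10, steps=20))
```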
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1995-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
Hidden Markov models for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic J. (Inventor)
1993-01-01
The invention is a system failure monitoring method and apparatus which learns the symptom-fault mapping directly from training data. The invention first estimates the state of the system at discrete intervals in time. A feature vector x of dimension k is estimated from sets of successive windows of sensor data. A pattern recognition component then models the instantaneous estimate of the posterior class probability given the features, p(w_i | x), 1 ≤ i ≤ m. Finally, a hidden Markov model is used to take advantage of temporal context and estimate class probabilities conditioned on recent past history. In this hierarchical pattern of information flow, the time series data is transformed and mapped into a categorical representation (the fault classes) and integrated over time to enable robust decision-making.
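The temporal-integration step that both patent records describe can be sketched as a forward recursion in which the recognizer's instantaneous class probabilities act as scaled likelihoods; the transition matrix, class prior, and recognizer outputs below are invented for illustration.

```python
import numpy as np

def hmm_smooth(inst_probs, A, prior):
    """inst_probs[t, i] ~ p(w_i | x_t); b_t = p/prior acts as a scaled likelihood."""
    alpha = prior.copy()
    out = []
    for p in inst_probs:
        alpha = (A.T @ alpha) * (p / prior)   # predict with A, then correct
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

A = np.array([[0.98, 0.02],                   # states: 0 = healthy, 1 = fault
              [0.05, 0.95]])                  # sticky transitions favor persistence
prior = np.array([0.9, 0.1])
inst = np.array([[0.8, 0.2], [0.7, 0.3], [0.3, 0.7], [0.25, 0.75]])
print(hmm_smooth(inst, A, prior))             # fault probability rises with evidence
```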
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
Cost-Utility Analysis of Bariatric Surgery in Italy: Results of Decision-Analytic Modelling.
Lucchese, Marcello; Borisenko, Oleg; Mantovani, Lorenzo Giovanni; Cortesi, Paolo Angelo; Cesana, Giancarlo; Adam, Daniel; Burdukova, Elisabeth; Lukyanov, Vasily; Di Lorenzo, Nicola
2017-01-01
To evaluate the cost-effectiveness of bariatric surgery in Italy from a third-party payer perspective over a medium-term (10 years) and a long-term (lifetime) horizon. A state-transition Markov model was developed, in which patients may experience surgery, post-surgery complications, diabetes mellitus type 2, cardiovascular diseases or die. Transition probabilities, costs, and utilities were obtained from the Italian and international literature. Three types of surgeries were considered: gastric bypass, sleeve gastrectomy, and adjustable gastric banding. A base-case analysis was performed for the population, the characteristics of which were obtained from surgery candidates in Italy. In the base-case analysis, over 10 years, bariatric surgery led to a cost increment of EUR 2,661 and generated an additional 1.1 quality-adjusted life-years (QALYs). Over a lifetime, surgery led to savings of EUR 8,649, an additional 0.5 life-years and 3.2 QALYs. Bariatric surgery was cost-effective at 10 years with an incremental cost-effectiveness ratio of EUR 2,412/QALY and dominant over conservative management over a lifetime. In a comprehensive decision analytic model, a current mix of surgical methods for bariatric surgery was cost-effective at 10 years and cost-saving over the lifetime of the Italian patient cohort considered in this analysis. © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.
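A minimal sketch of a state-transition (Markov cohort) cost-effectiveness comparison of this general shape: a cohort is pushed through annual transition matrices under two strategies, and discounted costs and QALYs are accumulated into an incremental cost-effectiveness ratio. The states, probabilities, costs, and utilities are invented, not the Italian model's inputs.

```python
import numpy as np

def run_cohort(P, cost, utility, years=10, disc=0.03):
    x = np.zeros(len(cost)); x[0] = 1.0       # whole cohort starts in state 0
    C = Q = 0.0
    for t in range(years):
        d = 1.0 / (1 + disc) ** t             # discount factor for year t
        C += d * (x @ cost)
        Q += d * (x @ utility)
        x = x @ P                             # annual transition
    return C, Q

# States: 0 = well, 1 = complication/comorbidity, 2 = dead.
P_surg = np.array([[0.93, 0.05, 0.02], [0.10, 0.85, 0.05], [0, 0, 1.0]])
P_cons = np.array([[0.85, 0.12, 0.03], [0.05, 0.88, 0.07], [0, 0, 1.0]])
c_s, q_s = run_cohort(P_surg, np.array([1500.0, 4000.0, 0]), np.array([0.85, 0.60, 0]))
c_c, q_c = run_cohort(P_cons, np.array([800.0, 5000.0, 0]), np.array([0.70, 0.55, 0]))
print((c_s - c_c) / (q_s - q_c))              # ICER in cost per QALY gained
```

If the intervention both costs less and yields more QALYs, the ICER is not reported and the intervention is said to dominate, as in the lifetime result above.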
Gervais, Debra A.; Hartman, Rebecca I.; Harisinghani, Mukesh G.; Feldman, Adam S.; Mueller, Peter R.; Gazelle, G. Scott
2010-01-01
Purpose: To evaluate the effectiveness, cost, and cost-effectiveness of using renal mass biopsy to guide treatment decisions for small incidentally detected renal tumors. Materials and Methods: A decision-analytic Markov model was developed to estimate life expectancy and lifetime costs for patients with small (≤4-cm) renal tumors. Two strategies were compared: renal mass biopsy to triage patients to surgery or imaging surveillance and empiric nephron-sparing surgery. The model incorporated biopsy performance, the probability of track seeding with malignant cells, the prevalence and growth of benign and malignant tumors, treatment effectiveness and costs, and patient outcomes. An incremental cost-effectiveness analysis was performed to identify strategy preference under a willingness-to-pay threshold of $75 000 per quality-adjusted life-year (QALY). Effects of changes in key parameters on strategy preference were evaluated in sensitivity analysis. Results: Under base-case assumptions, the biopsy strategy yielded a minimally greater quality-adjusted life expectancy (4 days) than did empiric surgery at a lower lifetime cost ($3466), dominating surgery from a cost-effectiveness perspective. Over the majority of parameter ranges tested in one-way sensitivity analysis, the biopsy strategy dominated surgery or was cost-effective relative to surgery based on a $75 000-per-QALY willingness-to-pay threshold. In two-way sensitivity analysis, surgery yielded greater life expectancy when the prevalence of malignancy and propensity for biopsy-negative cancers to metastasize were both higher than expected or when the sensitivity and specificity of biopsy were both lower than expected. Conclusion: The use of biopsy to guide treatment decisions for small incidentally detected renal tumors is cost-effective and can prevent unnecessary surgery in many cases. © RSNA, 2010 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10092013/-/DC1 PMID:20720070
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. This system was modeled using a Markov birth-death process with the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed with the use of the mnemonic rule, and these equations are solved with the Runge-Kutta fourth-order method. The long-run availability, reliability and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted and practiced to enhance the performance of the urea synthesis system of the fertilizer plant.
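The computation described can be illustrated on a two-state (up/down) special case: integrate the first-order Chapman-Kolmogorov equations dp/dt = p Q with a fourth-order Runge-Kutta scheme and compare the long-run availability with the exact value mu/(lam + mu). The failure and repair rates below are illustrative, not the plant's data.

```python
import numpy as np

lam, mu = 0.01, 0.5                  # failure and repair rates (per hour)
Q = np.array([[-lam, lam],
              [mu, -mu]])            # generator matrix: rows sum to zero

def rk4_step(p, h):
    f = lambda p: p @ Q              # Chapman-Kolmogorov: dp/dt = p Q
    k1 = f(p); k2 = f(p + h / 2 * k1); k3 = f(p + h / 2 * k2); k4 = f(p + h * k3)
    return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.0])             # start in the up state
for _ in range(10000):               # integrate 1000 hours in 0.1 h steps
    p = rk4_step(p, 0.1)
print(p[0], mu / (lam + mu))         # long-run availability vs. exact value
```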
Predictive Rate-Distortion for Infinite-Order Markov Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2016-06-01
Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.
Application of Markov Models for Analysis of Development of Psychological Characteristics
ERIC Educational Resources Information Center
Kuravsky, Lev S.; Malykh, Sergey B.
2004-01-01
A technique to study combined influence of environmental and genetic factors on the base of changes in phenotype distributions is presented. Histograms are exploited as base analyzed characteristics. A continuous time, discrete state Markov process with piece-wise constant interstate transition rates is associated with evolution of each histogram.…
Nested Fork-Join Queuing Networks and Their Application to Mobility Airfield Operations Analysis.
1997-03-01
…shortest queue length. Setia, Squillante, and Tripathi [109] extend Makowski and Nelson's work by performing a quantitative assessment of a range of… "Markov chains." Numerical Solution of Markov Chains, edited by W. J. Stewart, 63-88. Basel: Marcel Dekker, 1991. [109] Setia, S. K., and others
A Test of the Need Hierarchy Concept by a Markov Model of Change in Need Strength.
ERIC Educational Resources Information Center
Rauschenberger, John; And Others
1980-01-01
In this study of 547 high school graduates, Alderfer's and Maslow's need hierarchy theories were expressed in Markov chain form and were subjected to empirical test. Both models were disconfirmed. Corroborative multiwave correlational analysis also failed to support the need hierarchy concept. (Author/IRT)
UMAP Modules-Units 105, 107-109, 111-112, 158-162.
ERIC Educational Resources Information Center
Keller, Mary K.; And Others
This collection of materials includes six units dealing with applications of matrix methods. These are: 105-Food Service Management; 107-Markov Chains; 108-Electrical Circuits; 109-Food Service and Dietary Requirements; 111-Fixed Point and Absorbing Markov Chains; and 112-Analysis of Linear Circuits. The units contain exercises and model exams,…
ERIC Educational Resources Information Center
Tokac, Umit
2016-01-01
The dissertation explored the efficacy of using a POMDP to select and apply appropriate instruction. POMDPs are a tool for planning: selecting a sequence of actions that will lead to an optimal outcome. RTI is an approach to instruction, where teachers craft individual plans for students based on the results of screening tests. The goal is to…
Mo Zhou; Joseph Buongiorno; Jingjing Liang
2012-01-01
Besides the market value of timber, forests provide substantial nonmarket benefits, especially with continuous-cover silviculture, which have long been acknowledged by forest managers. They include wildlife habitat (e.g. Bevers and Hof 1999), carbon sequestration (e.g. Dewar and Cannell 1992), biodiversity (e.g. Kangas and Kuusipalo 1993; Austin and Meyers 1999),...
Time-varying nonstationary multivariate risk analysis using a dynamic Bayesian copula
NASA Astrophysics Data System (ADS)
Sarhadi, Ali; Burn, Donald H.; Concepción Ausín, María.; Wiper, Michael P.
2016-03-01
A time-varying risk analysis is proposed for an adaptive design framework in nonstationary conditions arising from climate change. A Bayesian, dynamic conditional copula is developed for modeling the time-varying dependence structure between mixed continuous and discrete multiattributes of multidimensional hydrometeorological phenomena. Joint Bayesian inference is carried out to fit the marginals and copula in an illustrative example using an adaptive Gibbs Markov chain Monte Carlo (MCMC) sampler. Posterior mean estimates and credible intervals are provided for the model parameters, and the Deviance Information Criterion (DIC) is used to select the model that best captures different forms of nonstationarity over time. This study also introduces a fully Bayesian, time-varying joint return period for multivariate time-dependent risk analysis in nonstationary environments. The results demonstrate that the nature and the risk of extreme-climate multidimensional processes change over time under the impact of climate change, and that long-term decision-making strategies should accordingly be updated based on the anomalies of the nonstationary environment.
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest; running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can easily be adapted to answer specific additional questions (e.g., identifying edges).
Markov Chain Model with Catastrophe to Determine Mean Time to Default of Credit Risky Assets
NASA Astrophysics Data System (ADS)
Dharmaraja, Selvamuthu; Pasricha, Puneet; Tardelli, Paola
2017-11-01
This article deals with the problem of probabilistic prediction of the time distance to default for a firm. To model the credit risk, the dynamics of an asset are described as a function of a homogeneous discrete-time Markov chain subject to a catastrophe, the default. The behaviour of the Markov chain is investigated, and the mean time to default is expressed in closed form. The methodology to estimate the parameters is given. Numerical results are provided to illustrate the applicability of the proposed model to real data, and their analysis is discussed.
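For the discrete-time absorbing-chain setting, the closed-form mean time to default is the standard fundamental-matrix quantity t = (I - Q)^(-1) 1, where Q is the transient block of the transition matrix. The rating-style matrix below is invented for illustration; the paper's catastrophe model is more general.

```python
import numpy as np

# States: 0 = investment grade, 1 = speculative, 2 = default (absorbing).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])
Q = P[:2, :2]                                    # transient-to-transient block
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))   # t = (I - Q)^(-1) 1
print(t)   # expected number of periods until default from states 0 and 1
```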
Markov Analysis of Sleep Dynamics
NASA Astrophysics Data System (ADS)
Kim, J. W.; Lee, J.-S.; Robinson, P. A.; Jeong, D.-U.
2009-05-01
A new approach, based on a Markov transition matrix, is proposed to explain frequent sleep and wake transitions during sleep. The matrix is determined by analyzing hypnograms of 113 obstructive sleep apnea patients. Our approach shows that the statistics of sleep can be constructed via a single Markov process and that durations of all states have modified exponential distributions, in contrast to recent reports of a scale-free form for the wake stage and an exponential form for the sleep stage. Hypnograms of the same subjects, but treated with Continuous Positive Airway Pressure, are analyzed and compared quantitatively with the pretreatment ones, suggesting potential clinical applications.
NASA Astrophysics Data System (ADS)
Naseri Kouzehgarani, Asal
2009-12-01
Most models of aircraft trajectories are non-linear and stochastic in nature, and their internal parameters are often poorly defined. The ability to model, simulate and analyze realistic air traffic management conflict detection scenarios in a scalable, composable, multi-aircraft fashion is an extremely difficult endeavor. Accurate techniques for aircraft mode detection are critical in order to enable the precise projection of aircraft conflicts, and for the enactment of altitude separation resolution strategies. Conflict detection is an inherently probabilistic endeavor: our ability to detect conflicts in a timely and accurate manner over a fixed time horizon is traded off against the increased human workload created by false alarms (that is, situations that would not develop into an actual conflict, or would resolve naturally in the appropriate time horizon), thereby introducing a measure of probabilistic uncertainty in any decision aid fashioned to assist air traffic controllers. The interaction of the continuous dynamics of the aircraft, used for prediction purposes, with the discrete conflict detection logic gives rise to the hybrid nature of the overall system. The introduction of the probabilistic element, common to decision alerting and aiding devices, places the conflict detection and resolution problem in the domain of probabilistic hybrid phenomena. A hidden Markov model (HMM) has two stochastic components: a finite-state Markov chain and a finite set of output probability distributions. In other words, it is an unobservable (hidden) stochastic process that can only be observed through another set of stochastic processes that generates the sequence of observations. The problem of self-separation in distributed air traffic management reduces to the ability of aircraft to communicate state information to neighboring aircraft, and to model the evolution of aircraft trajectories between communications, in the presence of probabilistically uncertain dynamics as well as partially observable and uncertain data. We introduce the Hybrid Hidden Markov Modeling (HHMM) formalism to enable the prediction of the stochastic aircraft states (and thus, potential conflicts), by combining elements of the probabilistic timed input output automaton and the partially observable Markov decision process frameworks, along with the novel addition of a Markovian scheduler to remove the non-deterministic elements arising from the enabling of several actions simultaneously. Comparisons of aircraft in level, climbing/descending and turning flight are performed, and unknown flight track data is evaluated probabilistically against the tuned model in order to assess the effectiveness of the model in detecting the switch between multiple flight modes for a given aircraft. This also allows for the generation of a probabilistic distribution over the execution traces of the hybrid hidden Markov model, which then enables the prediction of the states of aircraft based on partially observable and uncertain data. Based on the composition properties of the HHMM, we study a decentralized air traffic system where aircraft move along streams and can perform cruise, acceleration, climb and turn maneuvers. We develop a common decentralized policy for conflict avoidance with spatially distributed agents (aircraft in the sky) and assure its safety properties via correctness proofs.
Three real-time architectures - A study using reward models
NASA Technical Reports Server (NTRS)
Sjogren, J. A.; Smith, R. M.
1990-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, up states have reward rate 1, and down states have reward rate 0. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault-tolerant computing community.
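The steady-state expected reward rate mentioned here is straightforward to compute: solve pi Q = 0 with the entries of pi summing to 1 for the stationary distribution of the continuous-time chain, then weight the per-state reward rates. The three-state generator and rewards below are illustrative, not one of the paper's architectures.

```python
import numpy as np

Q = np.array([[-0.02, 0.02, 0.00],   # states: 0 = full capacity,
              [0.50, -0.51, 0.01],   # 1 = degraded, 2 = down
              [0.00, 1.00, -1.00]])  # generator: rows sum to zero
r = np.array([1.0, 0.6, 0.0])        # reward (capacity) rate per state

A = np.vstack([Q.T, np.ones(3)])     # stationarity pi Q = 0 plus normalization
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi @ r)                        # steady-state expected reward rate
```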
NASA Technical Reports Server (NTRS)
Gai, E. G.; Curry, R. E.
1978-01-01
An investigation of the behavior of the human decision maker is described for a task related to the problem of a pilot using a traffic situation display to avoid collisions. This sequential signal detection task is characterized by highly correlated signals with time-varying strength. Experimental results are presented, and the behavior of the observers is analyzed using the theory of Markov processes and classical signal detection theory. Mathematical models are developed which describe the main result of the experiment: that correlation in sequential signals induced perseveration in the observers' responses and a strong tendency to repeat their previous decision, even when it was wrong.
Falling Person Detection Using Multi-Sensor Signal Processing
NASA Astrophysics Data System (ADS)
Toreyin, B. Ugur; Soyer, A. Birey; Onaran, Ibrahim; Cetin, E. Enis
2007-12-01
Falls are one of the most important problems for frail and elderly people living independently. Early detection of falls is vital to providing a safe and active lifestyle for the elderly. Sound, passive infrared (PIR) and vibration sensors can be placed in a supportive home environment to provide information about the daily activities of an elderly person. In this paper, signals produced by sound, PIR and vibration sensors are simultaneously analyzed to detect falls. Hidden Markov models are trained for the regular and unusual activities of an elderly person and a pet for each sensor signal. The decisions of the HMMs are then fused to reach a final decision.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
The management of patients with T1 adenocarcinoma of the low rectum: a decision analysis.
Johnston, Calvin F; Tomlinson, George; Temple, Larissa K; Baxter, Nancy N
2013-04-01
Decision making for patients with T1 adenocarcinoma of the low rectum, when treatment options are limited to a transanal local excision or abdominoperineal resection, is challenging. The aim of this study was to develop a contemporary decision analysis to assist patients and clinicians in balancing the goals of maximizing life expectancy and quality of life in this situation. We constructed a Markov-type microsimulation in open-source software. Recurrence rates and quality-of-life parameters were elicited by systematic literature reviews. Sensitivity analyses were performed on key model parameters. Our base case for analysis was a 65-year-old man with low-lying T1N0 rectal cancer. We determined the sensitivity of our model for sex, age up to 80, and T stage. The main outcome measured was quality-adjusted life-years. In the base case, selecting transanal local excision over abdominoperineal resection resulted in a loss of 0.53 years of life expectancy but a gain of 0.97 quality-adjusted life-years. One-way sensitivity analysis demonstrated a health state utility value threshold for permanent colostomy of 0.93. This value ranged from 0.88 to 1.0 based on tumor recurrence risk. There were no other model sensitivities. Some model parameter estimates were based on weak data. In our model, transanal local excision was found to be the preferable approach for most patients. An abdominoperineal resection has a 3.5% longer life expectancy, but this advantage is lost when the quality-of-life reduction reported by stoma patients is weighed in. The minority group in whom abdominoperineal resection is preferred are those who are unwilling to sacrifice 7% of their life expectancy to avoid a permanent stoma. This is estimated to be approximately 25% of all patients. The threshold increases to 12% of life expectancy in high-risk tumors. No other factors are found to be relevant to the decision.
Multiensemble Markov models of molecular thermodynamics and kinetics.
Wu, Hao; Paul, Fabian; Wehmeyer, Christoph; Noé, Frank
2016-06-07
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models-clustering of high-dimensional spaces and modeling of complex many-state systems-with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein-ligand binding model.
Multiensemble Markov models of molecular thermodynamics and kinetics
Wu, Hao; Paul, Fabian; Noé, Frank
2016-01-01
We introduce the general transition-based reweighting analysis method (TRAM), a statistically optimal approach to integrate both unbiased and biased molecular dynamics simulations, such as umbrella sampling or replica exchange. TRAM estimates a multiensemble Markov model (MEMM) with full thermodynamic and kinetic information at all ensembles. The approach combines the benefits of Markov state models—clustering of high-dimensional spaces and modeling of complex many-state systems—with those of the multistate Bennett acceptance ratio of exploiting biased or high-temperature ensembles to accelerate rare-event sampling. TRAM does not depend on any rate model in addition to the widely used Markov state model approximation, but uses only fundamental relations such as detailed balance and binless reweighting of configurations between ensembles. Previous methods, including the multistate Bennett acceptance ratio, discrete TRAM, and Markov state models are special cases and can be derived from the TRAM equations. TRAM is demonstrated by efficiently computing MEMMs in cases where other estimators break down, including the full thermodynamics and rare-event kinetics from high-dimensional simulation data of an all-atom protein–ligand binding model. PMID:27226302
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and, ideally, validation of predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures, and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average, on all performance measures, than all the comparison algorithms. PMID:25045325
Multistage Fuzzy Decision Making in Bilateral Negotiation with Finite Termination Times
NASA Astrophysics Data System (ADS)
Richter, Jan; Kowalczyk, Ryszard; Klusch, Matthias
In this paper we model the negotiation process as a multistage fuzzy decision problem where the agents' preferences are represented by a fuzzy goal and fuzzy constraints. The opponent is represented by a fuzzy Markov decision process in the form of offer-response patterns, which enables the utilization of limited and uncertain information, e.g. the characteristics of the concession behaviour. We show that we can obtain adaptive negotiation strategies by using only the negotiation threads of two past cases to create and update the fuzzy transition matrix. The experimental evaluation demonstrates that our approach is adaptive towards different negotiation behaviours and that the fuzzy representation of the preferences and the transition matrix allows for application in many scenarios where the available information, preferences and constraints are soft or imprecise.
Prosthetic Cost Projections for Servicemembers with Major Limb Loss from Vietnam and OIF/OEF
2010-01-01
[Figure caption residue] Figure 3. Markov model for unilateral upper limb and bilateral upper limbs for the Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF) group. ASR = age-sex-race-adjusted death rates; DOD = Department of Defense; DSS = Decision Support System; MFCL = Medicare Functional Classification Level.
NASA Technical Reports Server (NTRS)
Buntine, Wray
1994-01-01
IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
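The sequence-estimation core is a Viterbi pass over a trellis whose states are candidate interpolation functions. The sketch below uses made-up per-pixel state costs and a constant switching penalty in place of the paper's probabilistic model.

```python
import numpy as np

def viterbi(cost, switch_penalty):
    """cost[t, s]: local cost of using interpolation function s at pixel t."""
    T, S = cost.shape
    best = cost[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = best[:, None] + switch_penalty * (1 - np.eye(S))
        back[t] = trans.argmin(axis=0)
        best = trans.min(axis=0) + cost[t]
    path = [int(best.argmin())]
    for t in range(T - 1, 0, -1):         # backtrack the optimal sequence
        path.append(int(back[t, path[-1]]))
    return path[::-1]

cost = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.7, 0.3]])
print(viterbi(cost, switch_penalty=0.5))  # -> [0, 0, 1, 1]
```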
Composition of Web Services Using Markov Decision Processes and Dynamic Programming
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
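A value-iteration sketch on a toy MDP of this general shape (states as composition stages, actions as candidate services, rewards standing in for QoS attributes); all numbers are randomly generated for illustration.

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(3)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # QoS reward

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * (P @ V)          # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)            # greedy backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)            # best service choice per stage
print(V, policy)
```

Policy iteration replaces the greedy backup with an exact policy-evaluation solve per iteration, which is why it typically converges in far fewer iterations, as the paper reports.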
[Modeling the operating history of turbine-generator units]
NASA Astrophysics Data System (ADS)
Szczota, Mickael
Because of their ageing fleet, utility managers increasingly need tools that can help them plan maintenance operations efficiently. Hydro-Quebec started a project that aims to foresee the degradation of its hydroelectric runners and to use that information to classify the generating units. This classification will help identify which generating units are most at risk of a major failure. Cracks linked to fatigue are a predominant degradation mode, and the loading sequence applied to the runner is a parameter affecting crack growth. The aim of this thesis is therefore to create a generator able to produce synthetic loading sequences that are statistically equivalent to the observed history. These simulated sequences will be used as input to a life assessment model. First, we describe how the generating units are operated by Hydro-Quebec and analyse the available data; the analysis shows that the data are non-stationary. We then review modeling and validation methods. In the following chapter, particular attention is given to a precise description of the validation and comparison procedure. We then present a comparison of three kinds of models: discrete-time Markov chains, discrete-time semi-Markov chains, and the moving block bootstrap. For the first two models, we describe how to account for the non-stationarity. Finally, we show that the Markov chain is not suited to our case and that semi-Markov chains perform better when they include the non-stationarity. The final choice between semi-Markov chains and the moving block bootstrap depends on the user, but with a long-term vision we recommend semi-Markov chains for their flexibility. Keywords: stochastic models, model validation, reliability, semi-Markov chains, Markov chains, bootstrap
Robertson, Colin; Sawford, Kate; Gunawardana, Walimunige S. N.; Nelson, Trisalyn A.; Nathoo, Farouk; Stephen, Craig
2011-01-01
Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet there are many challenges that remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline data. PMID:21949763
Estimation in a semi-Markov transformation model
Dabrowska, Dorota M.
2012-01-01
Multi-state models provide a common tool for the analysis of longitudinal failure time data. In biomedical applications, models of this kind are often used to describe the evolution of a disease and assume that the patient may move among a finite number of states representing different phases in the disease progression. Several authors have developed extensions of the proportional hazard model for the analysis of multi-state models in the presence of covariates. In this paper, we consider a general class of censored semi-Markov and modulated renewal processes and propose the use of transformation models for their analysis. Special cases include modulated renewal processes with interarrival times specified using transformation models, and semi-Markov processes with one-step transition probabilities defined using copula-transformation models. We discuss estimation of finite and infinite dimensional parameters of the model, and develop an extension of the Gaussian multiplier method for setting confidence bands for transition probabilities. A transplant outcome data set from the Center for International Blood and Marrow Transplant Research is used for illustrative purposes. PMID:22740583
Bhavnani, Vanita; Clarke, Aileen; Dowie, Jack; Kennedy, Andrew; Pell, Ian
2002-01-01
Introduction A qualitative pilot evaluation was undertaken of two different decision interventions for the prophylactic oophorectomy (PO) decision: a Decision Chart and a computerized clinical guidance programme (CGP). The Decision Chart, representing current practice in decision interventions, presents population‐based information. The CGP elicits individual values to allow quality‐adjusted life years to be calculated, and an explicit guidance statement is given. Prophylactic oophorectomy involves removal of the ovaries as an adjunct to hysterectomy to prevent ovarian cancer. The decision is complex because the operation can affect a number of long‐term outcomes including breast cancer, coronary heart disease and osteoporosis. Methods Both interventions were based on the evidence and were administered by a facilitator. The Decision Chart is a file which progressively reveals information in the form of bar charts. The CGP is a decision‐analysis-based program integrating the results from a cluster of Markov cycle trees. The research evidence is incorporated with the woman's individual risk factors, values and preferences. A purposive sample of 19 women awaiting hysterectomy used the decision interventions (10 CGP, nine Decision Chart). In‐depth semi‐structured interviews were undertaken. Interviews were transcribed and analysed to derive themes. Results Reactions to the different decision interventions were mixed. Both were seen as clarifying the decision. Some women found some of the tasks difficult (e.g. rating health status). Some were surprised by the ‘individualized’ guidance which the CGP offered. The Decision Chart provided some with a sense of empowerment, although some found that it provided too much information. Conclusions Women were able to use both decision interventions. Both provided decision clarification. Problems were evident with both interventions, which give useful pointers for future development. These included the possibility for women to see how their individual risks of different outcomes are affected in the Decision Chart, and enhanced explanation of the CGP tasks. Future design and evaluation of decision aids will need to accommodate differences between patients in the desired amount and type of information and level of involvement in the decision‐making process. PMID:12031056
Cost Analysis of Ceramic Heads in Primary Total Hip Arthroplasty.
Carnes, Keith J; Odum, Susan M; Troyer, Jennifer L; Fehring, Thomas K
2016-11-02
The advent of adverse local tissue reactions seen in metal-on-metal bearings, and the recent recognition of trunnionosis, have led many surgeons to recommend ceramic-on-polyethylene articulations for primary total hip arthroplasty. However, to our knowledge, there has been little research that has considered whether the increased cost of ceramic provides enough benefit over cobalt-chromium to justify its use. The primary purpose of this study was to compare the cost-effectiveness of ceramic-on-polyethylene implants and metal-on-polyethylene implants in patients undergoing total hip arthroplasty. Markov decision modeling was used to determine the ceramic-on-polyethylene implant revision rate necessary to be cost-effective compared with the revision rate of metal-on-polyethylene implants across a range of patient ages and implant costs. A different set of Markov models was used to estimate the national cost burden of choosing ceramic-on-polyethylene implants over metal-on-polyethylene implants for primary total hip arthroplasties. The Premier Research Database was used to identify 20,398 patients who in 2012 were ≥45 years of age and underwent a total hip arthroplasty with either a ceramic-on-polyethylene implant or a metal-on-polyethylene implant. The cost-effectiveness of ceramic heads is highly dependent on the cost differential between ceramic and metal femoral heads and the age of the patient. At a cost differential of $325, ceramic-on-polyethylene bearings are cost-effective for patients <85 years of age. At a cost differential of $600, it is cost-effective to utilize ceramic-on-polyethylene bearings in patients <65 years of age, and, at a differential of $1,003, ceramic-on-polyethylene bearings are not cost-effective at any age. The ability to recoup the initial increased expenditure of ceramic heads through a diminished lifetime revision cost is dependent on the price premium for ceramic and the age of the patient. A wholesale switch to ceramic bearings regardless of age or cost differential may result in an economic burden to the health system. Economic and decision analysis, Level III. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
Laramée, Philippe; Brodtkorb, Thor-Henrik; Rahhali, Nora; Knight, Chris; Barbosa, Carolina; François, Clément; Toumi, Mondher; Daeppen, Jean-Bernard; Rehm, Jürgen
2014-01-01
Objectives To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. Design Decision modelling using Markov chains compared costs and effects over 5 years. Setting The analysis was from the perspective of the National Health Service (NHS) in England and Wales. Participants The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. Data sources We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. Main outcome measures We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. Results Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20 000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100 000 patients compared to psychosocial support alone over the course of 5 years. Conclusions Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. Trial registration numbers This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941). PMID:25227627
A Langevin equation for the rates of currency exchange based on the Markov analysis
NASA Astrophysics Data System (ADS)
Farahpour, F.; Eskandari, Z.; Bahraminasab, A.; Jafari, G. R.; Ghasemi, F.; Sahimi, Muhammad; Reza Rahimi Tabar, M.
2007-11-01
We propose a method for analyzing the data for the rates of exchange of various currencies versus the U.S. dollar. The method analyzes the return time series of the data as a Markov process, and develops an effective equation which reconstructs it. We find that the Markov time scale, i.e., the time scale over which the data are Markov-correlated, is one day for the majority of the daily exchange rates that we analyze. We derive an effective Langevin equation to describe the fluctuations in the rates. The equation contains two quantities, D^(1) and D^(2), representing the drift and diffusion coefficients, respectively. We demonstrate how the two coefficients are estimated directly from the data, without using any assumptions or models for the underlying stochastic process.
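A minimal sketch of how drift and diffusion coefficients of this kind can be estimated from a time series via conditional moments of the increments. The binning, time step, and synthetic data below are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                     # one trading day, assumed
x = np.cumsum(rng.normal(0.0, 0.01, 5000))   # stand-in for a log-rate series

nbins = 20
edges = np.linspace(x.min(), x.max(), nbins + 1)
idx = np.clip(np.digitize(x[:-1], edges) - 1, 0, nbins - 1)  # bin of each start point
dx = np.diff(x)

D1 = np.full(nbins, np.nan)                  # drift estimate per bin
D2 = np.full(nbins, np.nan)                  # diffusion estimate per bin
for b in range(nbins):
    m = idx == b
    if m.sum() > 30:                         # require enough samples per bin
        D1[b] = dx[m].mean() / dt                # first conditional moment
        D2[b] = (dx[m] ** 2).mean() / (2 * dt)   # second conditional moment
```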
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H.
2015-01-01
With intensively collected longitudinal data, recent advances in the Experience Sampling Method (ESM) benefit empirical research in the social sciences, but also pose important methodological challenges. As traditional statistical models are not generally well-equipped to analyze a system of variables that contains feedback loops, this paper proposes the utility of an extended hidden Markov model to model the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet & Dube, 2011) that observed 160 participants' food consumption and momentary emotions six times per day for 10 days. Focusing on the feedback loop between mood and meal-healthiness decisions, the proposed Reciprocal Markov Model (RMM) can accommodate both hidden ("general" emotional states: positive vs. negative) and observed states (meal: healthier, the same, or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrated the reciprocal chains of meal consumption and mood, as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters. PMID:26717120
An, R; Xue, H; Wang, L; Wang, Y
2017-09-22
This study aimed to project the societal cost and benefit of expanding a water access intervention that promotes lunchtime plain water consumption by placing water dispensers in New York school cafeterias to all schools nationwide. A decision model was constructed to simulate two events under Markov chain processes - placing water dispensers at lunchtimes in school cafeterias nationwide vs. no action. The incremental cost pertained to water dispenser purchase and maintenance, whereas the incremental benefit resulted from cases of childhood overweight/obesity prevented and the corresponding lifetime direct (medical) and indirect costs saved. Based on the decision model, the estimated incremental cost of the school-based water access intervention is $18 per student, and the corresponding incremental benefit is $192, resulting in a net benefit of $174 per student. Subgroup analysis estimates the net benefit per student to be $199 and $149 among boys and girls, respectively. Nationwide adoption of the intervention would prevent 0.57 million cases of childhood overweight, resulting in a lifetime cost saving totalling $13.1 billion. The estimated total cost saved per dollar spent was $14.50. The New York school-based water access intervention, if adopted nationwide, may have a considerably favourable benefit-cost portfolio. © 2017 World Obesity Federation.
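A back-of-the-envelope check of the per-student figures reported above; the enrollment count used for scaling is a hypothetical input, not a number from the study:

```python
# Per-student economics as reported in the abstract.
cost_per_student = 18.0       # dispenser purchase + maintenance ($)
benefit_per_student = 192.0   # lifetime direct + indirect costs saved ($)
net_benefit = benefit_per_student - cost_per_student
print(f"net benefit per student: ${net_benefit:.0f}")    # $174, as reported

students = 50_000_000         # hypothetical nationwide enrollment, assumed
print(f"scaled net benefit: ${net_benefit * students / 1e9:.1f} billion")
```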
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical errors due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
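To make the decomposition idea concrete, here is a hedged sketch (not the thesis code): a small subsystem Markov model is solved on its own, and only its aggregate failure probability is passed up to a higher-level series combination. The generator matrix, rates, and mission time are invented placeholders:

```python
import numpy as np
from scipy.linalg import expm

# Subsystem CTMC: states = (2 units up, 1 up, failed); last state absorbing.
lam, mu = 1e-4, 1e-2           # failure and repair rates (per hour), assumed
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, 0.0, 0.0]])

t = 1e4                        # mission time (hours), assumed
p0 = np.array([1.0, 0.0, 0.0])
p_t = p0 @ expm(Q * t)         # transient state distribution at time t
sub_unrel = p_t[-1]            # subsystem failure probability by time t

# Higher level: series combination of the subsystem with one more component,
# using the aggregate subsystem result instead of the full joint state space.
comp_unrel = 1.0 - np.exp(-lam * t)
system_rel = (1.0 - sub_unrel) * (1.0 - comp_unrel)
print(system_rel)
```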
Radford, Isolde H; Fersht, Alan R; Settanni, Giovanni
2011-06-09
Atomistic molecular dynamics simulations of the TZ1 beta-hairpin peptide have been carried out using an implicit model for the solvent. The trajectories have been analyzed using a Markov state model defined on the projections along two significant observables and a kinetic network approach. The Markov state model allowed for an unbiased identification of the metastable states of the system, and provided the basis for commitment probability calculations performed on the kinetic network. The kinetic network analysis served to extract the main transition state for folding of the peptide and to validate the results from the Markov state analysis. The combination of the two techniques allowed for a consistent and concise characterization of the dynamics of the peptide. The slowest relaxation process identified is the exchange between variably folded and denatured species, and the second slowest process is the exchange between two different subsets of the denatured state which could not be otherwise identified by simple inspection of the projected trajectory. The third slowest process is the exchange between a fully native and a partially folded intermediate state characterized by a native turn with a proximal backbone H-bond, and frayed side-chain packing and termini. The transition state for the main folding reaction is similar to the intermediate state, although a more native like side-chain packing is observed.
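One standard piece of machinery behind such a Markov state analysis is extracting relaxation timescales from the eigenvalues of the estimated transition matrix; the slowest timescales correspond to the exchange processes ranked above. A minimal sketch with a placeholder 3-state matrix and an assumed lag time:

```python
import numpy as np

lag = 1.0                            # lag time between snapshots (ns), assumed
T = np.array([[0.90, 0.08, 0.02],    # row-stochastic transition matrix (toy)
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])

eigvals = np.sort(np.linalg.eigvals(T).real)[::-1]
# Eigenvalue 1 corresponds to the stationary distribution; the remaining
# eigenvalues give the implied relaxation timescales, slowest first.
timescales = -lag / np.log(eigvals[1:])
print(timescales)
```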
Skaar, Daniel D; Park, Taehwan; Swiontkowski, Marc F; Kuntz, Karen M
2015-11-01
Clinician uncertainty persists concerning the need for antibiotic prophylaxis to prevent prosthetic joint infection (PJI) in patients undergoing dental procedures. Improved understanding of the potential clinical and economic risks and benefits of antibiotic prophylaxis will help inform the debate and facilitate the continuing evolution of clinical management guidelines for dental patients with prosthetic joints. The authors developed a Markov decision model to compare the lifetime cost-effectiveness of alternative antibiotic prophylaxis strategies for dental patients aged 65 years who had undergone total hip arthroplasty (THA). On the basis of the authors' interpretation of previous recommendations from the American Dental Association and American Academy of Orthopaedic Surgeons, they compared the following strategies: no prophylaxis, prophylaxis for the first 2 years after arthroplasty, and lifetime prophylaxis. A strategy of forgoing antibiotic prophylaxis before dental visits was cost-effective and resulted in lower lifetime accumulated costs ($11,909) and higher accumulated quality-adjusted life years (QALYs) (12.375) when compared with alternative prophylaxis strategies. The results of Markov decision modeling indicated that a no-antibiotic-prophylaxis strategy was cost-effective for dental patients who had undergone THA. These results support the findings of case-control studies and the conclusions of an American Dental Association Council on Scientific Affairs report that questioned general recommendations for antibiotic prophylaxis before dental procedures. The results of cost-effectiveness decision modeling support the contention that routine antibiotic prophylaxis for dental patients with total joint arthroplasty should be reconsidered. Copyright © 2015 American Dental Association. Published by Elsevier Inc. All rights reserved.
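The comparison of strategies rests on a standard Markov cohort calculation: propagate a state distribution cycle by cycle, accumulating costs and QALYs per strategy. The sketch below illustrates the mechanics only; every transition probability, cost, and utility in it is an invented placeholder, not an input from this study:

```python
import numpy as np

def run_strategy(p_infection, annual_prophylaxis_cost, years=35):
    # States: 0 = well with THA, 1 = prosthetic joint infection, 2 = dead.
    P = np.array([[0.96 - p_infection, p_infection, 0.04],
                  [0.50, 0.20, 0.30],
                  [0.00, 0.00, 1.00]])
    cost = np.array([annual_prophylaxis_cost, 40_000.0, 0.0])  # per cycle ($)
    utility = np.array([0.85, 0.55, 0.0])                      # QALY weights
    dist = np.array([1.0, 0.0, 0.0])                           # cohort starts well
    total_cost = total_qaly = 0.0
    for _ in range(years):
        total_cost += dist @ cost
        total_qaly += dist @ utility
        dist = dist @ P                                        # one Markov cycle
    return total_cost, total_qaly

for name, p, c in [("no prophylaxis", 0.0020, 0.0),
                   ("lifetime prophylaxis", 0.0015, 40.0)]:
    print(name, run_strategy(p, c))
```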
NASA Astrophysics Data System (ADS)
Snyder, Morgan E.; Waldron, John W. F.
2018-03-01
The deformation history of the Upper Paleozoic Maritimes Basin, Atlantic Canada, can be partially unraveled by examining fractures (joints, veins, and faults) that are well exposed on the shorelines of the macrotidal Bay of Fundy, in subsurface core, and on image logs. Data were collected from coastal outcrops and well core across the Windsor-Kennetcook subbasin of the Maritimes Basin, using the circular scan-line and vertical scan-line methods in outcrop, and FMI image-log analysis of core. We use cross-cutting and abutting relationships between fractures to understand the relative timing of fracturing, followed by a statistical test (Markov chain analysis) to separate groups of fractures. This analysis, previously used in sedimentology, was modified to statistically test the randomness of fracture timing relationships. The results of the Markov chain analysis suggest that fracture initiation can be attributed to movement along the Minas Fault Zone, an E-W fault system that bounds the Windsor-Kennetcook subbasin to the north. Four sets of fractures are related to dextral strike slip along the Minas Fault Zone in the late Paleozoic, and four sets are related to sinistral reactivation of the same boundary in the Mesozoic.
A dynamic fault tree model of a propulsion system
NASA Technical Reports Server (NTRS)
Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila
2006-01-01
We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.
Exact Test of Independence Using Mutual Information
2014-05-23
At the nominal 0.05 level, 1000 tests should yield about 1000 × 0.05 = 50 Type I errors. Importantly, the permutation test, which does not preserve Markov order, resulted in 489 Type I errors. Published in Entropy, 16(7), 2839-2849 (2014); doi:10.3390
Constructing Abstraction Hierarchies Using a Skill-Symbol Loop
Konidaris, George
2017-01-01
We describe a framework for building abstraction hierarchies whereby an agent alternates skill- and representation-construction phases to construct a sequence of increasingly abstract Markov decision processes. Our formulation builds on recent results showing that the appropriate abstract representation of a problem is specified by the agent’s skills. We describe how such a hierarchy can be used for fast planning, and illustrate the construction of an appropriate hierarchy for the Taxi domain. PMID:28579718
Learning Representation and Control in Markov Decision Processes
2013-10-21
Figure 3 shows that Drazin bases outperform the other bases on a two-room MDP. However, a drawback of Drazin bases is that they are... stochastic matrices. One drawback of diffusion wavelets is that it can generate a large number of overcomplete bases, which needs to be effectively... proposed in [52], overcoming some of the drawbacks of LARS-TD. An approximate linear programming for finding l1-regularized solutions of the Bellman...
Combining Offline and Online Computation for Solving Partially Observable Markov Decision Process
2015-03-06
• David Hsu and Wee Sun Lee, Monte Carlo Bayesian Reinforcement Learning, International Conference on Machine Learning (ICML), 2012. • Haoyu Bai, David... and Automation (ICRA), 2015. • Zhan Wei Lim, David Hsu, and Wee Sun Lee, Adaptive Informative Path Planning in Metric Spaces. Submitted to Int. J... Automation (ICRA), 2015. • Bai, H., Hsu, D., Kochenderfer, M. J., and Lee, W. S., Unmanned aircraft collision avoidance using continuous state POMDPs
Asynchronous Gossip for Averaging and Spectral Ranking
NASA Astrophysics Data System (ADS)
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
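For context, the classical pairwise gossip step that the first variant modifies looks like this: at each tick a random edge activates and its two endpoints average their values. A baseline sketch under an assumed ring topology, not the authors' reinforcement-learning scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=8)                 # initial node values
edges = [(i, (i + 1) % 8) for i in range(8)]   # ring topology, assumed

target = x.mean()                              # the desired average
for _ in range(2000):
    i, j = edges[rng.integers(len(edges))]     # random edge activates
    x[i] = x[j] = 0.5 * (x[i] + x[j])          # pairwise averaging update

print(abs(x - target).max())                   # spread around the true average
```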
Stochastic Model of Seasonal Runoff Forecasts
NASA Astrophysics Data System (ADS)
Krzysztofowicz, Roman; Watada, Leslie M.
1986-03-01
Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
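The Bayesian information processor described has a closed form under the stated assumptions (normal prior on runoff, linear-normal forecast error model). A sketch with illustrative numbers, not values estimated from the Snake River record:

```python
# Prior: runoff w ~ N(mu0, s0^2). Forecast model: f = a*w + b + N(0, se^2).
mu0, s0 = 900.0, 200.0        # prior mean/sd of seasonal runoff volume, assumed
a, b, se = 1.0, 50.0, 120.0   # linear forecast-error model parameters, assumed

f = 1050.0                    # an issued forecast value

# Standard normal-normal conjugate update for w | f:
prec = 1 / s0**2 + a**2 / se**2
mu_post = (mu0 / s0**2 + a * (f - b) / se**2) / prec
sd_post = prec ** -0.5
print(mu_post, sd_post)       # posterior density of actual runoff given forecast
```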
Rajavel, Rajkumar; Thangarathinam, Mala
2015-01-01
Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision-making model for optimizing negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, made using a stochastic decision tree scenario, can maximize revenue for the participants in the cloud service negotiation framework. PMID:26543899
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to run microsimulation models more efficiently than software commonly used for decision modeling, to incorporate statistical analyses within decision models, and to produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to building microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
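In the same spirit as the tutorial's vectorization advice, here is a minimal microsimulation sketch, written in Python rather than the paper's R, with toy states, transition probabilities, and QALY weights:

```python
import numpy as np

rng = np.random.default_rng(42)
n, cycles = 100_000, 30
P = np.array([[0.85, 0.10, 0.05],     # healthy -> healthy/sick/dead (toy)
              [0.20, 0.65, 0.15],     # sick
              [0.00, 0.00, 1.00]])    # dead (absorbing)
qaly = np.array([1.0, 0.6, 0.0])      # per-cycle QALY weight by state

state = np.zeros(n, dtype=int)        # everyone starts healthy
total = np.zeros(n)
cum = P.cumsum(axis=1)                # cumulative rows for inverse sampling
for _ in range(cycles):
    total += qaly[state]
    u = rng.random(n)
    # Vectorized transition: draw the next state for all individuals at once.
    state = (u[:, None] > cum[state]).sum(axis=1)

print(total.mean())                   # mean QALYs per simulated individual
```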
Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine
2013-06-01
To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
A cost-effectiveness analysis of screening for silent atrial fibrillation after ischaemic stroke.
Levin, Lars-Åke; Husberg, Magnus; Sobocinski, Piotr Doliwa; Kull, Viveka Frykman; Friberg, Leif; Rosenqvist, Mårten; Davidson, Thomas
2015-02-01
The purpose of this study was to estimate the cost-effectiveness of two screening methods for detection of silent AF, intermittent electrocardiogram (ECG) recordings using a handheld recording device, at regular time intervals for 30 days, and short-term 24 h continuous Holter ECG, in comparison with a no-screening alternative in 75-year-old patients with a recent ischaemic stroke. The long-term (20-year) costs and effects of all alternatives were estimated with a decision analytic model combining the result of a clinical study and epidemiological data from Sweden. The structure of a cost-effectiveness analysis was used in this study. The short-term decision tree model analysed the screening procedure until the onset of anticoagulant treatment. The second part of the decision model followed a Markov design, simulating the patients' health states for 20 years. Continuous 24 h ECG recording was inferior to intermittent ECG in terms of cost-effectiveness, due to both lower sensitivity and higher costs. The base-case analysis compared intermittent ECG screening with no screening of patients with recent stroke. The implementation of the screening programme on 1000 patients resulted over a 20-year period in 11 avoided strokes and the gain of 29 life-years, or 23 quality-adjusted life years, and cost savings of €55 400. Screening of silent AF by intermittent ECG recordings in patients with a recent ischaemic stroke is a cost-effective use of health care resources saving costs and lives and improving the quality of life. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.
A method of hidden Markov model optimization for use with geophysical data sets
NASA Technical Reports Server (NTRS)
Granat, R. A.
2003-01-01
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
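The abstract gives no implementation details, but a core ingredient of any HMM-based analysis is decoding the most likely hidden-state sequence. A compact, self-contained Viterbi sketch with a toy two-state model; all parameters here are assumptions, not the authors' geophysical observation model:

```python
import numpy as np

logA = np.log(np.array([[0.95, 0.05],   # hidden-state transition matrix (toy)
                        [0.10, 0.90]]))
logpi = np.log(np.array([0.5, 0.5]))    # initial state distribution (toy)

def viterbi(loglik):
    """loglik[t, s] = log p(observation t | hidden state s)."""
    T, S = loglik.shape
    delta = logpi + loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA      # scores[i, j]: best path into j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik[t]
    path = [int(delta.argmax())]            # backtrack the optimal sequence
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

loglik = np.log(np.full((5, 2), 0.5))       # uninformative toy observations
print(viterbi(loglik))
```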
Semi-Markov Models for Degradation-Based Reliability
2010-01-01
standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter... We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and... provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros
ERIC Educational Resources Information Center
Nicholls, Miles G.
2007-01-01
In this paper, absorbing Markov chains are used to analyse the flows of higher degree by research candidates (doctoral and master) within an Australian faculty of business. The candidates are analysed according to whether they are full time or part time. The need for such analysis stemmed from what appeared to be a rather poor completion rate (as…
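The standard absorbing-chain calculation behind such a completion-rate analysis uses the fundamental matrix. A sketch with invented transition probabilities (two transient candidature states, two absorbing outcomes), not the faculty's data:

```python
import numpy as np

# Transient states: year-1 candidate, year-2+ candidate.
# Absorbing states: completed, withdrawn.
Q = np.array([[0.00, 0.80],      # transitions among transient states (toy)
              [0.00, 0.65]])
R = np.array([[0.05, 0.15],      # transient -> (completed, withdrawn) (toy)
              [0.25, 0.10]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities
print(B[0])    # P(complete), P(withdraw) for a starting year-1 candidate
```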
Screen or not to screen for peripheral arterial disease: guidance from a decision model.
Vaidya, Anil; Joore, Manuela A; Ten Cate-Hoek, Arina J; Ten Cate, Hugo; Severens, Johan L
2014-01-29
Asymptomatic peripheral arterial disease (PAD) is associated with greater risk of acute cardiovascular events. This study aims to determine the cost-effectiveness of one-time-only PAD screening using the Ankle Brachial Index (ABI) test and subsequent antiplatelet preventive treatment (low-dose aspirin or clopidogrel) in individuals at high risk for acute cardiovascular events, compared to no screening and no treatment, using decision-analytic modelling. A probabilistic Markov model was developed to evaluate the lifetime cost-effectiveness of the strategy of selective PAD screening and consequent preventive treatment compared to no screening and no preventive treatment. The analysis was conducted from the Dutch societal perspective and, to address decision uncertainty, probabilistic sensitivity analysis was performed. Results were based on average values of 1000 Monte Carlo simulations, using discount rates of 1.5% and 4% for effects and costs, respectively. One-way sensitivity analyses were performed to identify the two most influential model parameters affecting model outputs. Then, a two-way sensitivity analysis was conducted for combinations of values tested for these two most influential parameters. For the PAD screening strategy, life years and quality-adjusted life years gained were 21.79 and 15.66, respectively, at a lifetime cost of 26,548 Euros. Compared to no screening and treatment (20.69 life years, 15.58 quality-adjusted life years, 28,052 Euros), these results indicate that PAD screening and treatment is a dominant strategy. The cost-effectiveness acceptability curves show an 88% probability of PAD screening being cost-effective at the willingness-to-pay (WTP) threshold of 40,000 Euros. In a scenario analysis using clopidogrel as an alternative antiplatelet drug, the PAD screening strategy remained dominant. This decision analysis suggests that targeted ABI screening and consequent secondary prevention of cardiovascular events using low-dose aspirin or clopidogrel in the identified patients is a cost-effective strategy. Implementation of targeted PAD screening and subsequent treatment in primary care practices and in public health programs is likely to improve societal health and to save health care costs by reducing catastrophic cardiovascular events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Louie, Alexander V.; Rodrigues, George
Purpose: To compare the quality-adjusted life expectancy and overall survival in patients with Stage I non-small-cell lung cancer (NSCLC) treated with either stereotactic body radiation therapy (SBRT) or surgery. Methods and Materials: We constructed a Markov model to describe health states after either SBRT or lobectomy for Stage I NSCLC for a 5-year time frame. We report various treatment strategy survival outcomes stratified by age, sex, and pack-year history of smoking, and compared these with an external outcome prediction tool (Adjuvant! Online). Results: Overall survival, cancer-specific survival, and other causes of death as predicted by our model correlated closely with those predicted by the external prediction tool. Overall survival at 5 years as predicted by baseline analysis of our model is in favor of surgery, with a benefit ranging from 2.2% to 3.0% for all cohorts. Mean quality-adjusted life expectancy ranged from 3.28 to 3.78 years after surgery and from 3.35 to 3.87 years for SBRT. The utility threshold for preferring SBRT over surgery was 0.90. Outcomes were sensitive to quality of life, the proportion of local and regional recurrences treated with standard vs. palliative treatments, and the surgery- and SBRT-related mortalities. Conclusions: The role of SBRT in the medically operable patient is yet to be defined. Our model indicates that SBRT may offer comparable overall survival and quality-adjusted life expectancy as compared with surgical resection. Well-powered prospective studies comparing surgery vs. SBRT in early-stage lung cancer are warranted to further investigate the relative survival, quality of life, and cost characteristics of both treatment paradigms.
Lim, Eun-A; Lee, Haeyoung; Bae, Eunmi; Lim, Jaeok; Shin, Young Kee; Choi, Sang-Eun
2016-01-01
As targeted therapy becomes increasingly important, diagnostic techniques for identifying targeted biomarkers have also become an emerging issue. The study aims to evaluate the cost-effectiveness of treating patients as guided by epidermal growth factor receptor (EGFR) mutation status compared with a no-testing strategy, which is the current clinical practice in South Korea. A cost-utility analysis was conducted to compare an EGFR mutation testing strategy with a no-testing strategy from the Korean healthcare payer's perspective. The study population consisted of patients with stage 3b and 4 lung adenocarcinoma. A decision tree model was employed to select the appropriate treatment regimen according to the results of EGFR mutation testing, and a Markov model was constructed to simulate disease progression of advanced non-small cell lung cancer. The length of a Markov cycle was one month, and the time horizon was five years (60 cycles). In the base-case analysis, the testing strategy was the dominant option. Quality-adjusted life-years gained (QALYs) were 0.556 and 0.635, and total costs were $23,952 USD and $23,334 USD in the no-testing and testing strategies, respectively. The sensitivity analyses showed overall robust results. The incremental cost-effectiveness ratios (ICERs) increased when the number of patients to be treated with erlotinib increased, due to the high cost of erlotinib. Treating advanced adenocarcinoma based on EGFR mutation status has beneficial effects and saves costs compared to the no-testing strategy in South Korea. However, the cost-effectiveness of EGFR mutation testing was heavily affected by the cost-effectiveness of the targeted therapy.
Chauvin, C; Clement, C; Bruneau, M; Pommeret, D
2007-07-16
This article describes the use of Markov chains to explore the time-patterns of antimicrobial exposure in broiler poultry. The transition in antimicrobial exposure status (exposed/not exposed to an antimicrobial, with a distinction between exposures to the different antimicrobial classes) in extensive data collected in broiler chicken flocks from November 2003 onwards, was investigated. All Markov chains were first-order chains. Mortality rate, geographical location and slaughter semester were sources of heterogeneity between transition matrices. Transitions towards a 'no antimicrobial' exposure state were highly predominant, whatever the initial state. From a 'no antimicrobial' exposure state, the transition to beta-lactams was predominant among transitions to an antimicrobial exposure state. Transitions between antimicrobial classes were rare and variable. Switches between antimicrobial classes and repeats of a particular class were both observed. Application of Markov chains analysis to the database of the nation-wide antimicrobial resistance monitoring programme pointed out that transition probabilities between antimicrobial exposure states increased with the number of resistances in Escherichia coli strains.
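The first-order analysis described reduces to counting transitions between exposure states across consecutive periods and row-normalizing. A sketch with a made-up state coding and toy flock sequences, not the monitoring-programme data:

```python
import numpy as np

states = ["none", "beta-lactam", "tetracycline"]   # assumed state coding
# One exposure sequence per flock, one entry per period (toy data):
flocks = [[0, 1, 0, 0, 2, 0],
          [0, 0, 1, 0, 0, 0],
          [2, 0, 0, 1, 0, 0]]

counts = np.zeros((3, 3))
for seq in flocks:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1                          # tally observed transitions

P = counts / counts.sum(axis=1, keepdims=True)     # row-normalized MLE
print(P)   # estimated first-order transition matrix between exposure states
```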
Li, Yan; Dong, Zigang
2016-06-27
Recently, the Markov state model has been applied for kinetic analysis of molecular dynamics simulations. However, discretization of the conformational space remains a primary challenge in model building, and it is not clear how the space decomposition by distinct clustering strategies exerts influence on the model output. In this work, different clustering algorithms are employed to partition the conformational space sampled in opening and closing of fatty acid binding protein 4 as well as inactivation and activation of the epidermal growth factor receptor. Various classifications are achieved, and Markov models are set up accordingly. On the basis of the models, the total net flux and transition rate are calculated between two distinct states. Our results indicate that geometric and kinetic clustering perform equally well. The construction and outcome of Markov models are heavily dependent on the data traits. Compared to other methods, a combination of Bayesian and hierarchical clustering is feasible in identification of metastable states.
Detection of digital FSK using a phase-locked loop
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Simon, M. K.
1975-01-01
A theory is presented for the design of a digital FSK receiver which employs a phase-locked loop to set up the desired matched filter as the arriving signal frequency switches. The developed mathematical model makes it possible to establish the error probability performance of systems which employ a class of digital FM modulations. The noise mechanism which accounts for decision errors is modeled on the basis of the Meyr distribution and renewal Markov process theory.
Multi-level Operational C2 Holonic Reference Architecture Modeling for MHQ with MOC
2009-06-01
x), x(k), uj(k)) is defined as the task success probability, based on the asset allocation and task execution activities at the tactical level... on outcomes of asset-task allocation at the tactical level. We employ a semi-Markov decision process (SMDP) approach to decide on missions to be... AGA) graph for addressing the mission monitoring/planning issues related to task sequencing and asset allocation at the OLC-TLC layer (coordination
A Markov Decision Process Model for the Optimal Dispatch of Military Medical Evacuation Assets
2014-03-27
further background on MEDEVAC and provides a review of pertinent literature. Section 3 provides a description of the problem for which we develop our... best medical evacuation system possible, for those who follow in your footsteps. Special thanks goes to my wife and two children for their... order to generate the computational results necessary to make this paper a success. Lastly, I would like to thank the US Army Medical Evacuation
Naval Research Logistics Quarterly. Volume 28, Number 4,
1981-12-01
Fan [3] and an observation by Meijerink and van der Vorst [18] guarantee that after pivoting on any diagonal element of a diagonally dominant M-matrix... Science, 3, 255-269 (1957). [18] Meijerink, J. and H. Van der Vorst, "An Iterative Solution Method for Linear Systems of which the Coefficient Matrix Is a... Hee, K., A. Hordijk and J. Van der Wal, "Successive Approximations for Convergent Dynamic Programming," in Markov Decision Theory, H. Tijms and J
Sterling, Timothy R; Lehmann, Harold P; Frieden, Thomas R
2003-03-15
This study sought to determine the impact of the World Health Organization's directly observed treatment strategy (DOTS) compared with that of DOTS-plus on tuberculosis deaths, mainly in the developing world. Decision analysis with Monte Carlo simulation of a Markov decision tree. People with smear-positive pulmonary tuberculosis. Analyses modelled different levels of programme effectiveness of DOTS and DOTS-plus, and high (10%) and intermediate (3%) proportions of primary multidrug-resistant tuberculosis, while accounting for exogenous reinfection. The cumulative number of tuberculosis deaths per 100 000 population over 10 years. The model predicted that under DOTS, 276 people would die from tuberculosis (24 multidrug resistant and 252 not multidrug resistant) over 10 years under optimal implementation in an area with 3% primary multidrug-resistant tuberculosis. Optimal implementation of DOTS-plus would result in four (1.5%) fewer deaths. If implementation of DOTS-plus were to result in a decrease of just 5% in the effectiveness of DOTS, 16% more people would die with tuberculosis than under DOTS alone. In an area with 10% primary multidrug-resistant tuberculosis, 10% fewer deaths would occur under optimal DOTS-plus than under optimal DOTS, but 16% more deaths would occur if implementation of DOTS-plus were to result in a 5% decrease in the effectiveness of DOTS. Conclusions: Under optimal implementation, fewer tuberculosis deaths would occur under DOTS-plus than under DOTS. If, however, implementation of DOTS-plus were associated with even minimal decreases in the effectiveness of treatment, substantially more patients would die than under DOTS.
Phisalprapa, Pochamana; Supakankunti, Siripen; Charatcharoenwitthaya, Phunchai; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Washirasaksiri, Chaiwat; Srivanichakorn, Weerachai; Chaiyakunapruk, Nathorn
2017-04-01
Nonalcoholic fatty liver disease (NAFLD) can be diagnosed early by noninvasive ultrasonography; however, the cost-effectiveness of ultrasonography screening with an intensive weight reduction program in metabolic syndrome patients is not clear. This study aims to estimate the economic and clinical outcomes of ultrasonography screening in Thailand. The cost-effectiveness analysis used decision tree and Markov models to estimate lifetime costs and health benefits from a societal perspective, based on a cohort of 509 metabolic syndrome patients in Thailand. Data were obtained from published literature and a Thai database. Results were reported as incremental cost-effectiveness ratios (ICERs) in 2014 US dollars (USD) per quality-adjusted life year (QALY) gained, with a discount rate of 3%. Sensitivity analyses were performed to assess the influence of parameter uncertainty on the results. The ICER of ultrasonography screening of 50-year-old metabolic syndrome patients with an intensive weight reduction program was 958 USD/QALY gained when compared with no screening. The probability of being cost-effective was 67% using the willingness-to-pay threshold in Thailand (4848 USD/QALY gained). Screening before 45 years of age was cost-saving, while screening at 45 to 64 years was cost-effective. For patients with metabolic syndrome, ultrasonography screening for NAFLD with an intensive weight reduction program is a cost-effective program in Thailand. This study can be used as part of evidence-informed decision making. The findings could contribute to changes in NAFLD diagnosis practice in settings where economic evidence is used as part of the decision-making process. Furthermore, the study design, model structure, and input parameters could also be used for future research addressing similar questions.
Gomez, Jorge Alberto; Lepetic, Alejandro; Demarteau, Nadia
2014-11-26
In Chile, significant reductions in cervical cancer incidence and mortality have been observed due to implementation of a well-organized screening program. However, it has been suggested that the inclusion of human papillomavirus (HPV) vaccination for young adolescent women may be the best prospect to further reduce the burden of cervical cancer. This cost-effectiveness study comparing two available HPV vaccines in Chile was performed to support decision making on the implementation of universal HPV vaccination. The present analysis used an existing static Markov model to assess the effect of screening and vaccination. The analysis includes the epidemiology of low-risk HPV types, allowing for the comparison between the two vaccines (the HPV-16/18 AS04-adjuvanted vaccine and the HPV-6/11/16/18 vaccine), the latest cross-protection data on HPV vaccines, treatment costs for cervical cancer, vaccine costs, and 6% discounting per the health economic guideline for Chile. The projected incremental cost-utility ratio (ICUR) and incremental cost-effectiveness ratio (ICER) for the HPV-16/18 AS04-adjuvanted vaccine were 116 United States (US) dollars per quality-adjusted life year (QALY) gained and 147 US dollars per life-year (LY) saved, while the projected ICUR/ICER for the HPV-6/11/16/18 vaccine was 541 US dollars per QALY gained or 726 US dollars per LY saved. Introduction of either HPV vaccine to the present cervical cancer prevention program of Chile is estimated to be highly cost-effective (below 1x gross domestic product [GDP] per capita, 14,278 US dollars). In Chile, the addition of the HPV-16/18 AS04-adjuvanted vaccine to the existing screening program dominated the addition of the HPV-6/11/16/18 vaccine. In the probabilistic sensitivity analysis, results show that the HPV-16/18 AS04-adjuvanted vaccine is expected to be dominant and cost-saving in 69.3% and 77.6% of the replicates, respectively. The findings indicate that the addition of either HPV vaccine to the current cervical screening program of Chile will be advantageous. However, this cost-effectiveness model shows that the HPV-16/18 AS04-adjuvanted vaccine dominated the HPV-6/11/16/18 vaccine. Beyond the context of Chile, the data from this modelling exercise may support healthcare policy and decision making pertaining to the introduction of HPV vaccination in similar resource settings in the region.
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault-tolerant computer systems.
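The expected steady-state reward rate mentioned above is just the stationary distribution weighted by the reward vector. A minimal sketch for a two-state availability model; the rates and rewards are illustrative, not from the paper:

```python
import numpy as np

lam, mu = 1e-3, 1e-1               # failure and repair rates, assumed
Q = np.array([[-lam, lam],         # CTMC generator: up <-> down
              [mu, -mu]])
reward = np.array([1.0, 0.0])      # up state earns rate 1, down state 0

# Solve pi Q = 0 with sum(pi) = 1 by appending the normalization constraint
# to the (singular) balance equations.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi @ reward)                 # expected steady-state reward rate
```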
Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E
2011-01-01
Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that each assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundreds. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
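The EVPI quantity at the center of this paper has a simple Monte Carlo form: the mean of the per-draw best net benefit minus the best mean net benefit. A sketch with placeholder net-benefit draws standing in for the posterior simulations:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# Columns = decisions (treatments); rows = posterior parameter draws (toy).
nb = np.column_stack([rng.normal(10_000, 3_000, n),    # treatment A
                      rng.normal(10_500, 4_000, n)])   # treatment B

# EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)]
evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"EVPI per patient: {evpi:.0f}")
```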
Impact of DOTS expansion on tuberculosis related outcomes and costs in Haiti.
Jacquet, Vary; Morose, Willy; Schwartzman, Kevin; Oxlade, Olivia; Barr, Graham; Grimard, Franque; Menzies, Dick
2006-08-15
Implementation of the World Health Organization's DOTS strategy (Directly Observed Treatment Short-course therapy) can result in significant reduction in tuberculosis incidence. We estimated potential costs and benefits of DOTS expansion in Haiti from the government, and societal perspectives. Using decision analysis incorporating multiple Markov processes (Markov modelling), we compared expected tuberculosis morbidity, mortality and costs in Haiti with DOTS expansion to reach all of the country, and achieve WHO benchmarks, or if the current situation did not change. Probabilities of tuberculosis related outcomes were derived from the published literature. Government health expenditures, patient and family costs were measured in direct surveys in Haiti and expressed in 2003 US$. Starting in 2003, DOTS expansion in Haiti is anticipated to cost $4.2 million and result in 63,080 fewer tuberculosis cases, 53,120 fewer tuberculosis deaths, and net societal savings of $131 million, over 20 years. Current government spending for tuberculosis is high, relative to the per capita income, and would be only slightly lower with DOTS. Societal savings would begin within 4 years, and would be substantial in all scenarios considered, including higher HIV seroprevalence or drug resistance, unchanged incidence following DOTS expansion, or doubling of initial and ongoing costs for DOTS expansion. A modest investment for DOTS expansion in Haiti would provide considerable humanitarian benefit by reducing tuberculosis-related morbidity, mortality and costs for patients and their families. These benefits, together with projected minimal Haitian government savings, argue strongly for donor support for DOTS expansion.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Smith, Wade P; Kim, Minsun; Holdsworth, Clay; Liao, Jay; Phillips, Mark H
2016-03-11
To build a new treatment planning approach that extends beyond radiation transport and IMRT optimization by modeling the radiation therapy process and prognostic indicators for more outcome-focused decision making. An in-house treatment planning system was modified to include multiobjective inverse planning, a probabilistic outcome model, and a multi-attribute decision aid. A genetic algorithm generated a set of plans embodying trade-offs between the separate objectives. An influence diagram network modeled the radiation therapy process of prostate cancer using expert opinion, results of clinical trials, and published research. A Markov model calculated a quality adjusted life expectancy (QALE), which was the endpoint for ranking plans. The Multiobjective Evolutionary Algorithm (MOEA) was designed to produce an approximation of the Pareto Front representing optimal tradeoffs for IMRT plans. Prognostic information from the dosimetrics of the plans, and from patient-specific clinical variables were combined by the influence diagram. QALEs were calculated for each plan for each set of patient characteristics. Sensitivity analyses were conducted to explore changes in outcomes for variations in patient characteristics and dosimetric variables. The model calculated life expectancies that were in agreement with an independent clinical study. The radiation therapy model proposed has integrated a number of different physical, biological and clinical models into a more comprehensive model. It illustrates a number of the critical aspects of treatment planning that can be improved and represents a more detailed description of the therapy process. A Markov model was implemented to provide a stronger connection between dosimetric variables and clinical outcomes and could provide a practical, quantitative method for making difficult clinical decisions.
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms consider only single-pixel information from the different images and do not consider the spatial dependencies among image pixels, so the change detection results are susceptible to image noise and the detection effect is not ideal. Markov Random Field (MRF) models can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, different categories of regions have a high degree of similarity at the junctions between them, and it is difficult to clearly distinguish the labels of the pixels near the boundaries of the judgment area. In the traditional MRF method, each pixel is given a hard label during iteration, so MRF makes hard decisions in the process, which causes a loss of information. This paper applies the combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.
Multi-Agent Patrolling under Uncertainty and Threats.
Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D
2015-01-01
We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm by up to 44% for 10 agents and by 21% for 15 agents in large domains.
Allocating conservation resources between areas where persistence of a species is uncertain.
McDonald-Madden, Eve; Chadès, Iadine; McCarthy, Michael A; Linkie, Matthew; Possingham, Hugh P
2011-04-01
Research on the allocation of resources to manage threatened species typically assumes that the state of the system is completely observable; for example whether a species is present or not. The majority of this research has converged on modeling problems as Markov decision processes (MDP), which give an optimal strategy driven by the current state of the system being managed. However, the presence of threatened species in an area can be uncertain. Typically, resource allocation among multiple conservation areas has been based on the biggest expected benefit (return on investment) but fails to incorporate the risk of imperfect detection. We provide the first decision-making framework for confronting the trade-off between information and return on investment, and we illustrate the approach for populations of the Sumatran tiger (Panthera tigris sumatrae) in Kerinci Seblat National Park. The problem is posed as a partially observable Markov decision process (POMDP), which extends MDP to incorporate incomplete detection and allows decisions based on our confidence in particular states. POMDP has previously been used for making optimal management decisions for a single population of a threatened species. We extend this work by investigating two populations, enabling us to explore the importance of variation in expected return on investment between populations on how we should act. We compare the performance of optimal strategies derived assuming complete (MDP) and incomplete (POMDP) observability. We find that uncertainty about the presence of a species affects how we should act. Further, we show that assuming full knowledge of a species presence will deliver poorer strategic outcomes than if uncertainty about a species status is explicitly considered. MDP solutions perform up to 90% worse than the POMDP for highly cryptic species, and they only converge in performance when we are certain of observing the species during management: an unlikely scenario for many threatened species. This study illustrates an approach to allocating limited resources to threatened species where the conservation status of the species in different areas is uncertain. The results highlight the importance of including partial observability in future models of optimal species management when the species of concern is cryptic in nature.
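The gap between MDP and POMDP strategies in this setting comes from tracking a belief about presence rather than assuming it is known. A sketch of the Bayes update after an unsuccessful survey, under assumed detection and persistence probabilities (and assuming no false positives); none of these numbers come from the tiger study:

```python
def belief_after_survey(b_present, detected,
                        p_detect=0.4,       # P(detect | present), assumed
                        p_persist=0.95):    # annual persistence, assumed
    if detected:
        b = 1.0                             # no false positives assumed
    else:
        # P(present | no detection) via Bayes' rule
        num = b_present * (1 - p_detect)
        b = num / (num + (1 - b_present))
    return b * p_persist                    # then decay one time step

b = 0.8                                     # initial belief the species persists
for year in range(5):
    b = belief_after_survey(b, detected=False)
    print(year, round(b, 3))                # belief erodes with each failed survey
```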
Exact goodness-of-fit tests for Markov chains.
Besag, J; Mondal, D
2013-06-01
Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
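As a simplified stand-in for the exact conditional tests described (which condition on the sufficient transition counts), the sketch below runs a parametric bootstrap from a fitted first-order chain with a freely chosen test statistic; the data and statistic are toy choices, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.integers(0, 2, 200)               # stand-in binary sequence

def fit_P(seq, k=2):
    C = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True) # row-normalized MLE

def stat(seq):                              # free choice of statistic: run count
    return 1 + int((np.diff(seq) != 0).sum())

P = fit_P(obs)
null = []
for _ in range(999):                        # simulate chains from the fitted model
    s = [obs[0]]
    for _ in range(len(obs) - 1):
        s.append(rng.choice(2, p=P[s[-1]]))
    null.append(stat(np.array(s)))

# One-sided p-value: fewer runs than expected suggests extra dependence.
p_value = (1 + sum(t <= stat(obs) for t in null)) / 1000
print(p_value)
```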
Markov switching of the electricity supply curve and power prices dynamics
NASA Astrophysics Data System (ADS)
Mari, Carlo; Cananà, Lucianna
2012-02-01
Regime-switching models seem to capture well the main features of power price behavior in deregulated markets. In a recent paper, we proposed an equilibrium methodology to derive electricity price dynamics from the interplay between supply and demand in a stochastic environment. In particular, assuming that the supply function is described by a power law whose exponent is a two-state strictly positive Markov process, we derived a regime-switching dynamics of power prices in which regime switches are induced by transitions between Markov states. In this paper, we provide a dynamical model to describe the random behavior of power prices in which the only non-Brownian component of the motion is introduced endogenously by Markov transitions in the exponent of the electricity supply curve. In this context, the stochastic process driving the switching mechanism becomes observable, and we show that the non-Brownian component of the dynamics induced by transitions between Markov states is responsible for jumps and spikes of very high magnitude. The empirical analysis performed on three Australian markets confirms that the proposed approach is quite flexible and capable of incorporating the main features of power price time series, reproducing the first four moments of the empirical log-return distributions in a satisfactory way.
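As a toy illustration of the mechanism described above, the following Python sketch simulates a price path P = D^alpha, where log-demand follows a Brownian motion and the exponent alpha switches between a base and a spike regime via a two-state Markov process. All parameter values (exponents, switching rates, demand volatility) are invented for illustration and are not estimated from the Australian data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: the supply-curve exponent alpha switches between
# a base regime and a short-lived spike regime.
alpha = np.array([1.2, 4.0])        # exponents in the two regimes (assumed)
leave_rate = np.array([0.5, 6.0])   # rate of leaving each regime (arbitrary units)
dt, n = 0.01, 20000

state, log_d = 0, 0.0               # regime and log-demand (Brownian motion)
prices = np.empty(n)
for t in range(n):
    if rng.random() < leave_rate[state] * dt:   # first-order switching step
        state = 1 - state
    log_d += 0.2 * np.sqrt(dt) * rng.normal()   # assumed demand volatility
    prices[t] = np.exp(alpha[state] * log_d)    # price = demand**alpha

print(f"max/mean price ratio: {prices.max() / prices.mean():.1f}")
```

Even with a purely Brownian demand, visits to the high-exponent regime produce the jump- and spike-like excursions the abstract describes.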
Hierarchical structure for audio-video based semantic classification of sports video sequences
NASA Astrophysics Data System (ADS)
Kolekar, M. H.; Sengupta, S.
2005-07-01
A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, classification of cricket events is very challenging and yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP), with color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sport. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
Exact and Approximate Probabilistic Symbolic Execution
NASA Technical Reports Server (NTRS)
Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem
2014-01-01
Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.
Cost-effectiveness of alternative outpatient pelvic inflammatory disease treatment strategies.
Smith, Kenneth J; Ness, Roberta B; Wiesenfeld, Harold C; Roberts, Mark S
2007-12-01
Effectiveness differences between outpatient pelvic inflammatory disease (PID) treatment regimens are uncertain, but significant differences in cost exist. To examine the influence of antibiotic costs on PID therapy cost-effectiveness, the authors used a Markov decision model to estimate the cost-effectiveness of recommended antibiotic regimens for PID and performed a value-of-information analysis to guide future research. Antibiotic costs vary between USD 43 and USD 188. Pairwise comparisons, assuming a hypothetical 1% relative risk reduction in PID complications with the more expensive regimen, showed economically reasonable cost-effectiveness ratios. Value-of-information and sample size considerations support further investigation to detect 10% PID complication rate differences between regimens with cost differences of USD 50 or more. Within the cost range of recommended regimens, use of more expensive antibiotics would be economically reasonable if relatively small decreases in PID complication rates exist. Further investigation of effectiveness differences between regimens is needed.
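To make the modeling approach concrete, here is a minimal Markov cohort sketch of the kind of pairwise comparison described: a hypothetical 1% relative risk reduction in complications purchased at the USD 145 antibiotic price gap (USD 188 - USD 43). The state structure, transition probabilities, utilities, and complication cost below are invented placeholders, not the paper's inputs.

```python
import numpy as np

# Generic three-state Markov cohort sketch (states: well, complication, dead).
# All transition probabilities, utilities, and costs are invented.
P = np.array([[0.980, 0.015, 0.005],
              [0.000, 0.990, 0.010],
              [0.000, 0.000, 1.000]])
utility = np.array([1.0, 0.8, 0.0])    # annual QALY weight per state
cost = np.array([0.0, 500.0, 0.0])     # per-cycle cost while in complication

def run_cohort(p_complication, cycles=520):        # weekly cycles, 10 years
    Pm = P.copy()
    Pm[0, 1] = p_complication
    Pm[0, 0] = 1.0 - Pm[0, 1] - Pm[0, 2]
    dist, qalys, costs = np.array([1.0, 0.0, 0.0]), 0.0, 0.0
    for _ in range(cycles):
        dist = dist @ Pm
        qalys += dist @ utility / 52               # annual weights, weekly cycles
        costs += dist @ cost
    return qalys, costs

q_lo, c_lo = run_cohort(0.015)                     # cheaper regimen
q_hi, c_hi = run_cohort(0.015 * 0.99)              # 1% relative risk reduction
extra_drug_cost = 188 - 43                         # regimens' price gap (USD)
icer = (extra_drug_cost + c_hi - c_lo) / (q_hi - q_lo)
print(f"ICER ~ USD {icer:,.0f} per QALY gained")
```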
Feingold, B; Webber, S A; Bryce, C L; Park, S Y; Tomko, H E; Comer, D M; Mahle, W T; Smith, K J
2015-02-01
Allosensitized children who require a negative prospective crossmatch have a high risk of death awaiting heart transplantation. Accepting the first suitable organ offer, regardless of the possibility of a positive crossmatch, would improve waitlist outcomes but it is unclear whether it would result in improved survival at all times after listing, including posttransplant. We created a Markov decision model to compare survival after listing with a requirement for a negative prospective donor cell crossmatch (WAIT) versus acceptance of the first suitable offer (TAKE). Model parameters were derived from registry data on status 1A (highest urgency) pediatric heart transplant listings. We assumed no possibility of a positive crossmatch in the WAIT strategy and a base-case probability of a positive crossmatch in the TAKE strategy of 47%, as estimated from cohort data. Under base-case assumptions, TAKE showed an incremental survival benefit of 1.4 years over WAIT. In multiple sensitivity analyses, including variation of the probability of a positive crossmatch from 10% to 100%, TAKE was consistently favored. While model input data were less well suited to comparing survival when awaiting transplantation across a negative virtual crossmatch, our analysis suggests that taking the first suitable organ offer under these circumstances is also favored. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.
The management of aldosterone-producing adrenal adenomas--does adrenalectomy increase costs?
Reimel, Bethann; Zanocco, Kyle; Russo, Mark J; Zarnegar, Rasa; Clark, Orlo H; Allendorf, John D; Chabot, John A; Duh, Quan-Yang; Lee, James A; Sturgeon, Cord
2010-12-01
Most experts agree that primary hyperaldosteronism (PHA) caused by an aldosterone-producing adenoma (APA) is best treated by adrenalectomy. From a public health standpoint, the cost of treatment must be considered. We sought to compare the current guideline-based (surgical) strategy with universal pharmacologic management to determine the optimal strategy from a cost perspective. A decision analysis was performed using a Markov state transition model comparing the strategies for PHA treatment. Pharmacologic management for all patients with PHA was compared with a strategy of screening for and resecting an aldosterone-producing adenoma. Success rates were determined for treatment outcomes based on a literature review. Medicare reimbursement rates were calculated to estimate costs from a third-party payer perspective. Screening for and resecting APAs was the least costly strategy in this model. For a reference patient with 41 remaining years of life, the discounted expected cost of the surgical strategy was $27,821. The discounted expected cost of the medical strategy was $34,691. The cost of adrenalectomy would have to increase by 156% to $22,525 from $8,784 for universal pharmacologic therapy to be less costly. Screening for APA is more costly if fewer than 9.6% of PHA patients have resectable APA. Resection of APAs was the least costly treatment strategy in this decision analysis model. Copyright © 2010 Mosby, Inc. All rights reserved.
Dynamic Routing of Aircraft in the Presence of Adverse Weather Using a POMDP Framework
NASA Technical Reports Server (NTRS)
Balaban, Edward; Roychoudhury, Indranil; Spirkovska, Lilly; Sankararaman, Shankar; Kulkarni, Chetan; Arnon, Tomer
2017-01-01
Each year weather-related airline delays result in hundreds of millions of dollars in additional fuel burn, maintenance, and lost revenue, not to mention passenger inconvenience. The current approaches for aircraft route planning in the presence of adverse weather still rely mainly on deterministic methods. In contrast, this work deals with the problem using a Partially Observable Markov Decision Process (POMDP) framework, which allows for reasoning over uncertainty (including uncertainty in weather evolution over time) and results in solutions that are more robust to disruptions. The POMDP-based decision support system is demonstrated on several scenarios involving convective weather cells and is benchmarked against a deterministic planning system with functionality similar to those currently in use or under development.
An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks
Salim, Shelly; Moh, Sangman
2016-01-01
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead.
NASA Astrophysics Data System (ADS)
Glaubius, J.; Maerker, M.
2016-12-01
Anthropogenic landforms, such as mines and agricultural terraces, are impacted by both geomorphic and social processes at varying intensities through time. In the case of agricultural terraces, decisions regarding terrace maintenance are intertwined with land use, such as when terraced fields are abandoned. Furthermore, terrace maintenance and land use decisions, either jointly or separately, may be in response to geomorphic processes, as well as geomorphic feedbacks. Previous studies of these complex geomorphic systems considered agricultural terraces as static features or analyzed only the geomorphic response to landowner decisions. Such research is appropriate for short-term or binary landscape scenarios (e.g. the impact of maintained vs. abandoned terraces), but the complexities inherent in these socio-natural systems require an approach that includes both social and geomorphic processes. This project analyzes feedbacks and emergent properties in terraced systems by implementing a coupled landscape evolution model (LEM) and agent-based model (ABM) using the Landlab and Mesa modeling libraries. In the ABM portion of the model, agricultural terraces are conceptualized using a life-cycle stages schema and implemented using Markov Decision Processes to simulate the changing geomorphic impact of terracing based on human decisions. This paper examines the applicability of this approach by comparing results from a LEM-only model against the coupled LEM-ABM model for a terraced region. Model results are compared by the quantity and spatial patterning of sediment transport. This approach fully captures long-term landscape evolution of terraced terrain that is otherwise lost when the life-cycle of terraces is not considered. The coupled LEM-ABM approach balances both environmental and social processes so that the socio-natural feedbacks in such anthropogenic systems can be disentangled.
Akama-Garren, Elliot H; Bianchi, Matt T; Leveroni, Catherine; Cole, Andrew J; Cash, Sydney S; Westover, M Brandon
2014-11-01
Anterior temporal lobectomy is curative for many patients with disabling medically refractory temporal lobe epilepsy, but carries an inherent risk of disabling verbal memory loss. Although accurate prediction of iatrogenic memory loss is becoming increasingly possible, it remains unclear how much weight such predictions should have in surgical decision making. Here we aim to create a framework that facilitates a systematic and integrated assessment of the relative risks and benefits of surgery versus medical management for patients with left temporal lobe epilepsy. We constructed a Markov decision model to evaluate the probabilistic outcomes and associated health utilities associated with choosing to undergo a left anterior temporal lobectomy versus continuing with medical management for patients with medically refractory left temporal lobe epilepsy. Three base-cases were considered, representing a spectrum of surgical candidates encountered in practice, with varying degrees of epilepsy-related disability and potential for decreased quality of life in response to post-surgical verbal memory deficits. For patients with moderately severe seizures and moderate risk of verbal memory loss, medical management was the preferred decision, with increased quality-adjusted life expectancy. However, the preferred choice was sensitive to clinically meaningful changes in several parameters, including quality of life impact of verbal memory decline, quality of life with seizures, mortality rate with medical management, probability of remission following surgery, and probability of remission with medical management. Our decision model suggests that for patients with left temporal lobe epilepsy, quantitative assessment of risk and benefit should guide recommendation of therapy. In particular, risk for and potential impact of verbal memory decline should be carefully weighed against the degree of disability conferred by continued seizures on a patient-by-patient basis. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
Moro, Marilyn; Goparaju, Balaji; Castillo, Jelina; Alameddine, Yvonne; Bianchi, Matt T
2016-01-01
Introduction Periodic limb movements of sleep (PLMS) may increase cardiovascular and cerebrovascular morbidity. However, most people with PLMS are either asymptomatic or have nonspecific symptoms. Therefore, predicting elevated PLMS in the absence of restless legs syndrome remains an important clinical challenge. Methods We undertook a retrospective analysis of demographic data, subjective symptoms, and objective polysomnography (PSG) findings in a clinical cohort with or without obstructive sleep apnea (OSA) from our laboratory (n=443 with OSA, n=209 without OSA). Correlation analysis and regression modeling were performed to determine predictors of periodic limb movement index (PLMI). Markov decision analysis with TreeAge software compared strategies to detect PLMS: in-laboratory PSG, at-home testing, and a clinical prediction tool based on the regression analysis. Results Elevated PLMI values (>15 per hour) were observed in >25% of patients. PLMI values in No-OSA patients correlated with age, sex, self-reported nocturnal leg jerks, restless legs syndrome symptoms, and hypertension. In OSA patients, PLMI correlated only with age and self-reported psychiatric medications. Regression models indicated only a modest predictive value of demographics, symptoms, and clinical history. Decision modeling suggests that at-home testing is favored as the pretest probability of PLMS increases, given plausible assumptions regarding PLMS morbidity, costs, and assumed benefits of pharmacological therapy. Conclusion Although elevated PLMI values were commonly observed, routinely acquired clinical information had only weak predictive utility. As the clinical importance of elevated PLMI continues to evolve, it is likely that objective measures such as PSG or at-home PLMS monitors will prove increasingly important for clinical and research endeavors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
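For intuition, a finite-state analogue of the linear programming formulation can be written over discounted occupation measures x(s, a). The sketch below uses randomly invented costs and transitions and an assumed-feasible constraint level; the paper itself treats Borel state and action spaces, where the argument is substantially more delicate.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Finite-state stand-in for the constrained discounted MDP: minimize expected
# discounted cost subject to a bound on a second discounted cost, as an LP
# over occupation measures x(s, a). All data below are invented.
nS, nA, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition kernel
c = rng.uniform(0, 1, (nS, nA))                 # cost to be minimized
d = rng.uniform(0, 1, (nS, nA))                 # constrained cost
mu = np.full(nS, 1 / nS)                        # initial distribution
D = 5.0                                         # constraint level (assumed feasible)

# Balance equations: sum_a x(s',a) - gamma * sum_{s,a} P[s,a,s'] x(s,a) = mu(s')
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (s == sp) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_ub=d.ravel()[None, :], b_ub=[D],
              A_eq=A_eq, b_eq=mu, bounds=(0, None))
assert res.success
x = res.x.reshape(nS, nA)
policy = x / x.sum(axis=1, keepdims=True)       # stationary randomized policy
print("optimal discounted cost:", round(res.fun, 3))
print("policy:\n", policy.round(3))
```

Normalizing the optimal occupation measure row-wise recovers the constrained optimal stationary policy, mirroring the existence result the abstract states.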
A Novel Methodology for Charging Station Deployment
NASA Astrophysics Data System (ADS)
Sun, Zhonghao; Zhao, Yunwei; He, Yueying; Li, Mingzhe
2018-02-01
Lack of charging stations has been a main obstacle to the promotion of electric vehicles. This paper studies deploying charging stations in traffic networks considering grid constraints to balance the charging demand and grid stability. First, we propose a statistical model for charging demand. Then we combine the charging demand model with power grid constraints and give the formulation of the charging station deployment problem. Finally, we propose a theoretical solution for the problem by transforming it to a Markov Decision Process.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
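As a rough illustration of architecture-based reliability with discrete-time Markov chains (not the paper's COSMIC-FFP procedure), the following sketch treats components as transient states with invented reliabilities and control-flow transfer probabilities, and computes the probability of reaching an absorbing "success" state.

```python
import numpy as np

# Hedged sketch: components 0..2 are transient DTMC states; execution either
# transfers between components (weighted by component reliability) or is
# absorbed in "failure". All numbers are invented.
R = np.array([0.999, 0.995, 0.990])      # component reliabilities (assumed)
W = np.array([[0.0, 0.7, 0.3],           # control-flow profile between components
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])          # component 2 hands off to "success"

n = len(R)
Q = R[:, None] * W                       # transfer happens only if the component works
t = np.zeros(n); t[2] = R[2]             # probability of entering "success"

N = np.linalg.inv(np.eye(n) - Q)         # fundamental matrix of the DTMC
reliability = (N @ t)[0]                 # P(reach success | start at component 0)
print(f"system reliability ~ {reliability:.4f}")
```

The fundamental matrix N accumulates all visit paths through the architecture, so changing a single component's reliability or the transfer profile immediately shows its effect on system-level reliability, which is the design-comparison use case the abstract describes.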
Puttarajappa, Chethan; Wijkstrom, Martin; Ganoza, Armando; Lopez, Roberto; Tevar, Amit
2018-01-01
Background Recent studies have reported a significant decrease in wound problems and hospital stay in obese patients undergoing renal transplantation by robotic-assisted minimally invasive techniques, with no difference in graft function. Objective Given the lack of cost-benefit studies on robotic-assisted versus open renal transplantation, the primary aim of our study is to develop a Markov model to analyze the cost-benefit of robotic surgery versus traditional open surgery in obese patients in need of a renal transplant. Methods Electronic searches will be conducted to identify studies comparing open renal transplantation versus robotic-assisted renal transplantation. Costs associated with the two surgical techniques will incorporate the expenses of the resources used for the operations. A decision analysis model will be developed to simulate a randomized controlled trial comparing three interventional arms: (1) continuation of renal replacement therapy for patients who are considered non-suitable candidates for renal transplantation due to obesity, (2) transplant recipients undergoing open transplant surgery, and (3) transplant patients undergoing robotic-assisted renal transplantation. TreeAge Pro 2017 R1 (TreeAge Software, Williamstown, MA, USA) will be used to create a Markov model, and microsimulation will be used to compare costs and benefits for the two competing surgical interventions. Results The model will simulate a randomized controlled trial of adult obese patients affected by end-stage renal disease undergoing renal transplantation. The absorbing state of the model will be patients' death from any cause. By choosing death as the absorbing state, we will be able to simulate the population of renal transplant recipients from the day of their randomization to transplant surgery or continuation on renal replacement therapy to their death, and to perform sensitivity analysis around patients' age at the time of randomization to determine whether age is a critical variable for cost-benefit or cost-effectiveness analysis comparing renal replacement therapy, robotic-assisted surgery, or open renal transplant surgery. Conclusions After running the model, one of the three competing strategies will emerge as the most cost-beneficial or cost-effective under common circumstances. To assess the robustness of the results of the model, a multivariable probabilistic sensitivity analysis will be performed by modifying the mean values and confidence intervals of key parameters, with the main intent of assessing whether the winning strategy is sensitive to rigorous and plausible variations of those values.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable, and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models; they not only lead to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that the use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
A hierarchical approach to reliability modeling of fault-tolerant systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Gossman, W. E.
1986-01-01
A methodology for performing fault-tolerant system reliability analysis is presented. The method decomposes a system into its subsystems, evaluates event rates derived from each subsystem's conditional state probability vector, and incorporates those results into a hierarchical Markov model of the system. This is done in a manner that addresses the failure sequence dependence associated with the system's redundancy management strategy. The method is derived for application to a specific system definition. Results are presented that compare the hierarchical model's unreliability prediction to that of a more complicated standard Markov model of the system. The results for the example given indicate that the hierarchical method predicts system unreliability to a desirable level of accuracy while achieving significant computational savings relative to a component-level Markov model of the system.
Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process.
Jäger, Mark; Kiel, Alexander; Herten, Dirk-Peter; Hamprecht, Fred A
2009-10-05
We present a photon-by-photon analysis framework for the evaluation of data from single-molecule fluorescence spectroscopy (SMFS) experiments using a Markov-modulated Poisson process (MMPP). An MMPP combines a discrete (and hidden) Markov process with an additional Poisson process reflecting the observation of individual photons. The algorithmic framework is used to automatically analyze the dynamics of the complex formation and dissociation of Cu2+ ions with the bidentate ligand 2,2'-bipyridine-4,4'-dicarboxylic acid (dcbpy) in aqueous media. The process of association and dissociation of Cu2+ ions is monitored with SMFS. The dcbpy-DNA conjugate can exist in two or more distinct states which influence the photon emission rates. The advantage of a photon-by-photon analysis is that no information is lost in preprocessing steps. Different model complexities are investigated in order to best describe the recorded data and to determine transition rates on a photon-by-photon basis. The main strength of the method is that it allows the detection of intermittent phenomena which are masked by binning and are difficult to find using correlation techniques when they are short-lived.
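A minimal simulator conveys the structure of an MMPP: a hidden two-state Markov process modulates the photon emission rate. The rates below are invented, not the values fitted in the Cu2+/dcbpy experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state Markov-modulated Poisson process: a hidden CTMC switches the
# photon emission rate between lam[0] and lam[1]. All rates are invented.
lam = np.array([5.0, 50.0])       # photon rates in the two hidden states
switch = np.array([0.5, 2.0])     # rate of leaving state 0 and state 1

def simulate_mmpp(T=100.0):
    t, state, photons = 0.0, 0, []
    while t < T:
        dwell = min(rng.exponential(1.0 / switch[state]), T - t)
        # photon arrivals are homogeneous Poisson within the dwell interval,
        # so given their count they are uniformly distributed over it
        n = rng.poisson(lam[state] * dwell)
        photons.extend(t + np.sort(rng.uniform(0, dwell, n)))
        t += dwell
        state = 1 - state
    return np.array(photons)

arrivals = simulate_mmpp()
print(len(arrivals), "photons; mean inter-photon time:",
      round(np.diff(arrivals).mean(), 4))
```

Inference for the MMPP inverts this generative picture: given only the photon arrival times, it recovers the hidden state sequence and the transition rates, photon by photon, without binning.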
Verry, H; Lord, S J; Martin, A; Gill, G; Lee, C K; Howard, K; Wetzig, N; Simes, J
2012-03-13
Sentinel lymph node biopsy (SLNB) is less invasive than axillary lymph node dissection (ALND) for staging early breast cancer, and has a lower risk of arm lymphoedema and similar rates of locoregional recurrence up to 8 years. This study estimates the longer-term effectiveness and cost-effectiveness of SLNB. A Markov decision model was developed to estimate the incremental quality-adjusted life years (QALYs) and costs of an SLNB-based staging and management strategy compared with ALND over 20 years' follow-up. The probability and quality-of-life weighting (utility) of outcomes were estimated from published data and population statistics. Costs were estimated from the perspective of the Australian health care system. The model was used to identify key factors affecting treatment decisions. The SLNB was more effective and less costly than the ALND over 20 years, with 8 QALYs gained and $883,000 saved per 1000 patients. The SLNB was less effective when: SLNB false negative (FN) rate >13%; 5-year incidence of axillary recurrence after an SLNB FN>19%; risk of an SLNB-positive result >48%; lymphoedema prevalence after ALND <14%; or lymphoedema utility decrement <0.012. The long-term advantage of SLNB over ALND was modest and sensitive to variations in key assumptions, indicating a need for reliable information on lymphoedema incidence and disutility following SLNB. In addition to awaiting longer-term trial data, risk models to better identify patients at high risk of axillary metastasis will be valuable to inform decision-making.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
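The relations discussed can be summarized compactly: for a discrete-time state-space model (A, B, C, D), the Markov parameters are h_0 = D and h_k = C A^(k-1) B for k >= 1, and they coincide with the samples of the unit-pulse response. The short sketch below, with arbitrary example matrices, verifies this numerically.

```python
import numpy as np

# Arbitrary example system (values chosen only for illustration)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def markov_parameters(A, B, C, D, n):
    h = [D]
    Ak_B = B
    for _ in range(1, n):
        h.append(C @ Ak_B)        # h_k = C A^(k-1) B
        Ak_B = A @ Ak_B
    return np.array(h).squeeze()

def pulse_response(A, B, C, D, n):
    # simulate the system with input u = (1, 0, 0, ...)
    x, y, u = np.zeros((A.shape[0], 1)), [], 1.0
    for _ in range(n):
        y.append((C @ x + D * u).item())
        x = A @ x + B * u
        u = 0.0
    return np.array(y)

print(np.allclose(markov_parameters(A, B, C, D, 10),
                  pulse_response(A, B, C, D, 10)))   # True
```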
The SURE Reliability Analysis Program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Engin, Ozge; Sayar, Mehmet; Erman, Burak
2009-01-13
Relative contributions of local and non-local interactions to the unfolded conformations of peptides are examined by using the rotational isomeric states model which is a Markov model based on pairwise interactions of torsion angles. The isomeric states of a residue are well described by the Ramachandran map of backbone torsion angles. The statistical weight matrices for the states are determined by molecular dynamics simulations applied to monopeptides and dipeptides. Conformational properties of tripeptides formed from combinations of alanine, valine, tyrosine and tryptophan are investigated based on the Markov model. Comparison with molecular dynamics simulation results on these tripeptides identifies the sequence-distant long-range interactions that are missing in the Markov model. These are essentially the hydrogen bond and hydrophobic interactions that are obtained between the first and the third residue of a tripeptide. A systematic correction is proposed for incorporating these long-range interactions into the rotational isomeric states model. Preliminary results suggest that the Markov assumption can be improved significantly by renormalizing the statistical weight matrices to include the effects of the long-range correlations.
The Markov process admits a consistent steady-state thermodynamic formalism
NASA Astrophysics Data System (ADS)
Peng, Liangrong; Zhu, Yi; Hong, Liu
2018-01-01
The search for a unified formulation for describing various non-equilibrium processes is a central task of modern non-equilibrium thermodynamics. In this paper, a novel steady-state thermodynamic formalism was established for general Markov processes described by the Chapman-Kolmogorov equation. Furthermore, corresponding formalisms of steady-state thermodynamics for the master equation and Fokker-Planck equation could be rigorously derived in mathematics. To be concrete, we proved that (1) in the limit of continuous time, the steady-state thermodynamic formalism for the Chapman-Kolmogorov equation fully agrees with that for the master equation; (2) a similar one-to-one correspondence could be established rigorously between the master equation and Fokker-Planck equation in the limit of large system size; (3) when a Markov process is restrained to one-step jump, the steady-state thermodynamic formalism for the Fokker-Planck equation with discrete state variables also goes to that for master equations, as the discretization step gets smaller and smaller. Our analysis indicated that general Markov processes admit a unified and self-consistent non-equilibrium steady-state thermodynamic formalism, regardless of underlying detailed models.
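As a concrete anchor for the master-equation case, the stationary distribution solving p Q = 0 (with Q a rate matrix whose rows sum to zero) can be computed directly; the rates below are invented.

```python
import numpy as np

# Invented 3-state rate matrix Q for a master equation dp/dt = p Q;
# each row sums to zero, as required of a generator.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 2.0,  1.0, -3.0]])

# Solve p Q = 0 together with the normalization sum(p) = 1 by least squares
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady state:", p.round(4), " residual:", (p @ Q).round(10))
```

Steady states of this kind are the starting point for the formalism in the abstract; the thermodynamic quantities are then built on the stationary probabilities and the probability currents between states.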
NASA Astrophysics Data System (ADS)
Abelard, Joshua Erold Robert
We begin by defining the concept of 'open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain 'boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation'. This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of 'inputs' and 'outputs'. Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another, so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a 'black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady-state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way, we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
Laramée, Philippe; Brodtkorb, Thor-Henrik; Rahhali, Nora; Knight, Chris; Barbosa, Carolina; François, Clément; Toumi, Mondher; Daeppen, Jean-Bernard; Rehm, Jürgen
2014-09-16
To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. Decision modelling using Markov chains compared costs and effects over 5 years. The analysis was from the perspective of the National Health Service (NHS) in England and Wales. The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20,000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100,000 patients compared to psychosocial support alone over the course of 5 years. Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941). Published by the BMJ Publishing Group Limited.
Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D
2017-03-01
How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept currently available treatment-despite expectations for future treatment improvement-for patients who have high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
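A minimal backward-induction sketch captures the optimal-stopping structure described: at each period the patient either locks in the best currently available therapy or waits, trading health decline against the chance of a better therapy arriving. All numbers below (efficacies, arrival probability, health decline, life-year scale) are invented illustrations, not the paper's HCV inputs.

```python
# Three-period "treat now vs wait" stopping problem, solved by backward
# induction. All parameter values are invented for illustration.
T = 3                                   # decision periods
effic = [0.60, 0.75, 0.90]              # efficacy of the best therapy that may exist at t
p_new = 0.5                             # chance the better therapy arrives next period
health = [1.00, 0.92, 0.85]             # QALY multiplier declines while waiting

def value(t, best):
    treat_now = 20.0 * best * health[t]     # 20.0 = assumed remaining life-years scale
    if t == T - 1:
        return treat_now, "treat"
    wait = (p_new * value(t + 1, max(best, effic[t + 1]))[0]
            + (1 - p_new) * value(t + 1, best)[0])
    return max(treat_now, wait), ("treat" if treat_now >= wait else "wait")

v, action = value(0, effic[0])
print(f"optimal first-period action: {action} (expected QALYs {v:.2f})")
```

Raising the rate of health decline or lowering the arrival probability tips the first-period decision toward treating immediately, which mirrors the paper's finding for older or higher-risk patients.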
Stochastic Calculus and Differential Equations for Physics and Finance
NASA Astrophysics Data System (ADS)
McCauley, Joseph L.
2013-02-01
1. Random variables and probability distributions; 2. Martingales, Markov, and nonstationarity; 3. Stochastic calculus; 4. Ito processes and Fokker-Planck equations; 5. Selfsimilar Ito processes; 6. Fractional Brownian motion; 7. Kolmogorov's PDEs and Chapman-Kolmogorov; 8. Non Markov Ito processes; 9. Black-Scholes, martingales, and Feynman-Kac; 10. Stochastic calculus with martingales; 11. Statistical physics and finance, a brief history of both; 12. Introduction to new financial economics; 13. Statistical ensembles and time series analysis; 14. Econometrics; 15. Semimartingales; References; Index.
Characterizing Quality Factor of Niobium Resonators Using a Markov Chain Monte Carlo Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basu Thakur, Ritoban; Tang, Qing Yang; McGeehan, Ryan
The next generation of radiation detectors in high-precision cosmology, astronomy, and particle-astrophysics experiments will rely heavily on superconducting microwave resonators and kinetic inductance devices. Understanding the physics of energy loss in these devices, in particular at low temperatures and powers, is vital. We present a comprehensive analysis framework, using Markov Chain Monte Carlo methods, to characterize loss due to two-level systems in concert with quasi-particle dynamics in thin-film Nb resonators in the GHz range.
Zipf exponent of trajectory distribution in the hidden Markov model
NASA Astrophysics Data System (ADS)
Bochkarev, V. V.; Lerner, E. Yu
2014-03-01
This paper is the first step in generalizing the previously obtained full classification of the asymptotic behavior of the probability of Markov chain trajectories to the case of hidden Markov models. The main goal is to study the power (Zipf) and non-power asymptotics of the frequency list of trajectories of hidden Markov models and to obtain explicit formulae for the exponent of the power asymptotics. We consider several simple classes of hidden Markov models. We prove that the asymptotics for a hidden Markov model and for the corresponding Markov chain can be essentially different.
When to stop managing or surveying cryptic threatened species.
Chadès, Iadine; McDonald-Madden, Eve; McCarthy, Michael A; Wintle, Brendan; Linkie, Matthew; Possingham, Hugh P
2008-09-16
Threatened species become increasingly difficult to detect as their populations decline. Managers of such cryptic threatened species face several dilemmas: if they are not sure the species is present, should they continue to manage for that species or invest the limited resources in surveying? We find optimal solutions to this problem using a Partially Observable Markov Decision Process and rules of thumb derived from an analytical approximation. We discover that managing a protected area for a cryptic threatened species can be optimal even if we are not sure the species is present. The more threatened and valuable the species is, relative to the costs of management, the more likely we are to manage this species without determining its continued persistence by using surveys. If a species remains unseen, our belief in the persistence of the species declines to a point where the optimal strategy is to shift resources from saving the species to surveying for it. Finally, when surveys lead to a sufficiently low belief that the species is extant, we surrender resources to other conservation actions. We illustrate our findings with a case study using parameters based on the critically endangered Sumatran tiger (Panthera tigris sumatrae), and we generate rules of thumb on how to allocate conservation effort for any cryptic species. Using Partially Observable Markov Decision Processes in conservation science, we determine the conditions under which it is better to abandon management for that species because our belief that it continues to exist is too low.
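The belief dynamics driving these thresholds can be sketched in a few lines: each year of management without a sighting lowers the posterior probability that the species is extant. The detection probability, persistence rate, and the two policy thresholds below are invented placeholders, not the values derived in the paper.

```python
# Bayesian belief update for a cryptic species under management with no
# sightings. All parameter values are hypothetical illustrations.
d = 0.3      # probability of detecting the species, given it is present
s = 0.95     # annual persistence probability while managed
b = 0.9      # initial belief the species is extant
survey_below, abandon_below = 0.5, 0.2   # hypothetical policy thresholds

for year in range(1, 16):
    # prior after one year of persistence, then condition on "not detected"
    b = s * b * (1 - d) / (s * b * (1 - d) + (1 - s * b))
    action = ("manage" if b >= survey_below
              else "survey" if b >= abandon_below else "abandon")
    print(f"year {year:2d}: belief extant = {b:.2f} -> {action}")
```

The monotone decline of the belief is what produces the manage-then-survey-then-abandon sequence described in the abstract; more detectable or more valuable species shift the thresholds at which each switch occurs.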
Cain-Nielsen, Anne H; Moriarty, James P; Stewart, Elizabeth A; Borah, Bijan J
2014-09-01
To evaluate the cost-effectiveness of the following three treatments of uterine fibroids in a population of premenopausal women who wish to preserve their uteri: myomectomy, magnetic resonance-guided focused ultrasound (MRgFUS) and uterine artery embolization (UAE). A decision analytic Markov model was constructed. Cost-effectiveness was calculated in terms of US$ per quality-adjusted life year (QALY) over 5 years. Two types of costs were calculated: direct costs only, and the sum of direct and indirect (productivity) costs. Women in the hypothetical cohort were assessed for treatment type eligibility, were treated based on eligibility, and experienced adequate or inadequate symptom relief. Additional treatment (myomectomy) occurred for inadequate symptom relief or recurrence. Sensitivity analysis was conducted to evaluate uncertainty in the model parameters. In the base case, myomectomy, MRgFUS and UAE had the following combinations of mean cost and mean QALYs, respectively: US$15,459, 3.957; US$15,274, 3.953; and US$18,653, 3.943. When incorporating productivity costs, MRgFUS incurred a mean cost of US$21,232; myomectomy US$22,599; and UAE US$22,819. Using probabilistic sensitivity analysis (PSA) and excluding productivity costs, myomectomy was cost effective at almost every decision threshold. Using PSA and incorporating productivity costs, myomectomy was cost effective at decision thresholds above US$105,000/QALY; MRgFUS was cost effective between US$30,000 and US$105,000/QALY; and UAE was cost effective below US$30,000/QALY. Myomectomy, MRgFUS, and UAE were similarly effective in terms of QALYs gained. Depending on assumptions about costs and willingness to pay for additional QALYs, all three treatments can be deemed cost effective in a 5-year time frame.
Phisalprapa, Pochamana; Supakankunti, Siripen; Charatcharoenwitthaya, Phunchai; Apisarnthanarak, Piyaporn; Charoensak, Aphinya; Washirasaksiri, Chaiwat; Srivanichakorn, Weerachai; Chaiyakunapruk, Nathorn
2017-01-01
Background: Nonalcoholic fatty liver disease (NAFLD) can be diagnosed early by noninvasive ultrasonography; however, the cost-effectiveness of ultrasonography screening with an intensive weight reduction program in metabolic syndrome patients is not clear. This study aims to estimate the economic and clinical outcomes of ultrasonography screening in Thailand. Methods: Cost-effectiveness analysis used decision tree and Markov models to estimate lifetime costs and health benefits from a societal perspective, based on a cohort of 509 metabolic syndrome patients in Thailand. Data were obtained from published literature and a Thai database. Results were reported as incremental cost-effectiveness ratios (ICERs) in 2014 US dollars (USD) per quality-adjusted life year (QALY) gained with a discount rate of 3%. Sensitivity analyses were performed to assess the influence of parameter uncertainty on the results. Results: The ICER of ultrasonography screening of 50-year-old metabolic syndrome patients with an intensive weight reduction program was 958 USD/QALY gained when compared with no screening. The probability of being cost-effective was 67% using the willingness-to-pay threshold in Thailand (4848 USD/QALY gained). Screening before 45 years was cost saving, while screening at 45 to 64 years was cost-effective. Conclusions: For patients with metabolic syndrome, ultrasonography screening for NAFLD with an intensive weight reduction program is a cost-effective program in Thailand. The study can be used as part of evidence-informed decision making. Translational Impacts: Findings could contribute to changes in NAFLD diagnosis practice in settings where economic evidence is used as part of the decision-making process. Furthermore, the study design, model structure, and input parameters could also be used for future research addressing similar questions.
A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.
Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C
2017-07-01
Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitiation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.
Clustered Numerical Data Analysis Using Markov Lie Monoid Based Networks
NASA Astrophysics Data System (ADS)
Johnson, Joseph
2016-03-01
We have designed and built an optimal numerical standardization algorithm that links numerical values with their associated units, error level, and defining metadata, thus supporting automated data exchange and new levels of artificial intelligence (AI). The software manages all dimensional and error analysis and computational tracing. Tables of entities versus properties of these generalized numbers (called "metanumbers") support a transformation of each table into a network among the entities and another network among their properties, where the network connection matrix is based upon a proximity metric between the two items. We previously proved that every network is isomorphic to the Lie algebra that generates continuous Markov transformations. We have also shown that the eigenvectors of these Markov matrices provide an agnostic clustering of the underlying patterns. We will present this methodology and show how our new work on conversion of scientific numerical data through this process can reveal underlying information clusters ordered by the eigenvalues. We will also show how the linking of clusters from different tables can be used to form a "supernet" of all numerical information supporting new initiatives in AI.
Markov Chain Monte Carlo in the Analysis of Single-Molecule Experimental Data
NASA Astrophysics Data System (ADS)
Kou, S. C.; Xie, X. Sunney; Liu, Jun S.
2003-11-01
This article provides a Bayesian analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single DNA hairpin molecule. The DNA hairpin's conformational change is initially modeled as a two-state Markov chain, which is not observable and has to be indirectly inferred. The Brownian diffusion of the single molecule, in addition to the hidden Markov structure, further complicates the matter. We show that the analytical form of the likelihood function can be obtained in the simplest case and a Metropolis-Hastings algorithm can be designed to sample from the posterior distribution of the parameters of interest and to compute desired estimates. To cope with the molecular diffusion process and the potentially oscillating energy barrier between the two states of the DNA hairpin, we introduce a data augmentation technique to handle both the Brownian diffusion and the hidden Ornstein-Uhlenbeck process associated with the fluctuating energy barrier, and design a more sophisticated Metropolis-type algorithm. Our method not only increases the estimation resolution severalfold but also proves to be successful for model discrimination.
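For orientation, a bare-bones random-walk Metropolis-Hastings sampler of the kind the authors extend is sketched below. The standard-normal log-posterior is a placeholder for the actual hidden-Markov likelihood of the DNA hairpin model, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    # Placeholder log-posterior (standard normal); the real application
    # would use the likelihood of the two-state hidden Markov model.
    return -0.5 * theta ** 2

def metropolis(n_samples=10000, step=1.0, theta0=0.0):
    samples = np.empty(n_samples)
    theta, lp = theta0, log_post(theta0)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

draws = metropolis()
print(draws.mean(), draws.std())   # roughly 0 and 1 for this toy target
```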
NASA Astrophysics Data System (ADS)
Baek, Sangkyu; Choi, Bong Dae
We investigate the power consumption of a mobile station with the power saving class of type 1 in IEEE 802.16e. We deal with the stochastic behavior of the mobile station during not only the sleep mode period but also the awake mode period, with both downlink and uplink traffic. We investigate the power saving class of type 1 by constructing the embedded Markov chain and the semi-Markov chain it generates. To see the effect of the sleep mode, we obtain the average power consumption of a mobile station and the mean queueing delay of a message. Numerical results show that a larger sleep window makes the power consumption of a mobile station smaller and the queueing delay of a downlink message longer.
Markov chain Monte Carlo estimation of quantum states
NASA Astrophysics Data System (ADS)
Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman
2009-03-01
We apply a Bayesian data analysis scheme known as the Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters including their statistical correlations with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
Markov Chain Monte Carlo Methods for Bayesian Data Analysis in Astronomy
NASA Astrophysics Data System (ADS)
Sharma, Sanjib
2017-08-01
Markov Chain Monte Carlo based Bayesian data analysis has now become the method of choice for analyzing and interpreting data in almost all disciplines of science. In astronomy, over the last decade, we have also seen a steady increase in the number of papers that employ Monte Carlo based Bayesian analysis. New, efficient Monte Carlo based methods are continuously being developed and explored. In this review, we first explain the basics of Bayesian theory and discuss how to set up data analysis problems within this framework. Next, we provide an overview of various Monte Carlo based methods for performing Bayesian data analysis. Finally, we discuss advanced ideas that enable us to tackle complex problems and thus hold great promise for the future. We also distribute downloadable computer software (available at https://github.com/sanjibs/bmcmc/ ) that implements some of the algorithms and examples discussed here.
A model of risk and mental state shifts during social interaction.
Hula, Andreas; Vilares, Iris; Lohrenz, Terry; Dayan, Peter; Montague, P Read
2018-02-01
Cooperation and competition between human players in repeated microeconomic games offer a window onto social phenomena such as the establishment, breakdown and repair of trust. However, although a suitable starting point for the quantitative analysis of such games exists, namely the Interactive Partially Observable Markov Decision Process (I-POMDP), computational considerations and structural limitations have limited its application, and left unmodelled critical features of behavior in a canonical trust task. Here, we provide the first analysis of two central phenomena: a form of social risk-aversion exhibited by the player who is in control of the interaction in the game; and irritation or anger, potentially exhibited by both players. Irritation arises when partners apparently defect, and it potentially causes a precipitate breakdown in cooperation. Failing to model one's partner's propensity for it leads to substantial economic inefficiency. We illustrate these behaviours using evidence drawn from the play of large cohorts of healthy volunteers and patients. We show that for both cohorts, a particular subtype of player is largely responsible for the breakdown of trust, a finding which sheds new light on borderline personality disorder.
Self-Directed Cooperative Planetary Rovers
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo; Morris, Robert (Technical Monitor)
2003-01-01
The project is concerned with the development of decision-theoretic techniques to optimize the scientific return of planetary rovers. Planetary rovers are small unmanned vehicles equipped with cameras and a variety of sensors used for scientific experiments. They must operate under tight constraints over such resources as operation time, power, storage capacity, and communication bandwidth. Moreover, the limited computational resources of the rover limit the complexity of on-line planning and scheduling. We have developed a comprehensive solution to this problem that involves high-level tools to describe a mission; a compiler that maps a mission description and additional probabilistic models of the components of the rover into a Markov decision problem; and algorithms for solving the rover control problem that are sensitive to the limited computational resources and high level of uncertainty in this domain.
Reinforcement Learning for Weakly-Coupled MDPs and an Application to Planetary Rover Control
NASA Technical Reports Server (NTRS)
Bernstein, Daniel S.; Zilberstein, Shlomo
2003-01-01
Weakly-coupled Markov decision processes can be decomposed into subprocesses that interact only through a small set of bottleneck states. We study a hierarchical reinforcement learning algorithm designed to take advantage of this particular type of decomposability. To test our algorithm, we use a decision-making problem faced by autonomous planetary rovers. In this problem, a Mars rover must decide which activities to perform and when to traverse between science sites in order to make the best use of its limited resources. In our experiments, the hierarchical algorithm performs better than Q-learning in the early stages of learning, but unlike Q-learning it converges to a suboptimal policy. This suggests that it may be advantageous to use the hierarchical algorithm when training time is limited.
Research on computer aided testing of pilot response to critical in-flight events
NASA Technical Reports Server (NTRS)
Giffin, W. C.; Rockwell, T. H.; Smith, P. J.
1984-01-01
Experiments on pilot decision making are described. The development of models of pilot decision making in critical in-flight events (CIFE) is emphasized. The following tests are reported on the development of: (1) a frame system representation describing how pilots use their knowledge in a fault diagnosis task; (2) assessment of script norms, distance measures, and Markov models developed from computer aided testing (CAT) data; and (3) performance ranking of subject data. It is demonstrated that interactive computer aided testing, whether by touch CRTs or personal computers, is a useful research and training device for measuring pilot information management in diagnosing system failures in simulated flight situations. Performance is dictated by knowledge of aircraft subsystems, initial pilot structuring of the failure symptoms, and efficient testing of plausible causal hypotheses.
Markov processes for the prediction of aircraft noise effects on sleep.
Basner, Mathias; Siebert, Uwe
2010-01-01
Background: Aircraft noise disturbs sleep and impairs recuperation, and authorities plan to expand Frankfurt Airport. Objective: To quantitatively assess the effects of a traffic curfew (11 PM to 5 AM) at Frankfurt Airport on sleep structure. Design: Experimental sleep study with polysomnography for 13 consecutive nights. Setting: Sleep laboratory. Subjects: 128 healthy subjects, mean age (SD) 38 (13) years, range 19 to 65, 59% female. Intervention: Exposure to aircraft noise via loudspeakers. Methods: A 6-state Markov state transition sleep model was used to simulate 3 noise scenarios with first-order Monte Carlo simulations: 1) 2005 traffic at Frankfurt Airport; 2) as simulation 1 but with flights between 11 PM and 5 AM cancelled; and 3) as simulation 2, with flights between 11 PM and 5 AM from simulation 1 rescheduled to periods before 11 PM and after 5 AM. Probabilities for transitions between sleep stages were estimated with autoregressive multinomial logistic regression. Results: Compared to a night without a curfew, the models indicate small improvements in sleep structure in nights with a curfew, even if all traffic is rescheduled to periods before and after the curfew period. For those who go to bed before 10:30 PM or after 1 AM, this benefit is likely to be offset by the expected increase of air traffic during late evening and early morning hours. Limitations: Limited ecologic validity due to the laboratory setting and subject sample. Conclusions: According to the decision analysis, it is unlikely that the proposed curfew at Frankfurt Airport would substantially benefit sleep structure. Extensions of the model could be used to evaluate or propose alternative air traffic regulation strategies for Frankfurt Airport.
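The simulation machinery is easy to picture: a first-order Monte Carlo walk over a Markov sleep-stage model. The sketch below uses a three-state toy chain (Wake, NREM, REM) in place of the paper's six-state model, with invented epoch-to-epoch transition probabilities; a noise scenario would be represented by raising the transition probabilities into Wake around flight events.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-state stand-in (Wake, NREM, REM) for the paper's 6-state model;
# all transition probabilities are illustrative.
states = ["Wake", "NREM", "REM"]
P = np.array([[0.80, 0.18, 0.02],
              [0.05, 0.85, 0.10],
              [0.10, 0.20, 0.70]])

def simulate_night(n_epochs=960, start=0):
    """First-order Monte Carlo walk over 30-s epochs (8 h = 960 epochs)."""
    path = np.empty(n_epochs, dtype=int)
    s = start
    for t in range(n_epochs):
        path[t] = s
        s = rng.choice(3, p=P[s])
    return path

night = simulate_night()
for i, name in enumerate(states):
    print(name, round((night == i).mean(), 3))   # fraction of night per stage
```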
Roze, S; Liens, D; Palmer, A; Berger, W; Tucker, D; Renaudin, C
2006-12-01
The aim of this study was to describe a health economic model developed to project lifetime clinical and cost outcomes of lipid-modifying interventions in patients not reaching target lipid levels and to assess the validity of the model. The internet-based, computer simulation model is made up of two decision analytic sub-models, the first utilizing Monte Carlo simulation, and the second applying Markov modeling techniques. Monte Carlo simulation generates a baseline cohort for long-term simulation by assigning an individual lipid profile to each patient, and applying the treatment effects of interventions under investigation. The Markov model then estimates the long-term clinical (coronary heart disease events, life expectancy, and quality-adjusted life expectancy) and cost outcomes up to a lifetime horizon, based on risk equations from the Framingham study. Internal and external validation analyses were performed. The results of the model validation analyses, plotted against corresponding real-life values from Framingham, 4S, AFCAPS/TexCAPS, and a meta-analysis by Gordon et al., showed that the majority of values lay close to the y = x line, which indicates close agreement. The R2 value was 0.9575 and the gradient of the regression line was 0.9329, both very close to the perfect-fit value of 1. Validation analyses of the computer simulation model suggest the model is able to recreate the outcomes from published clinical studies and would be a valuable tool for the evaluation of new and existing therapy options for patients with persistent dyslipidemia.
Adaptive Multi-scale Prognostics and Health Management for Smart Manufacturing Systems
Choo, Benjamin Y.; Adams, Stephen C.; Weiss, Brian A.; Marvel, Jeremy A.; Beling, Peter A.
2017-01-01
The Adaptive Multi-scale Prognostics and Health Management (AM-PHM) is a methodology designed to enable PHM in smart manufacturing systems. In application, PHM information is not yet fully utilized in higher-level decision-making in manufacturing systems. AM-PHM leverages and integrates lower-level PHM information, such as from a machine or component, with hierarchical relationships across the component, machine, work cell, and assembly line levels in a manufacturing system. The AM-PHM methodology enables the creation of actionable prognostic and diagnostic intelligence up and down the manufacturing process hierarchy. Decisions are then made with the knowledge of the current and projected health state of the system at decision points along the nodes of the hierarchical structure. To overcome the exponential explosion of complexity associated with describing a large manufacturing system, the AM-PHM methodology takes a hierarchical Markov Decision Process (MDP) approach to describing the system and solving for an optimized policy. A description of the AM-PHM methodology is followed by a simulated industry-inspired example to demonstrate the effectiveness of AM-PHM. PMID:28736651
The use of economic evaluation in CAM: an introductory framework.
Ford, Emily; Solomon, Daniela; Adams, Jon; Graves, Nicholas
2010-11-11
For CAM to feature prominently in health care decision-making there is a need to expand the evidence-base and to further incorporate economic evaluation into research priorities.In a world of scarce health care resources and an emphasis on efficiency and clinical efficacy, CAM, as indeed do all other treatments, requires rigorous evaluation to be considered in budget decision-making. Economic evaluation provides the tools to measure the costs and health consequences of CAM interventions and thereby inform decision making. This article offers CAM researchers an introductory framework for understanding, undertaking and disseminating economic evaluation. The types of economic evaluation available for the study of CAM are discussed, and decision modelling is introduced as a method for economic evaluation with much potential for use in CAM. Two types of decision models are introduced, decision trees and Markov models, along with a worked example of how each method is used to examine costs and health consequences. This is followed by a discussion of how this information is used by decision makers. Undoubtedly, economic evaluation methods form an important part of health care decision making. Without formal training it can seem a daunting task to consider economic evaluation, however, multidisciplinary teams provide an opportunity for health economists, CAM practitioners and other interested researchers, to work together to further develop the economic evaluation of CAM.
Entropy production rate as a criterion for inconsistency in decision theory
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-05-01
Individual and group decisions are complex, often involving choosing an apt alternative from a multitude of options. Evaluating pairwise comparisons breaks down such complex decision problems into tractable ones. Pairwise comparison matrices (PCMs) are regularly used to solve multiple-criteria decision-making problems, for example, using Saaty’s analytic hierarchy process (AHP) framework. However, there are two significant drawbacks of using PCMs. First, humans evaluate PCMs in an inconsistent manner. Second, not all entries of a large PCM can be reliably filled by human decision makers. We address these two issues by first establishing a novel connection between PCMs and time-irreversible Markov processes. Specifically, we show that every PCM induces a family of dissipative maximum path entropy random walks (MERW) over the set of alternatives. We show that only ‘consistent’ PCMs correspond to detailed balanced MERWs. We identify the non-equilibrium entropy production in the induced MERWs as a metric of inconsistency of the underlying PCMs. Notably, the entropy production satisfies all of the recently laid out criteria for reasonable consistency indices. We also propose an approach to use incompletely filled PCMs in AHP. Potential future avenues are discussed as well.
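The entropy-production index itself is not reconstructed here, but the classical AHP ingredients the paper starts from are easy to sketch: extracting priority weights from a PCM via its principal eigenvector and computing Saaty's consistency index, which is zero exactly for consistent matrices. The matrix below is a hypothetical, fully consistent example.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix over 3 alternatives.
# It is consistent: a_ij * a_jk = a_ik for all i, j, k (e.g., 2 * 3 = 6).
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                        # priority weights of the alternatives

n = A.shape[0]
CI = (lam_max - n) / (n - 1)        # Saaty's consistency index; 0 iff consistent
print("weights:", w, "lambda_max:", round(lam_max, 6), "CI:", round(CI, 6))
```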
NASA Astrophysics Data System (ADS)
Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.
2012-12-01
In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment. In this framework, the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, which transport models often capture as long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
Markov stochasticity coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: iddo.eliazar@intel.com
Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.
Son, Junbo; Brennan, Patricia Flatley; Zhou, Shiyu
2017-05-10
Asthma is a very common chronic disease that affects a large portion of the population in many nations. Driven by rapid developments in sensor and mobile communication technology, smart asthma management systems have become available to continuously monitor the key health indicators of asthma patients. Such data provide opportunities for healthcare practitioners to examine patients not only in the clinic (on-site) but also outside of the clinic (off-site) in their daily life. In this paper, taking advantage of this data availability, we propose a correlated gamma-based hidden Markov model framework, which can reveal and highlight useful information from the rescue inhaler-usage profiles of individual patients for practitioners. The proposed method can provide diagnostic information about the asthma control status of individual patients and can help practitioners to make more informed therapeutic decisions accordingly. The proposed method is validated through both a numerical study and a case study based on real-world data. Copyright © 2017 John Wiley & Sons, Ltd.
Colonoscopy video quality assessment using hidden Markov random fields
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40%-50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
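As a schematic of the filtering step, the sketch below runs a two-state HMM forward pass over discretized per-frame quality scores and keeps the frames whose posterior probability of being informative exceeds one half. All parameters are invented, not the paper's trained values, and the real system combines two quality measures rather than one.

```python
import numpy as np

# Hidden states: 0 = informative frame, 1 = uninformative frame.
# Observations: a discretized quality score, 0 = low, 1 = high (illustrative).
A  = np.array([[0.9, 0.1],     # state transition probabilities
               [0.2, 0.8]])
B  = np.array([[0.2, 0.8],     # P(score | state)
               [0.7, 0.3]])
pi = np.array([0.5, 0.5])

def forward_filter(obs):
    """Posterior P(state_t | scores up to t), via a normalized forward pass."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha.copy()]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then update
        alpha /= alpha.sum()
        out.append(alpha.copy())
    return np.array(out)

scores = [1, 1, 0, 0, 0, 1]             # toy per-frame measurements
post = forward_filter(scores)
keep = post[:, 0] > 0.5                 # retain likely-informative frames
print(keep)
```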
A Markovian model of evolving world input-output network
Isacchini, Giulio
2017-01-01
The initial theoretical connections between Leontief input-output models and Markov chains were established back in the 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via known properties of the Markov chains, such as mixing time, Kemeny constant, steady state probabilities, and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of the structural power of economies, comparable to GDP shares, the traditional index of economic welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility: the former is the ratio of the number of nodes influenced by a shock in the activity of a given node to the total number of nodes, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on the Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in the activity levels of economic nodes on the overall flow of the world economic network. While an economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money. PMID:29065145
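Two of the quantities the study leans on, the stationary distribution and the Kemeny constant, can be computed directly from a transition matrix, as the sketch below shows for a hypothetical three-economy toy network (the real analysis uses the full world input-output database).

```python
import numpy as np

# Toy 3-node trade network; each row gives illustrative outgoing flow shares.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
pi /= pi.sum()

# Kemeny constant: K = sum over non-unit eigenvalues of 1 / (1 - lambda).
K = sum(1.0 / (1.0 - lam) for lam in eigvals if not np.isclose(lam, 1.0))

print("structural power (stationary probabilities):", pi)
print("Kemeny constant:", K.real)
```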
Modeling wildfire incident complexity dynamics.
Thompson, Matthew P
2013-01-01
Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
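The expected-times-until-containment reported in the paper rest on standard absorbing-chain algebra: with containment as an absorbing state, the fundamental matrix of the transient block gives the expected number of days to absorption. A minimal sketch with an invented daily transition matrix:

```python
import numpy as np

# Toy daily model: three transient incident-complexity levels plus an
# absorbing "contained" state; all transition probabilities are illustrative.
P = np.array([[0.60, 0.15, 0.05, 0.20],
              [0.10, 0.60, 0.15, 0.15],
              [0.05, 0.15, 0.70, 0.10],
              [0.00, 0.00, 0.00, 1.00]])

Q = P[:3, :3]                        # transient-to-transient block
N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix
t = N @ np.ones(3)                   # expected days until containment
print("expected days to containment, by starting complexity level:", t)
```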
An economic evaluation of maxillary implant overdentures based on six vs. four implants.
Listl, Stefan; Fischer, Leonhard; Giannakopoulos, Nikolaos Nikitas
2014-08-18
The purpose of the present study was to assess the value for money achieved by bar-retained implant overdentures based on six implants compared with four implants as treatment alternatives for the edentulous maxilla. A Markov decision tree model was constructed and populated with parameter estimates for implant and denture failure as well as patient-centred health outcomes as available from recent literature. The decision scenario was modelled within a ten-year time horizon and relied on cost reimbursement regulations of the German health care system. The cost-effectiveness threshold was identified above which the six-implant solution is preferable over the four-implant solution. Uncertainties regarding input parameters were incorporated via one-way and probabilistic sensitivity analysis based on Monte-Carlo simulation. Within a base case scenario of average treatment complexity, the cost-effectiveness threshold was identified to be 17,564 € per year of denture satisfaction gained, above which the alternative with six implants is preferable over treatment including four implants. Sensitivity analysis yielded that, depending on the specification of model input parameters such as patients' denture satisfaction, the respective cost-effectiveness threshold varies substantially. The results of the present study suggest that bar-retained maxillary overdentures based on six implants provide better patient satisfaction than bar-retained overdentures based on four implants but are considerably more expensive. Final judgements about value for money require more comprehensive clinical evidence including patient-centred health outcomes.
Cabergoline versus levodopa monotherapy: a decision analysis.
Smala, Antje M; Spottke, E Annika; Machat, Olaf; Siebert, Uwe; Meyer, Dieter; Köhne-Volland, Rudolf; Reuther, Martin; DuChane, Janeen; Oertel, Wolfgang H; Berger, Karin B; Dodel, Richard C
2003-08-01
We evaluated the incremental cost-effectiveness of cabergoline compared with levodopa monotherapy in patients with early Parkinson's disease (PD) in the German healthcare system. The study design was based on cost-effectiveness analysis using a Markov model with a 10-year time horizon. Model input data were based on the clinical trial "Early Treatment of PD with Cabergoline" as well as on cost data of a German hospital/office-based PD network. Direct and indirect medical and nonmedical costs were included. Outcomes were costs, disease stage, cumulative complication incidence, and mortality. An annual discount rate of 5% was applied and the societal perspective was chosen. The target population included patients in Hoehn and Yahr Stages I to III. It was found that the occurrence of motor complications was significantly lower in patients on cabergoline monotherapy. For patients aged ≥60 years, cabergoline monotherapy was cost effective when considering costs per decreased UPDRS score. Each point decrease in the UPDRS (I-IV) resulted in costs of €1,031. Incremental costs per additional motor complication-free patient were €104,400 for patients <60 years of age and €57,900 for patients ≥60 years of age. In conclusion, this decision-analytic model calculation for PD was based almost entirely on clinical and observed data with a limited number of assumptions. Although costs were higher in patients on cabergoline, the corresponding cost-effectiveness ratio for cabergoline was at least as favourable as the ratios for many commonly accepted therapies. Copyright 2003 Movement Disorder Society.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed by quality-adjusted life years (QALYs). The simulation was carried out initially for 120 cycles of a month each, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant in the incremental analysis, being both cheaper and yielding more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to utilities of other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long-term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
Optimal throughput for cognitive radio with energy harvesting in fading wireless channel.
Vu-Van, Hiep; Koo, Insoo
2014-01-01
Energy resource management is a crucial problem for a device with a finite-capacity battery. In this paper, the cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other cognitive radio actions. Harvested energy is stored in a finite-capacity battery. At the start of each time slot, the cognitive radio needs to determine whether it should remain silent or carry out spectrum sensing, based on the idle probability of the primary user and the remaining energy, in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of the cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme, in which only current throughput is considered when making a decision.
Theory of choice in bandit, information sampling and foraging tasks.
Averbeck, Bruno B
2015-03-01
Decision making has been studied with a wide array of tasks. Here we examine the theoretical structure of bandit, information sampling and foraging tasks. These tasks move beyond settings in which the choice in the current trial does not affect future expected rewards. We have modeled these tasks using Markov decision processes (MDPs). MDPs provide a general framework for modeling tasks in which decisions affect the information on which future choices will be made. Under the assumption that agents are maximizing expected rewards, MDPs provide normative solutions. We find that all three classes of tasks pose choices among actions which trade off immediate and future expected rewards. The tasks drive these trade-offs in unique ways, however. For bandit and information sampling tasks, increasing uncertainty or the time horizon shifts value to actions that pay off in the future. Correspondingly, decreasing uncertainty increases the relative value of actions that pay off immediately. For foraging tasks the time horizon plays the dominant role, as choices do not affect future uncertainty in these tasks.
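The trade-off the review describes, immediate versus future expected reward, is exactly what solving an MDP resolves. The sketch below runs value iteration on an invented two-state, two-action MDP in which one action pays now and the other moves the agent toward a better-paying state:

```python
import numpy as np

# Tiny illustrative MDP. P[a, s, s'] are transition probabilities,
# R[a, s] immediate expected rewards; all numbers are invented.
P = np.array([
    [[0.9, 0.1], [0.1, 0.9]],   # action 0 "exploit": mostly stay put
    [[0.2, 0.8], [0.2, 0.8]],   # action 1 "explore": drift toward state 1
])
R = np.array([
    [1.0, 2.0],                 # exploit pays now (state 1 pays more)
    [0.0, 0.0],                 # explore pays nothing now
])
gamma = 0.95                    # discount factor: weight on future reward

V = np.zeros(2)
for _ in range(1000):           # value iteration to (near) fixed point
    Q = R + gamma * (P @ V)     # Q[a, s]: value of action a in state s
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("V*:", V, "greedy policy per state:", Q.argmax(axis=0))
```

With a long horizon (gamma near 1) the explore action is worth taking from the poorer state; shrinking gamma shifts value back to the immediately rewarding action, mirroring the uncertainty and time-horizon effects described above.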
Passive synchronization for Markov jump genetic oscillator networks with time-varying delays.
Lu, Li; He, Bing; Man, Chuntao; Wang, Shun
2015-04-01
In this paper, the synchronization problem of coupled Markov jump genetic oscillator networks with time-varying delays and external disturbances is investigated. By introducing the drive-response concept, a novel mode-dependent control scheme is proposed, which guarantees that the synchronization can be achieved. By applying the Lyapunov-Krasovskii functional method and stochastic analysis, sufficient conditions are established based on passivity theory in terms of linear matrix inequalities. A numerical example is provided to demonstrate the effectiveness of our theoretical results. Copyright © 2015 Elsevier Inc. All rights reserved.
Markov Jump-Linear Performance Models for Recoverable Flight Control Computers
NASA Technical Reports Server (NTRS)
Zhang, Hong; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
Single event upsets in digital flight control hardware induced by atmospheric neutrons can reduce system performance and possibly introduce a safety hazard. One method currently under investigation to help mitigate the effects of these upsets is NASA Langley's Recoverable Computer System. In this paper, a Markov jump-linear model is developed for a recoverable flight control system, which will be validated using data from future experiments with simulated and real neutron environments. The method of tracking error analysis and the plan for the experiments are also described.
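A Markov jump-linear system is straightforward to simulate: the state evolves under one of several linear dynamics, and which dynamics is active is governed by a Markov chain. The sketch below is a generic illustration with invented matrices, not the validated model described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two modes: 0 = nominal operation, 1 = recovering from an upset.
# Dynamics matrices and mode-transition probabilities are invented.
A = [np.array([[0.95, 0.10], [0.00, 0.90]]),    # nominal (stable)
     np.array([[1.02, 0.10], [0.00, 0.98]])]    # degraded during recovery
Pi = np.array([[0.995, 0.005],                  # P(next mode | current mode)
               [0.300, 0.700]])

def simulate(x0, steps=500):
    """One sample path of the jump-linear tracking-error dynamics."""
    x, mode, norms = np.array(x0, float), 0, []
    for _ in range(steps):
        x = A[mode] @ x
        norms.append(np.linalg.norm(x))
        mode = rng.choice(2, p=Pi[mode])        # Markov mode switch
    return np.array(norms)

err = simulate([1.0, 1.0])
print("mean error norm over the last 100 steps:", err[-100:].mean())
```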
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pesin, Y.; Weiss, H.
1997-01-01
In this paper we establish the complete multifractal formalism for equilibrium measures for Hölder continuous conformal expanding maps and expanding Markov Moran-like geometric constructions. Examples include Markov maps of an interval, beta transformations of an interval, rational maps with hyperbolic Julia sets, and conformal toral endomorphisms. We also construct a Hölder continuous homeomorphism of a compact metric space with an ergodic invariant measure of positive entropy for which the dimension spectrum is not convex, and hence the multifractal formalism fails.
NASA Astrophysics Data System (ADS)
Zhao, Wencai; Li, Juan; Zhang, Tongqian; Meng, Xinzhu; Zhang, Tonghua
2017-07-01
Taking into account both white and colored noise, a stochastic mathematical model with impulsive toxicant input is formulated. Based on this model, we investigate the dynamics, such as persistence and ergodicity, of a plant infectious disease model with Markov conversion in a polluted environment. The thresholds of extinction and persistence in mean are obtained. By using Lyapunov functions, we prove that the system is ergodic and has a stationary distribution under certain sufficient conditions. Finally, numerical simulations are employed to illustrate our theoretical analysis.
Applications of geostatistics and Markov models for logo recognition
NASA Astrophysics Data System (ADS)
Pham, Tuan
2003-01-01
Spatial covariances based on geostatistics are extracted as representative features of logo or trademark images. These spatial covariances are different from other statistical features for image analysis in that the structural information of an image is independent of the pixel locations and represented in terms of spatial series. We then design a classifier in the sense of hidden Markov models to make use of these geostatistical sequential data to recognize the logos. High recognition rates are obtained from testing the method against a public-domain logo database.
Metis: A Pure Metropolis Markov Chain Monte Carlo Bayesian Inference Library
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bates, Cameron Russell; Mckigney, Edward Allen
The use of Bayesian inference in data analysis has become the standard for large scientific experiments [1, 2]. The Monte Carlo Codes Group (XCP-3) at Los Alamos has developed a simple set of algorithms, currently implemented in C++ and Python, to easily perform flat-prior Markov Chain Monte Carlo Bayesian inference with pure Metropolis sampling. These implementations are designed to be user friendly and extensible for customization based on specific application requirements. This document describes the algorithmic choices made and presents two use cases.
Multivariate longitudinal data analysis with mixed effects hidden Markov models.
Raffa, Jesse D; Dubin, Joel A
2015-09-01
Multiple longitudinal responses are often collected as a means to capture relevant features of the true outcome of interest, which is often hidden and not directly measurable. We outline an approach which models these multivariate longitudinal responses as generated from a hidden disease process. We propose a class of models which uses a hidden Markov model with separate but correlated random effects between multiple longitudinal responses. This approach was motivated by a smoking cessation clinical trial, where a bivariate longitudinal response involving both a continuous and a binomial response was collected for each participant to monitor smoking behavior. A Bayesian method using Markov chain Monte Carlo is used. Comparison of separate univariate response models to the bivariate response models was undertaken. Our methods are demonstrated on the smoking cessation clinical trial dataset, and properties of our approach are examined through extensive simulation studies. © 2015, The International Biometric Society.
Sentiment classification technology based on Markov logic networks
NASA Astrophysics Data System (ADS)
He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe
2016-07-01
With diverse online media emerging, there is growing concern over the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a degree of domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of the sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.
Stability Analysis of Multi-Sensor Kalman Filtering over Lossy Networks
Gao, Shouwan; Chen, Pengpeng; Huang, Dan; Niu, Qiang
2016-01-01
This paper studies the remote Kalman filtering problem for a distributed system setting with multiple sensors that are located at different physical locations. Each sensor encapsulates its own measurement data into one single packet and transmits the packet to the remote filter via a lossy distinct channel. For each communication channel, a time-homogeneous Markov chain is used to model the normal operating condition of packet delivery and losses. Based on the Markov model, a necessary and sufficient condition is obtained, which can guarantee the stability of the mean estimation error covariance. Especially, the stability condition is explicitly expressed as a simple inequality whose parameters are the spectral radius of the system state matrix and transition probabilities of the Markov chains. In contrast to the existing related results, our method imposes less restrictive conditions on systems. Finally, the results are illustrated by simulation examples. PMID:27104541
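Conditions of this type are cheap to evaluate once the system matrix and the channel chain are known. The sketch below checks an inequality of the generic form ρ(A)² · p < 1, where p is the probability that a channel stays in its loss state; this shape is a stand-in for illustration, since the paper's exact inequality is not reproduced here.

```python
import numpy as np

# Hypothetical system and channel parameters.
A = np.array([[1.2, 0.1],
              [0.0, 0.9]])            # system state matrix
T = np.array([[0.9, 0.1],            # channel Markov chain: state 0 = deliver,
              [0.4, 0.6]])           # state 1 = loss; T[1, 1] = stay-in-loss

rho = max(abs(np.linalg.eigvals(A)))     # spectral radius of A
p_stay_loss = T[1, 1]
stable = rho ** 2 * p_stay_loss < 1      # illustrative stability-type check
print(f"rho(A) = {rho:.3f}, stay-in-loss = {p_stay_loss}, stable = {stable}")
```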
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
Reliability Analysis for AFTI-F16 SRFCS Using ASSIST and SURE
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2001-01-01
This paper reports the results of a study on reliability analysis of an AFTI-F16 Self-Repairing Flight Control System (SRFCS) using the software tools SURE (Semi-Markov Unreliability Range Evaluator) and ASSIST (Abstract Semi-Markov Specification Interface to the SURE Tool). The purpose of the study is to investigate the potential utility of these software tools in the ongoing effort of the NASA Aviation Safety Program, where the class of systems served must be extended beyond the originally intended class of electronic digital processors. The study concludes that SURE and ASSIST are applicable to reliability analysis of flight control systems. They are especially efficient for sensitivity analysis that quantifies the dependence of system reliability on model parameters. The study also confirms an earlier finding on the dominant role of a parameter called failure coverage. The paper also remarks on issues related to the improvement of coverage and the optimization of redundancy level.
A compositional framework for Markov processes
NASA Astrophysics Data System (ADS)
Baez, John C.; Fong, Brendan; Pollard, Blake S.
2016-03-01
We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
Eppinger, Ben; Walter, Maik; Li, Shu-Chen
2017-04-01
In this study, we investigated the interplay of habitual (model-free) and goal-directed (model-based) decision processes by using a two-stage Markov decision task in combination with event-related potentials (ERPs) and computational modeling. To manipulate the demands on model-based decision making, we applied two experimental conditions with different probabilities of transitioning from the first to the second stage of the task. As we expected, when the stage transitions were more predictable, participants showed greater model-based (planning) behavior. Consistent with this result, we found that stimulus-evoked parietal (P300) activity at the second stage of the task increased with the predictability of the state transitions. However, the parietal activity also reflected model-free information about the expected values of the stimuli, indicating that at this stage of the task both types of information are integrated to guide decision making. Outcome-related ERP components only reflected reward-related processes: Specifically, a medial prefrontal ERP component (the feedback-related negativity) was sensitive to negative outcomes, whereas a component that is elicited by reward (the feedback-related positivity) increased as a function of positive prediction errors. Taken together, our data indicate that stimulus-locked parietal activity reflects the integration of model-based and model-free information during decision making, whereas feedback-related medial prefrontal signals primarily reflect reward-related decision processes.
Job-mix modeling and system analysis of an aerospace multiprocessor.
NASA Technical Reports Server (NTRS)
Mallach, E. G.
1972-01-01
An aerospace guidance computer organization, consisting of multiple processors and memory units attached to a central time-multiplexed data bus, is described. A job mix for this type of computer is obtained by analysis of Apollo mission programs. Multiprocessor performance is then analyzed using: 1) queuing theory, under certain 'limiting case' assumptions; 2) Markov process methods; and 3) system simulation. Results of the analyses indicate: 1) Markov process analysis is a useful and efficient predictor of simulation results; 2) efficient job execution is not seriously impaired even when the system is so overloaded that new jobs are inordinately delayed in starting; 3) job scheduling is significant in determining system performance; and 4) a system having many slow processors may or may not perform better than a system of equal power having few fast processors, but will not perform significantly worse.
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Hidden Markov model analysis of force/torque information in telemanipulation
NASA Technical Reports Server (NTRS)
Hannaford, Blake; Lee, Paul
1991-01-01
A model for the prediction and analysis of sensor information recorded during robotic performance of telemanipulation tasks is presented. The model uses the hidden Markov model to describe the task structure, the operator's or intelligent controller's goal structure, and the sensor signals. A methodology for constructing the model parameters based on engineering knowledge of the task is described. It is concluded that the model and its optimal state estimation algorithm, the Viterbi algorithm, are very successful at the task of segmenting the data record into phases corresponding to subgoals of the task. The model provides a rich modeling structure within a statistical framework, which enables it to represent complex systems and be robust to real-world sensory signals.
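The segmentation step rests on the Viterbi algorithm, which recovers the most probable hidden-state path given the sensor record. Below is a standard log-space implementation on a toy three-phase, left-to-right task HMM with invented parameters (the paper's models are built from engineering knowledge of the task).

```python
import numpy as np

# Toy 3-phase task HMM (approach, grasp, retract) over discretized
# force/torque symbols; all parameters are invented.
A  = np.log(np.array([[0.8, 0.2, 0.0],      # left-to-right phase structure
                      [0.0, 0.8, 0.2],
                      [0.0, 0.0, 1.0]]) + 1e-12)
B  = np.log(np.array([[0.7, 0.2, 0.1],      # P(symbol | phase)
                      [0.1, 0.7, 0.2],
                      [0.2, 0.1, 0.7]]))
pi = np.log(np.array([1.0, 0.0, 0.0]) + 1e-12)

def viterbi(obs):
    """Most likely phase sequence for a sequence of observation symbols."""
    n, T = len(pi), len(obs)
    delta = pi + B[:, obs[0]]
    psi = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + A          # trans[i, j]: best path ending in j via i
        psi[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):           # backtrack through the pointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1, 1, 2, 2]))       # phase label per time step
```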
Local Approximation and Hierarchical Methods for Stochastic Optimization
NASA Astrophysics Data System (ADS)
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in the industry.
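The backward dynamic programming that the low-rank method accelerates can be illustrated on a toy battery arbitrage MDP. The sketch below computes the full value table exactly; the thesis's variant would compute V only on a small subset of states and complete the rest by low-rank matrix completion. All prices and dynamics are invented.

```python
import numpy as np

H, S = 24, 11                      # horizon (hours), discretized charge levels
price = 30 + 20 * np.sin(np.arange(H) / H * 2 * np.pi)  # assumed price curve
V = np.zeros((H + 1, S))           # terminal value = 0

for t in range(H - 1, -1, -1):     # sweep backward in time
    for s in range(S):
        best = -np.inf
        for a in (-1, 0, 1):       # discharge / hold / charge one level
            s2 = s + a
            if not 0 <= s2 < S:
                continue
            reward = -a * price[t]         # pay to charge, earn to discharge
            best = max(best, reward + V[t + 1, s2])
        V[t, s] = best
print("value of starting empty:", V[0, 0])
```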
Irreversible Local Markov Chains with Rapid Convergence towards Equilibrium.
Kapfer, Sebastian C; Krauth, Werner
2017-12-15
We study the continuous one-dimensional hard-sphere model and present irreversible local Markov chains that mix on faster time scales than the reversible heat bath or Metropolis algorithms. The mixing time scales appear to fall into two distinct universality classes, both faster than for reversible local Markov chains. The event-chain algorithm, the infinitesimal limit of one of these Markov chains, belongs to the class presenting the fastest decay. For the lattice-gas limit of the hard-sphere model, reversible local Markov chains correspond to the symmetric simple exclusion process (SEP) with periodic boundary conditions. The two universality classes for irreversible Markov chains are realized by the totally asymmetric SEP (TASEP), and by a faster variant (lifted TASEP) that we propose here. We discuss how our irreversible hard-sphere Markov chains generalize to arbitrary repulsive pair interactions and carry over to higher dimensions through the concept of lifted Markov chains and the recently introduced factorized Metropolis acceptance rule.
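A minimal simulation of the TASEP discussed above (the lattice-gas limit) may help fix ideas; the lifted variant would instead keep moving a single "active" particle rather than picking particles at random. Sizes and step counts are arbitrary.

```python
import random

L, N, STEPS = 20, 10, 100_000
occupied = set(range(0, 2 * N, 2))          # alternate sites occupied initially
rng = random.Random(42)

for _ in range(STEPS):
    p = rng.choice(tuple(occupied))          # pick a random particle
    nxt = (p + 1) % L                        # totally asymmetric: forward only
    if nxt not in occupied:                  # hard-core exclusion
        occupied.remove(p)
        occupied.add(nxt)

print(sorted(occupied))
```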
Pollom, Erqi L; Lee, Kyueun; Durkee, Ben Y; Grade, Madeline; Mokhtari, Daniel A; Wahl, Daniel R; Feng, Mary; Kothary, Nishita; Koong, Albert C; Owens, Douglas K; Goldhaber-Fiebert, Jeremy; Chang, Daniel T
2017-05-01
Purpose To assess the cost-effectiveness of stereotactic body radiation therapy (SBRT) versus radiofrequency ablation (RFA) for patients with inoperable localized hepatocellular carcinoma (HCC) who are eligible for both SBRT and RFA. Materials and Methods A decision-analytic Markov model was developed for patients with inoperable, localized HCC who were eligible for both RFA and SBRT to evaluate the cost-effectiveness of the following treatment strategies: (a) SBRT as initial treatment followed by SBRT for local progression (SBRT-SBRT), (b) RFA followed by RFA for local progression (RFA-RFA), (c) SBRT followed by RFA for local progression (SBRT-RFA), and (d) RFA followed by SBRT for local progression (RFA-SBRT). Probabilities of disease progression, treatment characteristics, and mortality were derived from published studies. Outcomes included health benefits expressed as discounted quality-adjusted life years (QALYs), costs in U.S. dollars, and cost-effectiveness expressed as an incremental cost-effectiveness ratio. Deterministic and probabilistic sensitivity analysis was performed to assess the robustness of the findings. Results In the base case, SBRT-SBRT yielded the most QALYs (1.565) and cost $197 557. RFA-SBRT yielded 1.558 QALYs and cost $193 288. SBRT-SBRT was not cost-effective, at $558 679 per QALY gained relative to RFA-SBRT. RFA-SBRT was the preferred strategy, because RFA-RFA and SBRT-RFA were less effective and more costly. In all evaluated scenarios, SBRT was preferred as salvage therapy for local progression after RFA. Probabilistic sensitivity analysis showed that at a willingness-to-pay threshold of $100 000 per QALY gained, RFA-SBRT was preferred in 65.8% of simulations. Conclusion SBRT for initial treatment of localized, inoperable HCC is not cost-effective. However, SBRT is the preferred salvage therapy for local progression after RFA.
Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen
2014-01-01
This paper proposes a novel model for intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
NASA Astrophysics Data System (ADS)
Li, Zhiqiang; Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-04-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. The power of a control unit in a failure model is then used as an example: a dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results show the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM; the probabilities for the absorbing set, obtained from the differential equations, are verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified.
Multiframe video coding for improved performance over wireless channels.
Budagavi, M; Gibson, J D
2001-01-01
We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single-frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust than the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base-level H.263 coder and are found to be more robust than the base-level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on an active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparing the results for segmentation of left-ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and the new pheromone updating strategy exhibits good time performance in the optimization process.
Hidden Markov models for evolution and comparative genomics analysis.
Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A
2013-01-01
The problem of reconstruction of ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the discrete states evolution is generally used for the reconstruction of ancestral states. We modify this model to account for a case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is formulation of the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with the introduction of emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. The test on the simulated data shows that the tHMM approach applied to the continuous variable reflecting the probabilities of the states (i.e. prediction score) appears to be more accurate than the reconstruction from the discrete states assignment defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.
Single-molecule FRET reveals the energy landscape of the full-length SAM-I riboswitch.
Manz, Christoph; Kobitski, Andrei Yu; Samanta, Ayan; Keller, Bettina G; Jäschke, Andres; Nienhaus, G Ulrich
2017-11-01
S-adenosyl-L-methionine (SAM) ligand binding induces major structural changes in SAM-I riboswitches, through which gene expression is regulated via transcription termination. Little is known about the conformations and motions governing the function of the full-length Bacillus subtilis yitJ SAM-I riboswitch. Therefore, we have explored its conformational energy landscape as a function of Mg2+ and SAM ligand concentrations using single-molecule Förster resonance energy transfer (smFRET) microscopy and hidden Markov modeling analysis. We resolved four conformational states both in the presence and the absence of SAM and determined their Mg2+-dependent fractional populations and conformational dynamics, including state lifetimes, interconversion rate coefficients and equilibration timescales. Riboswitches with terminator and antiterminator folds coexist, and SAM binding only gradually shifts the populations toward terminator states. We observed a pronounced acceleration of conformational transitions upon SAM binding, which may be crucial for off-switching during the brief decision window before expression of the downstream gene.
Gandjour, Afschin; Stock, Stephanie
2007-10-01
Almost 15 million Germans may suffer from untreated hypertension. The purpose of this paper is to estimate the cost-effectiveness of a national hypertension treatment program compared to no program. A Markov decision model from the perspective of the statutory health insurance (SHI) was built. All data were taken from secondary sources. The target population consists of hypertensive male and female patients at high or low risk for cardiovascular events in different age groups (40-49, 50-59, and 60-69 years). The analysis shows fairly moderate cost-effectiveness ratios even for low-risk groups (less than 12,000 euros per life year gained). In women at high risk, antihypertensive treatment even leads to savings. This suggests that a national hypertension treatment program provides good value for money. Given the considerable costs of the program itself, any savings from avoiding long-term consequences of hypertension are likely to be offset, however.
Benchmarking for Bayesian Reinforcement Learning.
Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael
2016-01-01
In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open-source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed.
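The benchmarking idea (score an agent by its mean return over MDPs drawn from a prior) can be sketched as follows. The Dirichlet prior, the simple Q-learning agent, and all constants are assumptions for illustration and do not reflect the library's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, N_MDPS = 5, 2, 50, 200    # states, actions, horizon, sampled MDPs

def draw_mdp():
    """Draw one MDP from the prior: Dirichlet transitions, uniform rewards."""
    T = rng.dirichlet(np.ones(S), size=(S, A))   # T[s, a] is a distribution
    R = rng.uniform(0, 1, size=(S, A))
    return T, R

def run_agent(T, R):
    """Epsilon-greedy Q-learning agent; returns its total collected reward."""
    Q = np.zeros((S, A)); s = 0; total = 0.0
    for _ in range(H):
        a = rng.integers(A) if rng.random() < 0.1 else int(Q[s].argmax())
        r = R[s, a]; s2 = rng.choice(S, p=T[s, a])
        Q[s, a] += 0.2 * (r + 0.95 * Q[s2].max() - Q[s, a])
        total += r; s = s2
    return total

scores = [run_agent(*draw_mdp()) for _ in range(N_MDPS)]
print("mean return over the prior:", np.mean(scores))
```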
Zeidan, Amer M; Gore, Steven D
2013-10-01
Myelodysplastic syndromes (MDS) include a group of hematopoietic malignancies characterized by dysplastic changes, ineffective hematopoiesis and variable risk of leukemic progression. At diagnosis, 86% of MDS patients are ≥60 years. Azacitidine, the only drug that prolongs life in high-risk (HR)-MDS patients, adds a median of only 9.5 months to life. Allogeneic stem cell transplantation (alloSCT) remains the only potentially curative approach. Despite recent improvements, including the use of reduced intensity conditioning (RIC), that decrease transplant-related mortality, alloSCT continues to be used rarely in elderly MDS. There is a paucity of data regarding outcomes of RIC alloSCT in elderly MDS patients, especially in direct comparison with azanucleosides. In this paper, the authors discuss the recent Markov decision analysis by Koreth et al. in which the investigators demonstrated superior survival of patients with HR-MDS aged 60-70 years who underwent RIC alloSCT in comparison with those who were treated with azanucleosides.
Open Markov Processes and Reaction Networks
ERIC Educational Resources Information Center
Swistock Pollard, Blake Stephen
2017-01-01
We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…
Bayesian Analysis of Biogeography when the Number of Areas is Large
Landis, Michael J.; Matzke, Nicholas J.; Moore, Brian R.; Huelsenbeck, John P.
2013-01-01
Historical biogeography is increasingly studied from an explicitly statistical perspective, using stochastic models to describe the evolution of species range as a continuous-time Markov process of dispersal between and extinction within a set of discrete geographic areas. The main constraint of these methods is the computational limit on the number of areas that can be specified. We propose a Bayesian approach for inferring biogeographic history that extends the application of biogeographic models to the analysis of more realistic problems that involve a large number of areas. Our solution is based on a “data-augmentation” approach, in which we first populate the tree with a history of biogeographic events that is consistent with the observed species ranges at the tips of the tree. We then calculate the likelihood of a given history by adopting a mechanistic interpretation of the instantaneous-rate matrix, which specifies both the exponential waiting times between biogeographic events and the relative probabilities of each biogeographic change. We develop this approach in a Bayesian framework, marginalizing over all possible biogeographic histories using Markov chain Monte Carlo (MCMC). Besides dramatically increasing the number of areas that can be accommodated in a biogeographic analysis, our method allows the parameters of a given biogeographic model to be estimated and different biogeographic models to be objectively compared. Our approach is implemented in the program, BayArea. [ancestral area analysis; Bayesian biogeographic inference; data augmentation; historical biogeography; Markov chain Monte Carlo.] PMID:23736102
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by applying the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
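For orientation, the Markov level 2.5 rate function that this abstract says is being extended has the following standard form in the large-deviation literature, for a continuous-time Markov jump process with rates k(x,y), empirical occupation p, and empirical flows C; this is quoted from the general literature, not from the paper itself.

```latex
I_{2.5}(p, C) = \sum_{x \neq y}
  \left[ C(x,y) \ln\frac{C(x,y)}{p(x)\,k(x,y)} - C(x,y) + p(x)\,k(x,y) \right]
```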
Communication: Introducing prescribed biases in out-of-equilibrium Markov models
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-03-01
Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
BAT - The Bayesian analysis toolkit
NASA Astrophysics Data System (ADS)
Caldwell, Allen; Kollár, Daniel; Kröninger, Kevin
2009-11-01
We describe the development of a new toolkit for data analysis. The analysis package is based on Bayes' Theorem, and is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution. Parameter estimation, limit setting and uncertainty propagation are implemented in a straightforward manner.
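The core of such a toolkit is a Markov Chain Monte Carlo sampler of the posterior. A bare-bones random-walk Metropolis sketch (Gaussian likelihood, flat prior, synthetic data; not BAT's actual implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.5, 1.0, size=50)        # synthetic observations

def log_posterior(mu):
    return -0.5 * np.sum((data - mu) ** 2)   # flat prior on mu

chain, mu = [], 0.0
lp = log_posterior(mu)
for _ in range(20_000):
    prop = mu + rng.normal(0, 0.3)           # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain.append(mu)

burned = np.array(chain[5_000:])             # drop burn-in
print("posterior mean, sd:", burned.mean(), burned.std())
```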
Poissonian steady states: from stationary densities to stationary intensities.
Eliazar, Iddo
2012-10-01
Markov dynamics are the most elemental and omnipresent form of stochastic dynamics in the sciences, with applications ranging from physics to chemistry, from biology to evolution, and from economics to finance. Markov dynamics can be either stationary or nonstationary. Stationary Markov dynamics represent statistical steady states and are quantified by stationary densities. In this paper, we generalize the notion of steady state to the case of general Markov dynamics. Considering an ensemble of independent motions governed by common Markov dynamics, we establish that the entire ensemble attains Poissonian steady states which are quantified by stationary Poissonian intensities and which hold valid also in the case of nonstationary Markov dynamics. The methodology is applied to a host of Markov dynamics, including Brownian motion, birth-death processes, random walks, geometric random walks, renewal processes, growth-collapse dynamics, decay-surge dynamics, Ito diffusions, and Langevin dynamics.
Neyman, Markov processes and survival analysis.
Yang, Grace
2013-07-01
J. Neyman used stochastic processes extensively in his applied work. One example is the Fix and Neyman (F-N) competing risks model (1951) that uses finite homogeneous Markov processes to analyse clinical trials with breast cancer patients. We revisit the F-N model, and compare it with the Kaplan-Meier (K-M) formulation for right censored data. The comparison offers a way to generalize the K-M formulation to include risks of recovery and relapses in the calculation of a patient's survival probability. The generalization is to extend the F-N model to a nonhomogeneous Markov process. Closed-form solutions of the survival probability are available in special cases of the nonhomogeneous processes, like the popular multiple decrement model (including the K-M model) and Chiang's staging model, but these models do not consider recovery and relapses while the F-N model does. An analysis of sero-epidemiology current status data with recurrent events is illustrated. Fix and Neyman used Neyman's RBAN (regular best asymptotic normal) estimates for the risks, and provided a numerical example showing the importance of considering both the survival probability and the length of time of a patient living a normal life in the evaluation of clinical trials. The said extension would result in a complicated model and it is unlikely to find analytical closed-form solutions for survival analysis. With ever increasing computing power, numerical methods offer a viable way of investigating the problem.
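For a homogeneous finite Markov process of the Fix-Neyman type, the survival probability follows from the matrix exponential of the intensity matrix. The sketch below uses an invented four-state intensity matrix (remission, relapse, recovered, dead), not the original paper's parameters.

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.30, 0.15, 0.10, 0.05],   # remission -> relapse/recover/die
              [ 0.20, -0.50, 0.10, 0.20],  # relapse   -> ...
              [ 0.05, 0.05, -0.12, 0.02],  # recovered -> ...
              [ 0.00, 0.00, 0.00, 0.00]])  # death is absorbing

t = 5.0                                    # years
P = expm(Q * t)                            # transition probabilities over t
start = np.array([1.0, 0.0, 0.0, 0.0])     # everyone starts in remission
print("P(alive at t):", 1.0 - (start @ P)[3])
```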
Global dynamics of a stochastic neuronal oscillator
NASA Astrophysics Data System (ADS)
Yamanobe, Takanobu
2013-11-01
Nonlinear oscillators have been used to model neurons that fire periodically in the absence of input. These oscillators, which are called neuronal oscillators, share some common response structures with other biological oscillations such as cardiac cells. In this study, we analyze the dependence of the global dynamics of an impulse-driven stochastic neuronal oscillator on the relaxation rate to the limit cycle, the strength of the intrinsic noise, and the impulsive input parameters. To do this, we use a Markov operator that both reflects the density evolution of the oscillator and is an extension of the phase transition curve, which describes the phase shift due to a single isolated impulse. Previously, we derived the Markov operator for the finite relaxation rate that describes the dynamics of the entire phase plane. Here, we construct a Markov operator for the infinite relaxation rate that describes the stochastic dynamics restricted to the limit cycle. In both cases, the response of the stochastic neuronal oscillator to time-varying impulses is described by a product of Markov operators. Furthermore, we calculate the number of spikes between two consecutive impulses to relate the dynamics of the oscillator to the number of spikes per unit time and the interspike interval density. Specifically, we analyze the dynamics of the number of spikes per unit time based on the properties of the Markov operators. Each Markov operator can be decomposed into stationary and transient components based on the properties of the eigenvalues and eigenfunctions. This allows us to evaluate the difference in the number of spikes per unit time between the stationary and transient responses of the oscillator, which we show to be based on the dependence of the oscillator on past activity. Our analysis shows how the duration of the past neuronal activity depends on the relaxation rate, the noise strength, and the impulsive input parameters.
Singer, Philipp; Helic, Denis; Taraghi, Behnam; Strohmaier, Markus
2014-01-01
One of the most frequently used models for understanding human navigation on the Web is the Markov chain model, where Web pages are represented as states and hyperlinks as probabilities of navigating from one page to another. Predominantly, human navigation on the Web has been thought to satisfy the memoryless Markov property stating that the next page a user visits only depends on her current page and not on previously visited ones. This idea has found its way in numerous applications such as Google's PageRank algorithm and others. Recently, new studies suggested that human navigation may better be modeled using higher order Markov chain models, i.e., the next page depends on a longer history of past clicks. Yet, this finding is preliminary and does not account for the higher complexity of higher order Markov chain models which is why the memoryless model is still widely used. In this work we thoroughly present a diverse array of advanced inference methods for determining the appropriate Markov chain order. We highlight strengths and weaknesses of each method and apply them for investigating memory and structure of human navigation on the Web. Our experiments reveal that the complexity of higher order models grows faster than their utility, and thus we confirm that the memoryless model represents a quite practical model for human navigation on a page level. However, when we expand our analysis to a topical level, where we abstract away from specific page transitions to transitions between topics, we find that the memoryless assumption is violated and specific regularities can be observed. We report results from experiments with two types of navigational datasets (goal-oriented vs. free form) and observe interesting structural differences that make a strong argument for more contextual studies of human navigation in future work.
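The order-selection step can be illustrated with a likelihood-plus-penalty comparison. The sketch below fits Markov chains of order 0 to 2 to a synthetic memoryless clickstream and ranks them by AIC, one of several criteria such an analysis might use.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(7)
S = 3
seq = list(rng.choice(S, size=5000, p=[0.5, 0.3, 0.2]))   # memoryless source

def log_lik(seq, k):
    """Maximum log-likelihood of an order-k Markov chain (MLE counts)."""
    trans = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        trans[tuple(seq[i - k:i])][seq[i]] += 1
    ll = 0.0
    for ctx, nxt in trans.items():
        total = sum(nxt.values())
        for count in nxt.values():
            ll += count * np.log(count / total)
    return ll

for k in range(3):
    n_params = (S ** k) * (S - 1)            # grows exponentially with order
    print(f"order {k}: AIC = {2 * n_params - 2 * log_lik(seq, k):.1f}")
```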
On a Result for Finite Markov Chains
ERIC Educational Resources Information Center
Kulathinal, Sangita; Ghosh, Lagnojita
2006-01-01
In an undergraduate course on stochastic processes, Markov chains are discussed in great detail. Textbooks on stochastic processes provide interesting properties of finite Markov chains. This note discusses one such property regarding the number of steps in which a state is reachable or accessible from another state in a finite Markov chain with M…
The Markov blankets of life: autonomy, active inference and the free energy principle
Palacios, Ensor; Friston, Karl; Kiverstein, Julian
2018-01-01
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment. PMID:29343629
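The graphical-model notion behind the metaphor is concrete: in a directed graphical model, the Markov blanket of a node is its parents, its children, and its children's other parents, and conditioned on the blanket the node is independent of everything else. A tiny sketch with an invented, active-inference-flavored graph:

```python
dag = {            # node -> set of parents (hypothetical graph)
    "internal": {"sensory"},
    "sensory": {"external"},
    "active": {"internal"},
    "external": {"active", "weather"},
    "weather": set(),
}

def markov_blanket(node):
    """Parents, children, and co-parents of `node` in the DAG."""
    parents = dag[node]
    children = {c for c, ps in dag.items() if node in ps}
    coparents = {p for c in children for p in dag[c]} - {node}
    return parents | children | coparents

print(markov_blanket("internal"))   # {'sensory', 'active'}
```

Note that the internal state's blanket comes out as the sensory and active states, mirroring the partition used in the active inference literature.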
Markov Chain Analysis of Musical Dice Games
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Dawin, J. R.
2012-07-01
A system for using dice to compose music randomly is known as a musical dice game. The discrete-time MIDI models of 804 pieces of classical music written by 29 composers have been encoded into transition matrices and studied as Markov chains. In contrast to human languages, entropy dominates over redundancy in the musical dice games based on these classical compositions. The maximum complexity is achieved on blocks consisting of just a few notes (8 notes for the musical dice games generated over Bach's compositions). First-passage times to notes can be used to resolve tonality and are characteristic of a composer.
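Both quantities used in this analysis, the entropy rate of a note-transition matrix and mean first-passage times between notes, are short computations; the 3-note matrix below is a made-up stand-in for an encoded composition.

```python
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

entropy_rate = -np.sum(pi[:, None] * P * np.log2(P))
print("entropy rate (bits/note):", entropy_rate)

def mfpt(j):
    """Mean first-passage times to note j: solve (I - P_without_j) m = 1."""
    idx = [i for i in range(len(P)) if i != j]
    A = np.eye(len(idx)) - P[np.ix_(idx, idx)]
    m = np.linalg.solve(A, np.ones(len(idx)))
    return dict(zip(idx, m))

print("mean first-passage times to note 0:", mfpt(0))
```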
Entanglement revival can occur only when the system-environment state is not a Markov state
NASA Astrophysics Data System (ADS)
Sargolzahi, Iman
2018-06-01
Markov states have been defined for tripartite quantum systems. In this paper, we generalize the definition of the Markov states to the arbitrary multipartite case and find the general structure of an important subset of them, which we will call strong Markov states. In addition, we focus on an important property of the Markov states: if the initial state of the whole system-environment is a Markov state, then each localized dynamics of the whole system-environment reduces to a localized subdynamics of the system. This provides us with a necessary condition for entanglement revival in an open quantum system: entanglement revival can occur only when the system-environment state is not a Markov state. To illustrate (a part of) our results, we consider the case that the environment is modeled as classical. In this case, though the correlation between the system and the environment remains classical during the evolution, the change of the state of the system-environment, from its initial Markov state to a state which is not a Markov one, leads to entanglement revival in the system. This shows that the non-Markovianity of a state is not equivalent to the existence of non-classical correlation in it, in general.
Ekwunife, Obinna I.
2017-01-01
Background Diarrhoea is a leading cause of death in Nigerian children under 5 years. Implementing the most cost-effective approach to diarrhoea management in Nigeria will help optimize health care resource allocation. This study evaluated the cost-effectiveness of various approaches to diarrhoea management, namely: the ‘no treatment’ approach (NT); the preventive approach with rotavirus vaccine; the integrated management of childhood illness for diarrhoea approach (IMCI); and the rotavirus vaccine plus integrated management of childhood illness for diarrhoea approach (rotavirus vaccine + IMCI). Methods A Markov cohort model conducted from the payer’s perspective was used to calculate the cost-effectiveness of the four interventions. The Markov model simulated a life cycle of 260 weeks for 33 million children under five years at risk of having diarrhoea (well state). Disability-adjusted life years (DALYs) averted were used to quantify clinical outcome. The incremental cost-effectiveness ratio (ICER) served as the measure of cost-effectiveness. Results Based on a cost-effectiveness threshold of $2,177.99 (representing Nigerian GDP per capita), all the approaches were very cost-effective but the rotavirus vaccine approach was dominated. While IMCI had the lowest ICER of $4.6/DALY averted, the addition of rotavirus vaccine was cost-effective with an ICER of $80.1/DALY averted. Rotavirus vaccine alone was less efficient in optimizing health care resource allocation. Conclusion The rotavirus vaccine + IMCI approach was the most cost-effective approach to childhood diarrhoea management. Its awareness and practice should be promoted in Nigeria, and addition of rotavirus vaccine should be considered for inclusion in the national programme of immunization. Although our findings suggest that addition of rotavirus vaccine to IMCI for diarrhoea is cost-effective, further vaccine demonstration studies or real-life studies may be needed to establish the cost-effectiveness of the vaccine in Nigeria. PMID:29261649
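The mechanics of such a Markov cohort comparison can be sketched in a few lines; every transition probability, cost, and disability weight below is invented for illustration and none is taken from the study.

```python
import numpy as np

def run(p_ill, cost_per_cycle, cycles=260):
    """Trace a normalized cohort through well/ill/dead states in weekly cycles."""
    P = np.array([[1 - p_ill - 0.0001, p_ill, 0.0001],   # well -> well/ill/dead
                  [0.600, 0.395, 0.005],                 # ill  -> well/ill/dead
                  [0.0,   0.0,   1.0]])                  # dead is absorbing
    state = np.array([1.0, 0.0, 0.0])
    cost = dalys = 0.0
    for _ in range(cycles):
        state = state @ P
        cost += cost_per_cycle * (state[0] + state[1])   # costs accrue while alive
        dalys += state[1] * 0.15 / 52 + state[2] * 1.0 / 52  # rough weights
    return cost, dalys

c0, d0 = run(p_ill=0.020, cost_per_cycle=0.00)   # 'no treatment' arm (assumed)
c1, d1 = run(p_ill=0.008, cost_per_cycle=0.02)   # intervention arm (assumed)
print("ICER ($ per DALY averted):", (c1 - c0) / (d0 - d1))
```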
Cost-Effectiveness of a Community Pharmacist-Led Sleep Apnea Screening Program - A Markov Model.
Perraudin, Clémence; Le Vaillant, Marc; Pelletier-Fleury, Nathalie
2013-01-01
Despite the high prevalence and major public health ramifications, obstructive sleep apnea syndrome (OSAS) remains underdiagnosed. In many developed countries, because community pharmacists (CP) are easily accessible, they have been developing additional clinical services that integrate the services of, and collaborate with, other healthcare providers (general practitioners (GPs), nurses, etc.). Alternative strategies for primary care screening programs for OSAS involving the CP are discussed. Objective To estimate the quality of life, costs, and cost-effectiveness of three screening strategies among patients who are at risk of having moderate to severe OSAS in primary care. Design Markov decision model. Data sources Published data. Target population Hypothetical cohort of 50-year-old male patients with symptoms highly evocative of OSAS. Time horizon The 5 years after initial evaluation for OSAS. Perspective Societal. Interventions Screening strategy with CP (CP-GP collaboration), screening strategy without CP (GP alone), and no screening. Outcome measures Quality of life, survival and costs for each screening strategy. Results Under almost all modeled conditions, the involvement of CPs in OSAS screening was cost-effective. The maximal incremental cost for the "screening strategy with CP" was about 455€ per QALY gained. Our results were robust but primarily sensitive to the costs of treatment by continuous positive airway pressure and the costs of untreated OSAS. The probabilistic sensitivity analysis showed that the "screening strategy with CP" was dominant in 80% of cases: it was more effective and less costly in 47% of cases, and within the cost-effective range (maximum incremental cost-effectiveness ratio of €6186.67/QALY) in 33% of cases. Conclusions CP involvement in OSAS screening is a cost-effective strategy. This proposal is consistent with the trend in Europe and the United States to extend the practices and responsibilities of the pharmacist in primary care.
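The probabilistic sensitivity analysis reported here has a simple skeleton: sample uncertain inputs, recompute incremental costs and QALYs per draw, and count how often the strategy dominates or stays under the willingness-to-pay threshold. The distributions below are invented; only the €6186.67/QALY threshold echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)
N, WTP = 10_000, 6186.67                  # draws; EUR/QALY threshold from text

d_cost = rng.normal(-50, 300, N)          # incremental cost of screening (EUR)
d_qaly = rng.normal(0.05, 0.04, N)        # incremental QALYs (assumed)

dominant = (d_cost < 0) & (d_qaly > 0)    # cheaper and more effective
acceptable = dominant | ((d_qaly > 0) & (d_cost / d_qaly < WTP))
print("dominant in", 100 * dominant.mean(), "% of draws")
print("cost-effective at WTP in", 100 * acceptable.mean(), "% of draws")
```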
Schmier, Jordana K; Lau, Edmund C; Patel, Jasmine D; Klenk, Juergen A; Greenspon, Arnold J
2017-11-01
The effects of device and patient characteristics on health and economic outcomes in patients with cardiac implantable electronic devices (CIEDs) are unclear. Modeling can estimate costs and outcomes for patients with CIEDs under a variety of scenarios, varying battery longevity, comorbidities, and care settings. The objective of this analysis was to compare changes in patient outcomes and payer costs attributable to increases in battery life of implantable cardiac defibrillators (ICDs) and cardiac resynchronization therapy defibrillators (CRT-D). We developed a Monte Carlo Markov model simulation to follow patients through primary implant, postoperative maintenance, generator replacement, and revision states. Patients were simulated in 3-month increments for 15 years or until death. Key variables included Charlson Comorbidity Index, CIED type, legacy versus extended battery longevity, mortality rates (procedure and all-cause), infection and non-infectious complication rates, and care settings. Costs included procedure-related (facility and professional), maintenance, and infections and non-infectious complications, all derived from Medicare data (2004-2014, 5% sample). Outcomes included counts of battery replacements, revisions, infections and non-infectious complications, and discounted (3%) costs and life years. An increase in battery longevity in ICDs yielded reductions in numbers of revisions (by 23%), battery changes (by 44%), infections (by 23%), non-infectious complications (by 10%), and total costs per patient (by 9%). Analogous reductions for CRT-Ds were 23% (revisions), 32% (battery changes), 22% (infections), 8% (complications), and 10% (costs). Based on modeling results, as battery longevity increases, patients experience fewer adverse outcomes and healthcare costs are reduced. Understanding the magnitude of the cost benefit of extended battery life can inform budgeting and planning decisions by healthcare providers and insurers.
Principles of health economic evaluations of lipid-lowering strategies.
Ara, Roberta; Basarir, Hasan; Ward, Sue Elizabeth
2012-08-01
Policy decision-making in cardiovascular disease is increasingly informed by the results generated from decision-analytic models (DAMs). The methodological approaches and assumptions used in these DAMs impact on the results generated and can influence a policy decision based on a cost per quality-adjusted life year (QALY) threshold. Decision makers need to be provided with a clear understanding of the key sources of evidence and how they are used in the DAM to make an informed judgement on the quality and appropriateness of the results generated. Our review identified 12 studies exploring the cost-effectiveness of pharmaceutical lipid-lowering interventions published since January 2010. All studies used Markov models with annual cycles to represent the long-term clinical pathway. Important differences in the model structures and the evidence base used within the DAMs were identified. Whereas reporting standards were reasonably good, there were many instances where the reporting of methods could be improved, particularly relating to baseline risk levels, the long-term benefit of treatment, and health state utility values. There is scope for improvement in the reporting of evidence and modelling approaches used within DAMs to provide decision makers with a clearer understanding of the quality and validity of the results generated. This would be assisted by fuller publication of models, perhaps through detailed web appendices.
Sobel, E.; Lange, K.
1996-01-01
The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310
Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation
NASA Astrophysics Data System (ADS)
Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill
2012-06-01
Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
A lifetime Markov model for the economic evaluation of chronic obstructive pulmonary disease.
Menn, Petra; Leidl, Reiner; Holle, Rolf
2012-09-01
Chronic obstructive pulmonary disease (COPD) is currently the fourth leading cause of death worldwide. It has serious health effects and causes substantial costs for society. The aim of the present paper was to develop a state-of-the-art decision-analytic model of COPD whereby the cost effectiveness of interventions in Germany can be estimated. To demonstrate the applicability of the model, a smoking cessation programme was evaluated against usual care. A seven-stage Markov model (disease stages I to IV according to the GOLD [Global Initiative for Chronic Obstructive Lung Disease] classification, states after lung-volume reduction surgery and lung transplantation, death) was developed to conduct a cost-utility analysis from the societal perspective over a time horizon of 10, 40 and 60 years. Patients entered the cohort model at the age of 45 with mild COPD. Exacerbations were classified into three levels: mild, moderate and severe. Estimation of stage-specific probabilities (for smokers and quitters), utilities and costs was based on German data where possible. Data on effectiveness of the intervention was retrieved from the literature. A discount rate of 3% was applied to costs and effects. Probabilistic sensitivity analysis was used to assess the robustness of the results. The smoking cessation programme was the dominant strategy compared with usual care, and the intervention resulted in an increase in health effects of 0.54 QALYs and a cost reduction of €1115 per patient (year 2007 prices) after 60 years. In the probabilistic analysis, the intervention dominated in about 95% of the simulations. Sensitivity analyses showed that uncertainty primarily originated from data on disease progression and treatment cost in the early stages of disease. The model developed allows the long-term cost effectiveness of interventions to be estimated, and has been adapted to Germany. The model suggests that the smoking cessation programme evaluated was more effective than usual care as well as being cost-saving. Most patients had mild or moderate COPD, stages for which parameter uncertainty was found to be high. This raises the need to improve data on the early stages of COPD.
Shida, Toshihiro; Endo, Yuji; Shiraishi, Tadashi; Yoshioka, Takashi; Suzuki, Kaoru; Kobayashi, Yuka; Ono, Yuki; Ito, Toshinori; Inoue, Tadao
2018-01-01
We evaluated four representative chemotherapy regimens for unresectable advanced or recurrent KRAS wild-type colorectal cancer: mFOLFOX6, mFOLFOX6+bevacizumab (Bmab), mFOLFOX6+cetuximab (Cmab), and mFOLFOX6+panitumumab (Pmab). We employed a decision analysis method in combination with clinical and economic evidence. The health outcomes of the regimens were analyzed on the basis of overall and progression-free survival. The data were drawn from the literature on randomized controlled clinical trials of the above-mentioned drugs. The total costs of the regimens were calculated on the basis of direct costs obtained from the medical records of patients diagnosed with unresectable advanced or recurrent colorectal cancer at Yamagata University Hospital and Yamagata Prefecture Central Hospital. Cost-effectiveness was analyzed using a Markov chain Monte Carlo (MCMC) method. The study was designed from the viewpoint of public medical care. The MCMC analysis revealed that expected life months and expected costs were 20 months/3,527,119 yen for mFOLFOX6, 27 months/8,270,625 yen for mFOLFOX6+Bmab, 29 months/13,174,6297 yen for mFOLFOX6+Cmab, and 6 months/12,613,445 yen for mFOLFOX6+Pmab. Incremental cost-effectiveness ratios per life month gained against mFOLFOX6 were 637,592 yen for mFOLFOX6+Bmab, 1,075,162 yen for mFOLFOX6+Cmab, and 587,455 yen for mFOLFOX6+Pmab. Compared to the conventional mFOLFOX6 regimen, molecular-targeted drug regimens provide better health outcomes, but the cost increases accordingly. mFOLFOX6+Pmab is the most cost-effective regimen among those surveyed in this study.
Castro Jaramillo, Héctor Eduardo; Moreno Viscaya, Mabel; Mejia, Aurelio E
2016-01-01
This article presents a cost-utility analysis from the Colombian health system perspective comparing primary prophylaxis to on-demand treatment using exogenous clotting factor VIII (FVIII) for patients with severe hemophilia type A. We developed a Markov model to estimate expected costs and outcomes (measured as quality-adjusted life-years, QALYs) for each strategy. Transition probabilities were estimated using published studies; utility weights were obtained from a sample of Colombian patients with hemophilia and costs were gathered using local data. Both deterministic and probabilistic sensitivity analyses were performed to assess the robustness of results. The additional cost per QALY gained of primary prophylaxis compared with on-demand treatment was 105,081,022 Colombian pesos (COP) (55,204 USD), and thus not considered cost-effective according to a threshold of up to three times the current Colombian gross domestic product (GDP) per capita. When primary prophylaxis was provided throughout life using recombinant FVIII (rFVIII), which is much costlier than FVIII, the additional cost per QALY gained reached 174,159,553 COP (91,494 USD). Using a decision rule of up to three times the Colombian GDP per capita, primary prophylaxis (with either FVIII or rFVIII) would not be considered cost-effective in this country. However, a final decision on providing or withholding primary prophylaxis as the gold standard of care for severe hemophilia type A should also consider broader criteria than the incremental cost-effectiveness ratio itself. Only a price reduction of exogenous FVIII of 50 percent or more would make primary prophylaxis cost-effective in this context.
Zhang, Qiong; van Vugt, Marieke; Borst, Jelmer P; Anderson, John R
2018-07-01
In this study, we investigated the time course and neural correlates of the retrieval process underlying visual working memory. We made use of a rare dataset in which the same task was recorded using both scalp electroencephalography (EEG) and electrocorticography (ECoG). This allowed us to examine with great spatial and temporal detail how the retrieval process works, and in particular how the medial temporal lobe (MTL) is involved. In each trial, participants judged whether a probe face had been among a set of recently studied faces. With a method that combines hidden semi-Markov models and multivariate pattern analysis, the neural signal was decomposed into a sequence of latent cognitive stages with information about their durations on a trial-by-trial basis. Analyzed separately, EEG and ECoG data yielded converging results on discovered stages and their interpretation, which reflected 1) a brief pre-attention stage, 2) encoding the stimulus, 3) retrieving the studied set, and 4) making a decision. Combining these stages with the high spatial resolution of ECoG suggested that activity in the temporal cortex reflected item familiarity in the retrieval stage; and that once retrieval is complete, there is active maintenance of the studied face set in the decision stage in the MTL. During this same period, the frontal cortex guides the decision by means of theta coupling with the MTL. These observations generalize previous findings on the role of MTL theta from long-term memory tasks to short-term memory tasks. Copyright © 2018 Elsevier Inc. All rights reserved.
Selection of first-line therapy in multiple sclerosis using risk-benefit decision analysis.
Bargiela, David; Bianchi, Matthew T; Westover, M Brandon; Chibnik, Lori B; Healy, Brian C; De Jager, Philip L; Xia, Zongqi
2017-02-14
To integrate long-term measures of disease-modifying drug efficacy and risk to guide selection of first-line treatment of multiple sclerosis. We created a Markov decision model to evaluate disability worsening and progressive multifocal leukoencephalopathy (PML) risk in patients receiving natalizumab (NTZ), fingolimod (FGL), or glatiramer acetate (GA) over 30 years. Leveraging publicly available data, we integrated treatment utility, disability worsening, and risk of PML into quality-adjusted life-years (QALYs). We performed sensitivity analyses varying PML risk, mortality and morbidity, and relative risk of disease worsening across clinically relevant ranges. Over the entire reported range of NTZ-associated PML risk, NTZ as first-line therapy is predicted to provide a greater net benefit (15.06 QALYs) than FGL (13.99 QALYs) or GA (12.71 QALYs) treatment over 30 years, after accounting for loss of QALYs due to PML or death (resulting from all causes). NTZ treatment is associated with delayed worsening to an Expanded Disability Status Scale score ≥6.0 vs FGL or GA (22.7, 17.0, and 12.4 years, respectively). Compared to untreated patients, NTZ-treated patients have a greater relative risk of death in the early years of treatment that varies according to PML risk profile. NTZ as a first-line treatment is associated with the highest net benefit across full ranges of PML risk, mortality, and morbidity compared to FGL or GA. Integrated modeling of long-term treatment risks and benefits informs stratified clinical decision-making and can support patient counseling on selection of first-line treatment options. © 2017 American Academy of Neurology.
Shi, Guo; Zhang, Shun-xiang
2013-03-01
To synthesize relevant data and analyze the benefit-cost ratio of strategies for preventing maternal-infantile transmission of hepatitis B virus (HBV) infection, and to explore the optimal strategy. A decision tree model was constructed according to the strategies of hepatitis B immunization, and a Markov model was used to simulate the complex disease progression after HBV infection. Parameters in the models were drawn from meta-analysis, and information was collected from a field study and a review of the literature. Economic evaluation was performed to calculate costs, benefits, and the benefit-cost ratio. Sensitivity analysis was also conducted and a tornado graph was drawn. Covering the six currently possible strategies for preventing maternal-infantile transmission of HBV infection, a multi-stage decision tree model was constructed for screening for hepatitis B surface antigen (HBsAg) alone or for HBsAg followed by hepatitis B e antigen (HBeAg). The dose and the number of injections of hepatitis B immunoglobulin (HBIG) and hepatitis B vaccine were taken into consideration in the model. All the strategies were considered to be cost-saving, while the strategy of screening for HBsAg and then offering hepatitis B vaccine of 10 µg×3 for all neonates, with HBIG of 100 IU×1 for the neonates born to mothers who tested positive for HBsAg, appeared to be the most cost-saving. Among the strategies, the benefit-cost ratio of using 100 IU HBIG was similar to that of 200 IU HBIG, and one shot of HBIG was superior to two shots. Results from sensitivity analysis suggested that the immunization rates and the efficacy of the strategy in preventing maternal-infantile transmission were the main sensitive variables in the model. The passive-active immunoprophylaxis strategy using 10 µg hepatitis B vaccine combined with 100 IU HBIG appeared to be the optimal strategy for preventing maternal-infantile transmission, while the immunization rates and the efficacy of the strategy played the key roles in choosing the ideal strategy.
Burroughs, N J; Pillay, D; Mutimer, D
1999-01-01
Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine-resistant mutants grow faster in transplant patients than in chronic carriers, which probably explains the differences in emergence times and failure rates between these two sets of patients. Incorporation of dynamic models into Bayesian parameter analysis is of general applicability in medical statistics. PMID:10643081
Perspective: Markov models for long-timescale biomolecular dynamics.
Schwantes, C R; McGibbon, R T; Pande, V S
2014-09-07
Molecular dynamics simulations have the potential to provide atomic-level detail and insight to important questions in chemical physics that cannot be observed in typical experiments. However, simply generating a long trajectory is insufficient, as researchers must be able to transform the data in a simulation trajectory into specific scientific insights. Although this analysis step has often been taken for granted, it deserves further attention as large-scale simulations become increasingly routine. In this perspective, we discuss the application of Markov models to the analysis of large-scale biomolecular simulations. We draw attention to recent improvements in the construction of these models as well as several important open issues. In addition, we highlight recent theoretical advances that pave the way for a new generation of models of molecular kinetics.
NASA Astrophysics Data System (ADS)
Rahman, P. A.; D'K Novikova Freyre Shavier, G.
2018-03-01
This scientific paper is devoted to the analysis of the mean time to data loss of redundant RAID-6 disk arrays with alternation of data, considering different disk failure rates in the normal, degraded and rebuild states of the array, as well as a nonzero disk replacement time. The reliability model developed by the authors on the basis of a Markov chain, and the resulting formula for estimating the mean time to data loss (MTTDL) of RAID-6 disk arrays, are also presented. Finally, a technique for estimating the initial reliability parameters is described, together with examples of MTTDL calculations for RAID-6 disk arrays with different numbers of disks.
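A minimal sketch of the kind of calculation the abstract describes: the mean time to absorption of a continuous-time Markov chain, applied to a RAID-6-like failure/repair model in which data loss occurs on a third concurrent disk failure. The disk count and rates are assumed placeholders, not the paper's parameters.

```python
# Minimal sketch: mean time to data loss (MTTDL) for a RAID-6-like array
# from an absorbing continuous-time Markov chain. Transient states: 0, 1,
# or 2 failed disks; the absorbing state is data loss. Rates are
# illustrative assumptions.
import numpy as np

n = 10          # disks in the array (assumed)
lam = 1e-5      # per-disk failure rate, 1/h (assumed)
mu = 0.1        # repair/rebuild completion rate, 1/h (assumed)

# Sub-generator T over the transient states (rows: 0, 1, 2 failed disks).
# Diagonals make the rows of the full generator sum to zero; the missing
# mass in the last row flows to the data-loss state.
T = np.array([
    [-n * lam,  n * lam,                0.0],
    [mu,       -(mu + (n - 1) * lam),   (n - 1) * lam],
    [0.0,       mu,                    -(mu + (n - 2) * lam)],
])

# Mean time to absorption m solves T m = -1 (standard absorbing-CTMC result).
m = np.linalg.solve(T, -np.ones(3))
print(f"MTTDL starting from a healthy array: {m[0]:.3e} hours")
```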
Al-Inany, Hesham G; Abou-Setta, Ahmed M; Aboulghar, Mohamed A; Mansour, Ragaa T; Serour, Gamal I
2006-02-01
Both cost and effectiveness should be considered conjointly to aid judgments about drug choice. Therefore, based on the results of a recently published meta-analysis, a Markov model was developed to conduct a cost-effectiveness analysis estimating the cost of an ongoing pregnancy in IVF/intracytoplasmic sperm injection (ICSI) cycles. In addition, Monte Carlo micro-simulation was used to examine the potential impact of assumptions and other uncertainties represented in the model. The results of the study reveal that the estimated average cost of an ongoing pregnancy is 13,946 Egyptian pounds (EGP) for a human menopausal gonadotrophin (HMG) cycle and 18,721 EGP for a recombinant FSH (rFSH) cycle. A sensitivity analysis on cycle costs demonstrated that the rFSH price would have to be 0.61 EGP/IU to be as cost-effective as HMG at the price of 0.64 EGP/IU (i.e. around a 60% reduction in its current price). The difference in cost between HMG and rFSH over 100,000 cycles would result in an additional 4565 ongoing pregnancies if HMG were used. Therefore, HMG was clearly more cost-effective than rFSH. The decision to adopt a more expensive, cost-ineffective treatment could result in a lower number of IVF/ICSI treatment cycles being undertaken, especially in most developing countries.
spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains
NASA Astrophysics Data System (ADS)
Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo
2016-09-01
The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on well-known prediction methods (such as indicator kriging and co-kriging) are also implemented in the spMC package, and further advanced methods are available for simulation, e.g. path methods and Bayesian procedures that exploit the maximum entropy principle. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis compares the computational efficiency of the simulation/prediction algorithms for different numbers of CPU cores, using the example data set of the case study included in the package.
Kirsch, Florian
2015-01-01
Diabetes is the most expensive chronic disease; therefore, disease management programs (DMPs) were introduced. The aim of this review is to determine whether Markov models are adequate for evaluating the cost-effectiveness of complex interventions such as DMPs. Additionally, the quality of the models was evaluated using the Philips and Caro quality appraisals. The five reviewed models incorporated the DMP into the model differently: two models integrated effectiveness rates derived from one clinical trial/meta-analysis, and three models combined interventions from different sources into a DMP. The results range from cost savings and a QALY gain to costs of US$85,087 per QALY. Spearman's rank coefficient showed no correlation between the two quality appraisals. With restrictions on the data selection process, Markov models are adequate for determining the cost-effectiveness of DMPs; however, to allow prioritization of medical services, more flexibility in the models is necessary to enable the evaluation of single additional interventions.
Monitoring volcano activity through Hidden Markov Model
NASA Astrophysics Data System (ADS)
Cassisi, C.; Montalto, P.; Prestifilippo, M.; Aliotta, M.; Cannata, A.; Patanè, D.
2013-12-01
During 2011-2013, Mt. Etna was mainly characterized by cyclic occurrences of lava fountains, totaling 38 episodes. During this time interval the volcano's states (QUIET, PRE-FOUNTAIN, FOUNTAIN, POST-FOUNTAIN), whose automatic recognition is very useful for monitoring purposes, turned out to be strongly related to the trend of the RMS (root mean square) of the seismic signal recorded by stations close to the summit area. Since the behavior of the RMS time series is considered to be stochastic, we can try to model the system generating its values, assuming it to be a Markov process, by using hidden Markov models (HMMs). HMMs are a powerful tool for modeling any time-varying series. HMM analysis seeks to recover the sequence of hidden states from the observed emissions. In our framework, observed emissions are characters generated by the SAX (Symbolic Aggregate approXimation) technique, which maps RMS time series values to discrete literal emissions. The experiments show how it is possible to infer volcano states by means of HMMs and SAX.
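The pipeline the abstract describes can be mirrored at toy scale: RMS values are discretized into symbols (a crude stand-in for SAX) and the most likely hidden state sequence is decoded with the Viterbi algorithm. The transition, emission and prior probabilities below are illustrative assumptions, not the trained Etna model.

```python
# Minimal sketch: SAX-like discretization of an RMS series followed by
# Viterbi decoding of hidden volcano states. All probabilities are
# illustrative assumptions.
import numpy as np

states = ["QUIET", "PRE-FOUNTAIN", "FOUNTAIN", "POST-FOUNTAIN"]
A = np.array([[0.96, 0.02, 0.01, 0.01],   # state transition probabilities
              [0.04, 0.85, 0.10, 0.01],
              [0.01, 0.01, 0.77, 0.21],
              [0.10, 0.01, 0.01, 0.88]])
B = np.array([[0.80, 0.15, 0.05],         # emission probs for symbols low/mid/high
              [0.30, 0.50, 0.20],
              [0.05, 0.15, 0.80],
              [0.30, 0.50, 0.20]])
pi = np.array([0.97, 0.01, 0.01, 0.01])

def discretize(rms, edges=(0.5, 2.0)):
    """Map RMS values to symbols 0/1/2 via fixed breakpoints (SAX-like)."""
    return np.digitize(rms, edges)

def viterbi(obs):
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        cand = logd[:, None] + np.log(A)   # cand[i, j]: best score ending in j via i
        back.append(cand.argmax(axis=0))
        logd = cand.max(axis=0) + np.log(B[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):              # backtrack the best path
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

rms = np.array([0.2, 0.3, 1.0, 1.5, 3.0, 4.0, 1.0, 0.3])
print(viterbi(discretize(rms)))
```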
On the Mathematical Consequences of Binning Spike Trains.
Cessac, Bruno; Le Ny, Arnaud; Löcherbach, Eva
2017-01-01
We initiate a mathematical analysis of hidden effects induced by binning spike trains of neurons. Assuming that the original spike train has been generated by a discrete Markov process, we show that binning generates a stochastic process that is no longer Markov but is instead a variable-length Markov chain (VLMC) with unbounded memory. We also show that the law of the binned raster is a Gibbs measure in the DLR (Dobrushin-Lanford-Ruelle) sense coined in mathematical statistical mechanics. This allows the derivation of several important consequences on statistical properties of binned spike trains. In particular, we introduce the DLR framework as a natural setting to mathematically formalize anticipation, that is, to tell "how good" our nervous system is at making predictions. In a probabilistic sense, this corresponds to conditioning a process on its future, and we discuss how binning may affect our conclusions on this ability. We finally comment on the possible consequences of binning in the detection of spurious phase transitions or in the detection of incorrect evidence of criticality.
Self-Organizing Hidden Markov Model Map (SOHMMM).
Ferles, Christos; Stafylopatis, Andreas
2013-12-01
A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Surface Connectivity and Interocean Exchanges From Drifter-Based Transition Matrices
NASA Astrophysics Data System (ADS)
McAdam, Ronan; van Sebille, Erik
2018-01-01
Global surface transport in the ocean can be represented by using the observed trajectories of drifters to calculate probability distribution functions. The oceanographic applications of the Markov Chain approach to modeling include tracking of floating debris and water masses, globally and on yearly-to-centennial time scales. Here we analyze the error inherent in mapping trajectories onto a grid and the consequences for ocean transport modeling and detection of accumulation structures. A sensitivity analysis of Markov Chain parameters is performed in an idealized Stommel gyre and western boundary current as well as with observed ocean drifters, complementing previous studies on widespread floating debris accumulation. Focusing on two key areas of interocean exchange—the Agulhas system and the North Atlantic intergyre transport barrier—we assess the capacity of the Markov Chain methodology to detect surface connectivity and dynamic transport barriers. Finally, we extend the methodology's functionality to separate the geostrophic and nongeostrophic contributions to interocean exchange in these key regions.
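A minimal sketch of the drifter-based transition-matrix idea: bin positions on a grid, count where trajectories move over a fixed lag, row-normalize the counts into a Markov chain, and evolve a tracer density. The random-walk "drifters" below are synthetic stand-ins for observed trajectories.

```python
# Minimal sketch: build a grid transition matrix from trajectories and
# evolve a tracer density with it. Toy random walks stand in for drifters.
import numpy as np

rng = np.random.default_rng(0)
nx = ny = 10                                # 10 x 10 grid over [0,1)^2

def cell(x, y):
    """Map a position in [0,1)^2 to a flattened grid-cell index."""
    return int(y * ny) * nx + int(x * nx)

# Toy 'drifters': random walks with a weak eastward drift (periodic domain).
counts = np.zeros((nx * ny, nx * ny))
for _ in range(500):
    x, y = rng.random(), rng.random()
    for _ in range(50):
        x2 = (x + 0.02 + 0.03 * rng.standard_normal()) % 1.0
        y2 = (y + 0.03 * rng.standard_normal()) % 1.0
        counts[cell(x, y), cell(x2, y2)] += 1
        x, y = x2, y2

# Row-normalize counts into transition probabilities; empty rows self-loop.
rowsum = counts.sum(axis=1, keepdims=True)
P = np.where(rowsum > 0, counts / np.maximum(rowsum, 1), 0.0)
P[rowsum[:, 0] == 0, :] = np.eye(nx * ny)[rowsum[:, 0] == 0]

# Evolve an initial density n steps: d_{t+1} = d_t P.
d = np.zeros(nx * ny)
d[cell(0.05, 0.5)] = 1.0
for _ in range(20):
    d = d @ P
print("density mass after 20 steps (should stay 1):", d.sum())
```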
A reward semi-Markov process with memory for wind speed modeling
NASA Astrophysics Data System (ADS)
Petroni, F.; D'Amico, G.; Prattico, F.
2012-04-01
The increasing interest in renewable energy leads scientific research to find better ways to recover most of the available energy. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (Betz law), at a specific pitch angle and when the ratio between the output and input wind speeds is equal to 1/3. The pitch angle is the angle formed between the airfoil of the wind turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed airfoil geometry, so they operate at an efficiency lower than 59.3%. New-generation wind turbines, instead, can vary the pitch angle by rotating the blades, which enables them to recover the maximum energy at different wind speeds, working at the Betz limit over a range of speed ratios. A powerful pitch-angle control system also allows the turbine to recover energy more effectively in transient regimes. A good stochastic model for wind speed is therefore needed both to optimize turbine design and to help the control system predict wind speed so that the blades can be positioned quickly and correctly. Synthetic wind speed data are also a powerful instrument for verifying turbine structures and for estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [1] presents a comparison between a first-order and a second-order Markov chain. A similar study, for the first-order Markov chain only, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind speed data. An attempt to jointly model wind speed and direction is presented in [3], using two models: first-order Markov chains with different numbers of states, and a Weibull distribution. All these models use Markov chains to generate synthetic wind speed time series, but the search for a better model is still open. Addressing this issue, we applied new models that generalize Markov models; more precisely, we applied semi-Markov models to generate synthetic wind speed time series. The primary goal of this analysis is to study the time history of the wind in order to assess its reliability as a source of power and to determine the associated storage levels required. To this end we use a probabilistic model based on an indexed semi-Markov process [4] to which a reward structure is attached. Our model is used to calculate the expected energy produced by a given turbine and its variability, expressed by the variance of the process. Our results can be used to compare different wind farms based on their reward and also on the risk of missed production due to the intrinsic variability of the wind speed process. The model is used to generate synthetic wind speed time series by means of Monte Carlo simulations, and a backtesting procedure is used to compare the first- and second-order moments of rewards between real and synthetic data. [1] A. Shamshad, M.A. Bawadi, W.M.W. Wan Hussin, T.A. Majid, S.A.M. Sanusi, First and second order Markov chain models for synthetic generation of wind speed time series, Energy 30 (2005) 693-708. [2] H. Nfaoui, H. Essiarab, A.A.M. Sayigh, A stochastic Markov chain model for simulating wind speed time series at Tangiers, Morocco, Renewable Energy 29 (2004) 1407-1418. [3] F. Youcef Ettoumi, H. Sauvageot, A.-E.-H. Adane, Statistical bivariate modeling of wind using first-order Markov chain and Weibull distribution, Renewable Energy 28 (2003) 1787-1802. [4] F. Petroni, G. D'Amico, F. Prattico, Indexed semi-Markov process for wind speed modeling. To be submitted.
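For context, the baseline approach cited in [1,2] can be sketched in a few lines: fit a first-order Markov chain to binned wind speeds, then sample a synthetic series from it. The input series below is a synthetic placeholder; in practice it would be measured wind speed data.

```python
# Minimal sketch of a first-order Markov chain wind speed generator.
# The 'wind' array is placeholder data standing in for measurements.
import numpy as np

rng = np.random.default_rng(1)
wind = np.abs(rng.normal(6.0, 2.5, size=5000))     # placeholder data (m/s)

edges = np.arange(0.0, 16.0, 2.0)                  # 2 m/s speed classes
states = np.digitize(wind, edges)                  # class index per sample
n = int(states.max()) + 1

# Estimate the transition probability matrix from successive pairs.
P = np.zeros((n, n))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
for i in range(n):                                 # unvisited states self-loop
    if P[i].sum() == 0:
        P[i, i] = 1.0

# Sample a synthetic state sequence and map classes back to speeds.
s, synth = int(states[0]), []
for _ in range(5000):
    s = rng.choice(n, p=P[s])
    synth.append((s - 0.5) * 2.0)                  # class midpoint (m/s)
print(f"real mean {wind.mean():.2f} m/s, synthetic mean {np.mean(synth):.2f} m/s")
```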
Interference effects of categorization on decision making.
Wang, Zheng; Busemeyer, Jerome R
2016-05-01
Many decision making tasks in life involve a categorization process, but the effects of categorization on subsequent decision making have rarely been studied. This issue was explored in three experiments (N=721), in which participants were shown a face stimulus on each trial and performed variations of categorization-decision tasks. On C-D trials, they categorized the stimulus and then made an action decision; on X-D trials, they were told the category and then made an action decision; on D-alone trials, they only made an action decision. An interference effect emerged in some of the conditions, such that the probability of an action on the D-alone trials (i.e., when there was no explicit categorization before the decision) differed from the total probability of the same action on the C-D or X-D trials (i.e., when there was explicit categorization before the decision). Interference effects are important because they indicate a violation of the classical law of total probability, which is assumed by many cognitive models. Across all three experiments, a complex pattern of interference effects systematically occurred for different types of stimuli and for different types of categorization-decision tasks. These interference effects present a challenge for traditional cognitive models, such as Markov and signal detection models, but a quantum cognition model, called the belief-action entanglement (BAE) model, predicted that these results could occur. The BAE model employs the quantum principles of superposition and entanglement to explain the psychological mechanisms underlying the puzzling interference effects. The model can be applied to many important and practical categorization-decision situations in life. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bharath, S..; Rajan, K. S.; Ramachandra, T. V.
2014-11-01
Land use changes in forested landscapes are highly complex and dynamic, affected by natural, socio-economic, cultural, political and other factors. Remote sensing (RS) and geographical information system (GIS) techniques, coupled with multi-criteria evaluation functions such as the Markov-cellular automata (CA-Markov) model, help in analysing the intensity and extent of human activities affecting the terrestrial biosphere and in forecasting their future course. Karwar taluk in the Central Western Ghats of Karnataka state, India, has seen rapid transitions in its forest cover due to various anthropogenic activities, primarily driven by major industrial activities. A study based on Landsat- and IRS-derived data along with the CA-Markov method has helped in characterizing the patterns and trends of land use changes over the period 2004-2013, and expected transitions were predicted for a set of scenarios through 2013-2022. The analysis reveals a loss of pristine forest cover from 75.51% to 67.36% (1973 to 2013) and an increase in agricultural land as well as built-up area to 8.65% (2013), causing impacts on the local flora and fauna. The other factors driving these changes are the aggregate demand for land, local and regional effects of land use activities such as deforestation, improper practices in the expansion of agriculture and infrastructure development, and the deteriorating availability of natural resources. The spatio-temporal models helped in visualizing ongoing changes as well as predicting likely ones. The CA-Markov based analysis provides insights into the localized changes impacting these regions and can be useful in developing appropriate mitigation and management approaches based on the modelled future impacts. This necessitates immediate measures for minimizing the future impacts.
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.
Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Maltach, E. G.
1969-01-01
The design of the next generation of spaceborne digital computers is described, and a possible multiprocessor computer configuration is analyzed. For the analysis, a set of representative space computing tasks was abstracted from the Lunar Module Guidance Computer programs as executed during the Apollo lunar landing. This computer performed about 24 concurrent functions, with iteration rates from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. It was concluded, based on a comparison of simulation and Markov results, that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is a critical one for the efficiency of the multiprocessor. It is recommended that research into the area of automatic job scheduling be performed.
Reduced-order dynamic output feedback control of uncertain discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Morais, Cecília F.; Braga, Márcio F.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2017-11-01
This paper deals with the problem of designing reduced-order robust dynamic output feedback controllers for discrete-time Markov jump linear systems (MJLS) with polytopic state space matrices and uncertain transition probabilities. Starting from a full order, mode-dependent and polynomially parameter-dependent dynamic output feedback controller, sufficient linear matrix inequality based conditions are provided for the existence of a robust reduced-order dynamic output feedback stabilising controller with complete, partial or none mode dependency assuring an upper bound to the H2 or the H∞ norm of the closed-loop system. The main advantage of the proposed method when compared to the existing approaches is the fact that the dynamic controllers are exclusively expressed in terms of the decision variables of the problem. In other words, the matrices that define the controller realisation do not depend explicitly on the state space matrices associated with the modes of the MJLS. As a consequence, the method is specially suitable to handle order reduction or cluster availability constraints in the context of H2 or H∞ dynamic output feedback control of discrete-time MJLS. Additionally, as illustrated by means of numerical examples, the proposed approach can provide less conservative results than other conditions in the literature.
Derivation of Markov processes that violate detailed balance
NASA Astrophysics Data System (ADS)
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
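The detailed balance condition the abstract discusses is easy to test numerically: compute the stationary distribution pi of a chain and check whether the flux matrix with entries pi_i P_ij is symmetric. A minimal sketch follows, using an illustrative three-state cyclic chain that violates the condition.

```python
# Minimal sketch: stationary distribution of a cyclic Markov chain and a
# numerical detailed-balance check (pi_i P_ij == pi_j P_ji). The biased
# cycle below is an illustrative chain that breaks detailed balance.
import numpy as np

P = np.array([[0.1, 0.8, 0.1],    # strong clockwise bias -> net cyclic flux
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

flux = pi[:, None] * P            # flux[i, j] = pi_i P_ij
print("stationary pi:", np.round(pi, 4))
print("max |flux_ij - flux_ji|:", np.abs(flux - flux.T).max())
# A nonzero antisymmetric part signals broken detailed balance, i.e. a
# net probability current around the cycle.
```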
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here, with Markov Chain Monte Carlo (MCMC) sampling, feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
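A minimal sketch of the general pattern the abstract describes, emulator-accelerated Bayesian calibration: a cheap surrogate replaces the expensive system code inside a Metropolis-Hastings loop. The analytic friction-factor-style curve below is an illustrative stand-in, not the paper's FFGP emulator, and all data are synthetic.

```python
# Minimal sketch: Metropolis-Hastings calibration of one parameter using
# a cheap surrogate in place of an expensive system code. Surrogate form,
# data, and noise level are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)

def surrogate(theta, Re):
    """Cheap stand-in for the code emulator: f = theta / Re**0.25."""
    return theta / Re ** 0.25

# Synthetic 'experimental' data generated with true theta = 0.316.
Re = np.logspace(4, 6, 20)
sigma = 5e-4
y_obs = surrogate(0.316, Re) + rng.normal(0, sigma, size=Re.size)

def log_post(theta):
    if theta <= 0:                        # flat prior on theta > 0
        return -np.inf
    r = y_obs - surrogate(theta, Re)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis over the calibration parameter.
theta, lp, samples = 0.5, log_post(0.5), []
for _ in range(20000):
    prop = theta + 0.01 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])               # drop burn-in
print(f"posterior mean {post.mean():.4f} +/- {post.std():.4f}")
```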
Applying Probabilistic Decision Models to Clinical Trial Design
Smith, Wade P; Phillips, Mark H
2018-01-01
Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
Recommendation System for Adaptive Learning.
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang
2018-01-01
An adaptive learning system aims at providing instruction tailored to the current status of a learner, differing from the traditional classroom experience. The latest advances in technology make adaptive learning possible, which has the potential to provide students with high-quality learning benefit at a low cost. A key component of an adaptive learning system is a recommendation system, which recommends the next material (video lectures, practices, and so on, on different skills) to the learner, based on the psychometric assessment results and possibly other individual characteristics. An important question then follows: How should recommendations be made? To answer this question, a mathematical framework is proposed that characterizes the recommendation process as a Markov decision problem, for which decisions are made based on the current knowledge of the learner and that of the learning materials. In particular, two plain vanilla systems are introduced, for which the optimal recommendation at each stage can be obtained analytically.
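The Markov decision formulation the abstract proposes can be illustrated with standard value iteration: states are knowledge levels, actions are materials to recommend, and the optimal policy maximizes discounted progress toward mastery. All numbers below are invented for illustration.

```python
# Minimal sketch: recommendation as a Markov decision problem solved by
# value iteration. States: novice/intermediate/mastery; actions: which
# material to show. Probabilities and rewards are illustrative.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.95

# P[a, s, t]: chance the learner moves from state s to t after material a.
P = np.array([
    [[0.6, 0.4, 0.0], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]],   # easy material
    [[0.8, 0.2, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # hard material
])
R = np.array([0.0, 0.0, 1.0])      # reward for being at mastery (state 2)

V = np.zeros(n_states)
for _ in range(1000):              # value iteration to convergence
    Q = R[:, None] + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.max(axis=1)          # Q[s, a]: value of material a in state s
    if np.abs(V_new - V).max() < 1e-9:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal material per state:", policy, " state values:", np.round(V, 3))
# With these numbers the policy recommends the easy material to novices
# and the hard material to intermediate learners.
```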
Identifying and correcting non-Markov states in peptide conformational dynamics
NASA Astrophysics Data System (ADS)
Nerukh, Dmitry; Jensen, Christian H.; Glen, Robert C.
2010-02-01
Conformational transitions in proteins define their biological activity and can be investigated in detail using the Markov state model. The fundamental assumption on the transitions between the states, their Markov property, is critical in this framework. We test this assumption by analyzing the transitions obtained directly from the dynamics of a molecular dynamics simulation of the peptide valine-proline-alanine-leucine, with states defined phenomenologically using clustering in dihedral space. We find that the transitions are Markovian at the time scale of ≈50 ps and longer. However, at the time scale of 30-40 ps the dynamics loses its Markov property. Our methodology reveals the mechanism that leads to non-Markov behavior. It also provides a way of regrouping the conformations into new states that now possess the required Markov property of their dynamics.
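A minimal sketch of a Markovianity check in this spirit: estimate transition matrices at lags tau and 2*tau from a state trajectory and compare T(2*tau) against T(tau)^2 (a Chapman-Kolmogorov test). The trajectory here is simulated from a known chain; in practice it would come from clustered MD conformations.

```python
# Minimal sketch: Chapman-Kolmogorov test for the Markov property of a
# discrete state trajectory. The trajectory is simulated for illustration.
import numpy as np

rng = np.random.default_rng(3)
T_true = np.array([[0.95, 0.05, 0.00],
                   [0.05, 0.90, 0.05],
                   [0.00, 0.10, 0.90]])

# Simulate a long state trajectory from the true chain.
traj = [0]
for _ in range(200000):
    traj.append(rng.choice(3, p=T_true[traj[-1]]))
traj = np.array(traj)

def estimate_T(traj, lag):
    """Row-normalized transition counts at the given lag."""
    C = np.zeros((3, 3))
    for a, b in zip(traj[:-lag], traj[lag:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

T1, T2 = estimate_T(traj, 10), estimate_T(traj, 20)
print("max |T(2*tau) - T(tau)^2| =", np.abs(T2 - T1 @ T1).max())
# Small residuals support the Markov assumption at this lag; large,
# systematic residuals indicate non-Markov states that need regrouping.
```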
Rudmik, Luke; Soler, Zachary M.; Mace, Jess C.; Schlosser, Rodney J.; Smith, Timothy L.
2014-01-01
Objective: To evaluate the long-term cost-effectiveness of endoscopic sinus surgery (ESS) compared to continued medical therapy for patients with refractory chronic rhinosinusitis (CRS). Study Design: Cohort-style Markov decision tree economic evaluation. Methods: The economic perspective was the US third party payer with a 30 year time horizon. The two comparative treatment strategies were: 1) ESS followed by appropriate postoperative medical therapy and 2) continued medical therapy alone. The primary outcome was the incremental cost per quality adjusted life year (QALY). Costs were discounted at a rate of 3.5% in the reference case. Multiple sensitivity analyses were performed, including differing time horizons, discounting scenarios, and a probabilistic sensitivity analysis (PSA). Results: The reference case demonstrated that the ESS strategy cost a total of $48,838.38 and produced a total of 20.50 QALYs. The medical therapy alone strategy cost a total of $28,948.98 and produced a total of 17.13 QALYs. The incremental cost-effectiveness ratio (ICER) for ESS versus medical therapy alone is $5,901.90 per QALY. The cost-effectiveness acceptability curve from the PSA demonstrated 74% certainty that the ESS strategy is the most cost-effective decision for any willingness-to-pay threshold greater than $25,000. The time horizon analysis suggests that ESS becomes the cost-effective intervention within the 3rd year after surgery. Conclusion: Results from this study suggest that employing an ESS treatment strategy is the most cost-effective intervention compared to continued medical therapy alone for the long-term management of patients with refractory CRS. PMID:25186499
Effectiveness of diagnostic strategies in suspected delayed cerebral ischemia: a decision analysis.
Rawal, Sapna; Barnett, Carolina; John-Baptiste, Ava; Thein, Hla-Hla; Krings, Timo; Rinkel, Gabriel J E
2015-01-01
Delayed cerebral ischemia (DCI) is a serious complication after aneurysmal subarachnoid hemorrhage. If DCI is suspected clinically, imaging methods designed to detect angiographic vasospasm or regional hypoperfusion are often used before instituting therapy. Uncertainty in the strength of the relationship between imaged vasospasm or perfusion deficits and DCI-related outcomes raises the question of whether imaging to select patients for therapy improves outcomes in clinical DCI. Decision analysis was performed using Markov models. Strategies were either to treat all patients immediately or to first undergo diagnostic testing by digital subtraction angiography or computed tomography angiography to assess for angiographic vasospasm, or computed tomography perfusion to assess for perfusion deficits. According to current practice guidelines, treatment consisted of induced hypertension. Outcomes were survival in terms of life-years and quality-adjusted life-years. When treatment was assumed to be ineffective in nonvasospasm patients, Treat All and digital subtraction angiography were equivalent strategies; when a moderate treatment effect was assumed in nonvasospasm patients, Treat All became the superior strategy. Treating all patients was also superior to selecting patients for treatment via computed tomography perfusion. One-way sensitivity analyses demonstrated that the models were robust; 2- and 3-way sensitivity analyses with variation of disease and treatment parameters reinforced dominance of the Treat All strategy. Imaging studies to test for the presence of angiographic vasospasm or perfusion deficits in patients with clinical DCI do not seem helpful in selecting which patients should undergo treatment and may not improve outcomes. Future directions include validating these results in prospective cohort studies. © 2014 American Heart Association, Inc.
Kunkle, Brian W.; Yoo, Changwon; Roy, Deodutta
2013-01-01
In this study we have identified key genes that are critical in the development of astrocytic tumors. Meta-analysis of microarray studies comparing normal tissue to astrocytoma revealed a set of 646 differentially expressed genes in the majority of astrocytomas. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I–IV), and 'key genes' within each grade were identified. Genes found to be most influential in the development of the highest grade of astrocytoma, glioblastoma multiforme, were: COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated, except MPP2 (down-regulated). These 10 genes were able to predict tumor status with 96–100% confidence when using logistic regression, cross validation, and support vector machine analysis. The Markov Blanket genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes - EGFR, COL4A1, and CDK4, in particular - seemed to be potential 'hubs of activity'. Modified expression of these 10 Markov Blanket genes increases the lifetime risk of developing glioblastoma compared to the normal population. The glioblastoma risk estimates increased dramatically with the joint effects of 4 or more Markov Blanket genes: joint interaction effects of 4, 5, 6, 7, 8, 9 or 10 Markov Blanket genes produced increases of 9, 13, 20.9, 26.7, 52.8, 53.2, 78.1 or 85.9%, respectively, in the lifetime risk of developing glioblastoma compared to the normal population. In summary, it appears that modified expression of several 'key genes' may be required for the development of glioblastoma. Further studies are needed to validate these 'key genes' as useful tools for early detection and novel therapeutic options for these tumors. PMID:23737970
Reliability modelling and analysis of a multi-state element based on a dynamic Bayesian network
Xu, Tingxue; Gu, Junyuan; Dong, Qi; Fu, Linyu
2018-01-01
This paper presents a quantitative reliability modelling and analysis method for multi-state elements based on a combination of the Markov process and a dynamic Bayesian network (DBN), taking perfect repair, imperfect repair and condition-based maintenance (CBM) into consideration. The Markov models of elements without repair and under CBM are established, and an absorbing set is introduced to determine the reliability of the repairable element. According to the state-transition relations between the states determined by the Markov process, a DBN model is built. In addition, its parameters for series and parallel systems, namely, conditional probability tables, can be calculated by referring to the conditional degradation probabilities. Finally, the power of a control unit in a failure model is used as an example. A dynamic fault tree (DFT) is translated into a Bayesian network model, and subsequently extended to a DBN. The results give the state probabilities of an element and of the system without repair, with perfect and imperfect repair, and under CBM with an absorbing set, obtained from the differential equations and verified. Through forward inference, the reliability of the control unit is determined under the different modes. Finally, weak nodes in the control unit are identified. PMID:29765629
Multilayer Markov Random Field models for change detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Benedek, Csaba; Shadaydeh, Maha; Kato, Zoltan; Szirányi, Tamás; Zerubia, Josiane
2015-09-01
In this paper, we give a comparative study of three multilayer Markov Random Field (MRF) based solutions proposed for change detection in optical remote sensing images, called Multicue MRF, Conditional Mixed Markov model, and Fusion MRF. Our purposes are twofold. On the one hand, we highlight the significance of the focused model family and we set them against various state-of-the-art approaches through a thematic analysis and quantitative tests. We discuss the advantages and drawbacks of class comparison vs. direct approaches, the usage of training data, various targeted application fields and different ways of Ground Truth generation, meanwhile informing the reader of the roles in which the multilayer MRFs can be efficiently applied. On the other hand, we also emphasize the differences between the three focused models at various levels, considering the model structures, feature extraction, layer interpretation, change concept definition, parameter tuning and performance. We provide qualitative and quantitative comparison results using principally a publicly available change detection database which contains aerial image pairs and Ground Truth change masks. We conclude that the discussed models are competitive against alternative state-of-the-art solutions, if one uses them as pre-processing filters in multitemporal optical image analysis. In addition, they cover together a large range of applications, considering the different usage options of the three approaches.
Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)
NASA Astrophysics Data System (ADS)
Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.
2018-05-01
A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, i.e., the law of lumped Markov chains, depends linearly on the minimal degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.
Risk-sensitive reinforcement learning.
Shen, Yun; Tobia, Michael J; Sommer, Tobias; Obermayer, Klaus
2014-07-01
We derive a family of risk-sensitive reinforcement learning methods for agents who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition, we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
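The core update the abstract describes, applying a utility function to the TD error, fits in a few lines of Q-learning. The prospect-theory-style utility (concave for gains, steeper for losses) and the toy environment below are illustrative assumptions, not the paper's investment task.

```python
# Minimal sketch of risk-sensitive Q-learning: the TD error is passed
# through a utility function before the update. Utility parameters follow
# the usual prospect-theory shape; the environment is a toy assumption.
import numpy as np

rng = np.random.default_rng(4)

def utility(td):
    """Concave for gains, steeper for losses (loss aversion)."""
    return td ** 0.88 if td >= 0 else -2.25 * (-td) ** 0.88

n_states, n_actions, alpha, gamma, eps = 5, 2, 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    """Toy chain: action 1 is a risky bet, action 0 is a safe payoff."""
    s2 = min(s + 1, n_states - 1) if rng.random() < 0.7 else max(s - 1, 0)
    r = rng.choice([2.0, -1.5]) if a == 1 else 0.2
    return s2, r

s = 0
for _ in range(50000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    td = r + gamma * Q[s2].max() - Q[s, a]
    Q[s, a] += alpha * utility(td)        # risk-sensitive update
    s = s2
print("greedy action per state:", Q.argmax(axis=1))
# Although the risky bet has the higher expected reward, the loss-averse
# utility typically pushes the learned policy toward the safe action.
```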
What is the Effect of Interannual Hydroclimatic Variability on Water Supply Reservoir Operations?
NASA Astrophysics Data System (ADS)
Galelli, S.; Turner, S. W. D.
2015-12-01
Rather than deriving from a single distribution and uniform persistence structure, hydroclimatic data exhibit significant trends and shifts in their mean, variance, and lagged correlation through time. Consequently, observed and reconstructed streamflow records are often characterized by features of interannual variability, including long-term persistence and prolonged droughts. This study examines the effect of these features on the operating performance of water supply reservoirs. We develop a Stochastic Dynamic Programming (SDP) model that can incorporate a regime-shifting climate variable. We then compare the performance of operating policies designed with and without the climate variable to quantify the contribution of interannual variability to standard policy sub-optimality. The approach uses a discrete-time Markov chain to partition the reservoir inflow time series into a small number of 'hidden' climate states. Each state defines a distinct set of inflow transition probability matrices, which are used by the SDP model to condition the release decisions on the reservoir storage, current-period inflow and hidden climate state. The experimental analysis is carried out on 99 hypothetical water supply reservoirs fed from pristine catchments in Australia, all impacted by the Millennium drought. Results show that interannual hydroclimatic variability is a major cause of sub-optimal hedging decisions. The practical import is that conventional optimization methods may misguide operators, particularly in regions susceptible to multi-year droughts.
Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane
2011-12-01
The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.
Chetty, Mersha; Kenworthy, James J; Langham, Sue; Walker, Andrew; Dunlop, William C N
2017-02-24
Opioid dependence is a chronic condition with substantial health, economic and social costs. The study objective was to conduct a systematic review of published health-economic models of opioid agonist therapy for non-prescription opioid dependence, to review the different modelling approaches identified, and to inform future modelling studies. Literature searches were conducted in March 2015 in eight electronic databases, supplemented by hand-searching reference lists and searches on six National Health Technology Assessment Agency websites. Studies were included if they: investigated populations that were dependent on non-prescription opioids and were receiving opioid agonist or maintenance therapy; compared any pharmacological maintenance intervention with any other maintenance regimen (including placebo or no treatment); and were health-economic models of any type. A total of 18 unique models were included. These used a range of modelling approaches, including Markov models (n = 4), decision tree with Monte Carlo simulations (n = 3), decision analysis (n = 3), dynamic transmission models (n = 3), decision tree (n = 1), cohort simulation (n = 1), Bayesian (n = 1), and Monte Carlo simulations (n = 2). Time horizons ranged from 6 months to lifetime. The most common evaluation was cost-utility analysis reporting cost per quality-adjusted life-year (n = 11), followed by cost-effectiveness analysis (n = 4), budget-impact analysis/cost comparison (n = 2) and cost-benefit analysis (n = 1). Most studies took the healthcare provider's perspective. Only a few models included some wider societal costs, such as productivity loss or costs of drug-related crime, disorder and antisocial behaviour. Costs to individuals and impacts on family and social networks were not included in any model. A relatively small number of studies of varying quality were found. Strengths and weaknesses relating to model structure, inputs and approach were identified across all the studies. There was no indication of a single standard emerging as a preferred approach. Most studies omitted societal costs, an important issue since the implications of drug abuse extend widely beyond healthcare services. Nevertheless, elements from previous models could together form a framework for future economic evaluations in opioid agonist therapy including all relevant costs and outcomes. This could more adequately support decision-making and policy development for treatment of non-prescription opioid dependence.
2012-01-01
Background: Several methodological issues with non-randomized comparative clinical studies have been raised, one of which is whether the methods used can adequately identify uncertainties that evolve dynamically with time in real-world systems. The objective of this study is to compare the effectiveness of different combinations of Traditional Chinese Medicine (TCM) treatments and combinations of TCM and Western medicine interventions in patients with acute ischemic stroke (AIS) by using Markov decision process (MDP) theory. MDP theory appears to be a promising new method for use in comparative effectiveness research. Methods: The electronic health records (EHR) of patients with AIS hospitalized at the 2nd Affiliated Hospital of Guangzhou University of Chinese Medicine between May 2005 and July 2008 were collected. Each record was partitioned into two "state-action-reward" stages divided by three time points: the first, third, and last day of the hospital stay. We used the well-developed optimality technique in MDP theory with the finite horizon criterion to make a dynamic comparison of different treatment combinations. Results: A total of 1504 records with a primary diagnosis of AIS were identified. Only states with information from at least 10 patients were included, which gave 960 records to be enrolled in the MDP model. Optimal combinations were obtained for 30 types of patient condition. Conclusion: MDP theory makes it possible to dynamically compare the effectiveness of different combinations of treatments. However, the optimal interventions obtained by MDP theory here require further validation in clinical practice. Further exploratory studies with MDP theory in other areas in which complex interventions are common would be worthwhile. PMID:22400712
Making do with less: Must sparse data preclude informed harvest strategies for European waterbirds?
Johnson, Fred A.; Alhainen, Mikko; Fox, Anthony D.; Madsen, Jesper; Guillemain, Matthieu
2018-01-01
The demography of many European waterbirds is not well understood because most countries have conducted little monitoring and assessment, and coordination among countries on waterbird management has little precedent. Yet intergovernmental treaties now mandate the use of sustainable, adaptive harvest strategies, whose development is challenged by a paucity of demographic information. In this study, we explore how a combination of allometric relationships, fragmentary monitoring and research information, and expert judgment can be used to estimate the parameters of a theta-logistic population model, which in turn can be used in a Markov decision process to derive optimal harvesting strategies. We show how to account for considerable parametric uncertainty, as well as for different management objectives. We illustrate our methodology with a poorly understood population of taiga bean geese (Anser fabalis fabalis), which is a popular game bird in Fennoscandia. Our results for taiga bean geese suggest that they may have demographic rates similar to other, well-studied species of geese, and our model-based predictions of population size are consistent with the limited monitoring information available. Importantly, we found that by using a Markov decision process, a simple scalar population model may be sufficient to guide harvest management of this species, even if its demography is age-structured. Finally, we demonstrated how two different management objectives can lead to very different optimal harvesting strategies, and how conflicting objectives may be traded off with each other. This approach will have broad application for European waterbirds by providing preliminary estimates of key demographic parameters, by providing insights into the monitoring and research activities needed to corroborate those estimates, and by producing harvest management strategies that are optimal with respect to the managers’ objectives, options, and available demographic information.
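A minimal sketch of the population model named in the abstract: a theta-logistic update with harvest, here used only to score constant harvest rates by long-run yield. Parameter values are illustrative, not the taiga bean goose estimates; a Markov decision process generalizes this by letting the harvest decision depend on the observed population state and on parameter uncertainty.

```python
# Minimal sketch: theta-logistic population dynamics with harvest, and a
# grid search over constant harvest rates. All parameters are assumed.
import numpy as np

r, K, theta = 0.15, 80000.0, 1.0   # growth rate, carrying capacity, shape

def mean_yield(h, years=200, N0=40000.0):
    """Mean annual harvest under a constant harvest rate h."""
    N, total = N0, 0.0
    for _ in range(years):
        growth = r * N * (1.0 - (N / K) ** theta)   # theta-logistic term
        harvest = h * N
        total += harvest
        N = max(N + growth - harvest, 0.0)
    return total / years

rates = np.linspace(0.0, 0.2, 41)
yields = [mean_yield(h) for h in rates]
best = rates[int(np.argmax(yields))]
print(f"best constant harvest rate ~ {best:.3f}")
```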
Affective State Level Recognition in Naturalistic Facial and Vocal Expressions.
Meng, Hongying; Bianchi-Berthouze, Nadia
2014-03-01
Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expressions represent the same affective content. In this paper, we exploit such a relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (AVEC 2011 audio and video dataset, PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are considered to be the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained by using the labels provided with the training set. The recognition problem is converted into a best path-finding problem to obtain the best hidden states sequence in HMMs. This is a key difference from previous use of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first level performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal relationships between the output decision labels. The experimental results for each of the unimodal datasets show overall performance to be significantly above that of a standard classification system that does not take into account temporal relationships. In particular, the results on the AVEC 2011 audio dataset outperform all other systems presented at the international competition.
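The best-path decoding the authors describe is the Viterbi algorithm. A self-contained sketch over a toy HMM with two affect levels and three observation symbols; all probabilities are hypothetical, chosen only to echo the paper's observation that level transitions are slow.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Most probable hidden-state path (e.g., affect-level sequence) for a
    discrete-observation HMM; all inputs are log-probabilities."""
    T, S = len(obs), log_pi.shape[0]
    delta = np.zeros((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A     # scores[i, j]: from state i to j
        psi[t] = scores.argmax(axis=0)             # best predecessor of each state
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack the optimal path
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy example: slow transitions between 2 affect levels, 3 observation symbols.
A = np.log([[0.95, 0.05], [0.10, 0.90]])
B = np.log([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
pi = np.log([0.5, 0.5])
print(viterbi(A, B, pi, [0, 0, 2, 2, 1]))
```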
Dependability and performability analysis
NASA Technical Reports Server (NTRS)
Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.
1993-01-01
Several practical issues regarding the specification and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMC's) are compared with (continuous-time) Markov reward models (MRM's), and generalized stochastic Petri nets (GSPN's) are compared with stochastic reward nets (SRN's). It is shown that reward-based models can lead to more concise model specifications and to the solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues are identified: largeness, stiffness, and non-exponentiality. A variety of approaches to deal with them are discussed, including some of the latest research efforts.
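A minimal illustration of the reward-based view: attach a reward rate to each state of a CTMC, solve for the steady-state distribution, and read off the expected reward rate. The generator and reward values below are illustrative, not taken from the paper.

```python
import numpy as np

# Generator matrix Q of a 3-state CTMC (rows sum to zero); rates are illustrative.
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.4, -0.5,  0.1],
              [ 1.0,  0.0, -1.0]])
reward = np.array([1.0, 0.5, 0.0])   # e.g., performance level delivered in each state

# Solve pi Q = 0 with sum(pi) = 1 by appending the normalization equation.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady state:", pi, "expected reward rate:", pi @ reward)
```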
A descriptive model of resting-state networks using Markov chains.
Xie, H; Pal, R; Mitra, S
2016-08-01
Resting-state functional connectivity (RSFC) studies considering pairwise linear correlations have attracted great interest, while the underlying functional network structure still remains poorly understood. To further our understanding of RSFC, this paper presents an analysis of resting-state networks (RSNs) based on steady-state distributions, providing a novel angle for investigating the RSFC of multiple functional nodes. The paper evaluates the consistency of two networks based on the Hellinger distance between the steady-state distributions of the inferred Markov chain models. The results show that the steady-state distributions generated for the default mode network have higher consistency across subjects than those for random nodes drawn from various RSNs.
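The consistency measure combines two standard pieces: the stationary distribution of an inferred transition matrix, and the Hellinger distance between two such distributions. A sketch with hypothetical two-node chains standing in for inferred network models:

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of a row-stochastic transition matrix,
    taken as the eigenvector of P.T with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Two illustrative chains over the same functional nodes (matrices hypothetical).
P1 = np.array([[0.8, 0.2], [0.3, 0.7]])
P2 = np.array([[0.7, 0.3], [0.4, 0.6]])
print(hellinger(steady_state(P1), steady_state(P2)))
```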
Under-reported data analysis with INAR-hidden Markov chains.
Fernández-Fontelo, Amanda; Cabaña, Alejandra; Puig, Pedro; Moriña, David
2016-11-20
In this work, we deal with correlated under-reported data through INAR(1)-hidden Markov chain models. These models are very flexible and can be identified through their autocorrelation function, which has a very simple form. A naïve method of parameter estimation is proposed, jointly with the maximum likelihood method based on a revised version of the forward algorithm. The most-probable unobserved time series is reconstructed by means of the Viterbi algorithm. Several examples of application in the field of public health are discussed, illustrating the utility of the models. Copyright © 2016 John Wiley & Sons, Ltd.
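A simulation sketch of the model class: an INAR(1) count series built from binomial thinning plus Poisson innovations, observed through a two-state hidden Markov reporting regime. All parameters are illustrative, not the paper's estimates, and estimation (forward algorithm, Viterbi) is not shown.

```python
import numpy as np

def simulate_inar1_hmm(n, alpha, lam, q, p_stay, rng):
    """INAR(1): X_t = alpha ∘ X_{t-1} + Poisson(lam), where '∘' is binomial
    thinning. A two-state hidden Markov chain switches between full reporting
    and under-reporting (Y_t ~ Binomial(X_t, q)). Parameters are illustrative."""
    X = np.zeros(n, dtype=int)
    Y = np.zeros(n, dtype=int)
    state = 0                                  # 0 = fully reported, 1 = under-reported
    for t in range(n):
        carry = rng.binomial(X[t - 1], alpha) if t > 0 else 0
        X[t] = carry + rng.poisson(lam)
        Y[t] = rng.binomial(X[t], q) if state == 1 else X[t]
        if rng.random() > p_stay:              # persistent hidden reporting regime
            state = 1 - state
    return X, Y

rng = np.random.default_rng(0)
X, Y = simulate_inar1_hmm(200, alpha=0.5, lam=2.0, q=0.6, p_stay=0.9, rng=rng)
print("latent mean:", X.mean(), "observed mean:", Y.mean())
```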
Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.
Reich, W; Scheuermann, G
2012-12-01
Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.
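The core construction, encoding the flow into a map and then a transition matrix, can be sketched as follows. The rotational toy field, the coarse grid, and the deterministic one-cell-per-cell mapping are all simplifications of the paper's method, which I have not reproduced in detail.

```python
import numpy as np

# Discretize a steady planar vector field on a small grid and encode its
# time-h flow map as a transition matrix; all choices here are illustrative.
n = 20
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs)
U, V = -Y, X                       # toy rotational field
h = 0.1

P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        # one Euler step of the flow from the cell center (j indexes x, i indexes y)
        x_new = np.clip(X[i, j] + h * U[i, j], -1, 1)
        y_new = np.clip(Y[i, j] + h * V[i, j], -1, 1)
        jj = np.abs(xs - x_new).argmin()
        ii = np.abs(xs - y_new).argmin()
        P[i * n + j, ii * n + jj] = 1.0   # deterministic map; blur to model diffusion

# Push a particle distribution forward instead of tracing single streamlines.
rho = np.zeros(n * n)
rho[0] = 1.0
for _ in range(50):
    rho = rho @ P
print("mass is conserved:", np.isclose(rho.sum(), 1.0))
```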
Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa
2017-01-01
The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolical sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can analyze raw sequence data automatically and directly. This analysis is carried out with little or even no prior information/domain knowledge.
NASA Astrophysics Data System (ADS)
Zheng, Y.; Han, F.; Wu, B.
2013-12-01
Process-based, spatially distributed and dynamic models provide desirable resolutions to watershed-scale water management. However, their reliability in solving real management problems has been seriously questioned, since the model simulation usually involves significant uncertainty with complicated origins. Uncertainty analysis (UA) for complex hydrological models has been a hot topic in the past decade, and a variety of UA approaches have been developed, but mostly in a theoretical setting. Whether and how a UA could benefit real management decisions remain critical questions. We have conducted a series of studies to investigate the applicability of classic approaches, such as GLUE and Markov Chain Monte Carlo (MCMC) methods, in real management settings, unravel the difficulties encountered by such methods, and tailor the methods to better serve the management. Frameworks and new algorithms, such as Probabilistic Collocation Method (PCM)-based approaches, were also proposed for specific management issues. This presentation summarizes our past and ongoing studies on the role of UA in real water management. Challenges and potential strategies to bridge the gap between UA for complex models and decision-making for management will be discussed. Future directions for the research in this field will also be suggested. Two common water management settings were examined. One is the Total Maximum Daily Loads (TMDLs) management for surface water quality protection. The other is integrated water resources management for watershed sustainability. For the first setting, nutrient and pesticide TMDLs in the Newport Bay Watershed (Orange County, California, USA) were discussed. It is a highly urbanized region with a semi-arid Mediterranean climate, typical of the western U.S. For the second setting, the water resources management in the Zhangye Basin (the midstream part of the Heihe Basin, China), where the famous 'Silk Road' came through, was investigated. The Zhangye Basin has a Gobi-oasis system typical of western China, with extensive agriculture in its oasis.
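For orientation, a minimal random-walk Metropolis sampler of the kind underlying the MCMC approaches named above. The Gaussian log-posterior is purely illustrative, standing in for an expensive hydrological likelihood times a prior.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis sampler over a model parameter vector."""
    x = np.asarray(x0, dtype=float)
    samples, lp = [], log_post(x)
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with MH probability
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(42)
log_post = lambda th: -0.5 * np.sum((th - np.array([1.0, -2.0])) ** 2)
chain = metropolis_hastings(log_post, [0.0, 0.0], 5000, 0.5, rng)
print("posterior mean estimate:", chain[1000:].mean(axis=0))  # discard burn-in
```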
Should patients with Björk-Shiley valves undergo prophylactic replacement?
Birkmeyer, J D; Marrin, C A; O'Connor, G T
1992-08-29
About 85,000 patients have undergone replacement of diseased heart valves with prosthetic Björk-Shiley convexo-concave (CC) valves. These valves are prone to fracture of the outlet strut, which leads to acute valve failure that is usually fatal. Should patients with these valves undergo prophylactic replacement to avoid fracture? The incidence of strut fracture varies between 0% and 1.5% per year, depending on valve opening angle (60 degrees or 70 degrees), diameter (less than 29 mm or greater than or equal to 29 mm), and location (aortic or mitral). Other factors include the patient's life expectancy and the expected morbidity and mortality associated with reoperation. We have used decision analysis to identify the patients most likely to benefit from prophylactic reoperation. The incidence of outlet strut fracture was estimated from the data of three large studies on CC valves, and stratified by opening angle, diameter, and location. A Markov decision analysis model was used to estimate life expectancy for patients undergoing prophylactic valve replacement and for those not undergoing reoperation. Prophylactic valve replacement does not benefit patients with CC valves that have low strut fracture risks (60 degrees aortic valves and less than 29 mm, 60 degrees mitral valves). For most patients with CC valves that have high strut fracture risks (greater than or equal to 29 mm, 70 degrees CC), prophylactic valve replacement increases life expectancy. However, elderly patients with such valves benefit from prophylactic reoperation only if the risk of operative mortality is low. Patient age and operative risk are most important in recommendations for patients with CC valves that have intermediate strut fracture risks (less than 29 mm, 70 degrees valves and greater than or equal to 29 mm, 60 degrees mitral valves). For all patients and their doctors facing the difficult decision on whether to replace CC valves, individual estimates of operative mortality risk that take account of both patient-specific and institution-specific factors are essential.
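The life-expectancy comparison at the heart of such a Markov decision analysis reduces to accumulating person-years over a cohort's evolving state distribution. A deliberately simplified three-state sketch (Well with valve in place, Post-reoperation, Dead); all probabilities, including the 2% operative mortality, are made up rather than the study's stratified fracture risks.

```python
import numpy as np

# Annual three-state cohort model: 0 = Well (CC valve in place),
# 1 = Post-reoperation, 2 = Dead. Probabilities are illustrative only.
P = np.array([[0.93, 0.00, 0.07],   # Well: mortality includes strut-fracture risk
              [0.00, 0.95, 0.05],   # Post-reoperation: baseline mortality only
              [0.00, 0.00, 1.00]])  # Dead is absorbing

def life_expectancy(P, start, years=40):
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    total = 0.0
    for _ in range(years):
        total += dist[:2].sum()      # person-years contributed by the alive states
        dist = dist @ P
    return total

p_operative_mortality = 0.02         # hypothetical up-front surgical risk
le_wait = life_expectancy(P, start=0)
le_reop = (1 - p_operative_mortality) * life_expectancy(P, start=1)
print(f"no reoperation: {le_wait:.1f} y, prophylactic replacement: {le_reop:.1f} y")
```

Varying the fracture-related mortality in the Well state and the operative mortality reproduces the qualitative trade-off the authors describe.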
Micieli, Andrew; Wijeysundera, Harindra C; Qiu, Feng; Atzema, Clare L; Singh, Sheldon M
2016-04-01
Percutaneous left atrial appendage occlusion (LAAO) is a nonpharmacologic approach for stroke prevention in nonvalvular atrial fibrillation (NVAF). No direct comparisons to novel oral anticoagulants (OACs) exist, limiting decision making on the optimal strategy for stroke prevention in NVAF patients. Addressing this gap in knowledge is timely given the recent debate by the US Food and Drug Administration regarding the effectiveness of LAAO. To assess the cost-effectiveness of LAAO and novel OACs relative to warfarin in patients with new-onset NVAF without contraindications to OAC. A cost-utility analysis using a patient-level Markov micro-simulation decision analytic model was undertaken to determine the lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratio (ICER) of LAAO and all novel OACs relative to warfarin. Effectiveness and utility data were obtained from the published literature and costs from the Ontario Drug Benefits Formulary and Case Costing Initiative. Warfarin had the lowest discounted QALY (5.13 QALYs), followed by dabigatran (5.18 QALYs), rivaroxaban and LAAO (5.21 QALYs), and apixaban (5.25 QALYs). The average discounted lifetime costs were $15 776 for warfarin, $18 280 for rivaroxaban, $19 156 for apixaban, $20 794 for dabigatran, and $21 789 for LAAO. Apixaban dominated dabigatran and LAAO and demonstrated extended dominance over rivaroxaban. The ICER for apixaban relative to warfarin was $28 167/QALY. Apixaban was preferred in 40.2% of simulations at a willingness-to-pay threshold of $50 000/QALY. Assumptions regarding clinical and methodological differences between published studies of each therapy were minimized. Apixaban is the most cost-effective therapy for stroke prevention in patients with new-onset NVAF without contraindications to OAC. Uncertainty around this conclusion exists, highlighting the need for further research. © The Author(s) 2015.
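The dominance and ICER logic is mechanical and can be reproduced from the abstract's own discounted costs and QALYs. A sketch applying simple dominance only; extended dominance, which the abstract invokes to eliminate rivaroxaban, would take one more pass over the frontier.

```python
# Cost-utility comparison via incremental cost-effectiveness ratios (ICERs),
# using the discounted values reported in the abstract.
strategies = {            # name: (lifetime cost $, QALYs)
    "warfarin":    (15776, 5.13),
    "rivaroxaban": (18280, 5.21),
    "apixaban":    (19156, 5.25),
    "dabigatran":  (20794, 5.18),
    "LAAO":        (21789, 5.21),
}

# Sort by cost; a strategy is (simply) dominated if a cheaper one has >= QALYs.
ordered = sorted(strategies.items(), key=lambda kv: kv[1][0])
frontier = []
for name, (c, q) in ordered:
    if frontier and q <= frontier[-1][2]:
        print(name, "is dominated")
        continue
    frontier.append((name, c, q))

for (n0, c0, q0), (n1, c1, q1) in zip(frontier, frontier[1:]):
    print(f"ICER {n1} vs {n0}: ${(c1 - c0) / (q1 - q0):,.0f}/QALY")
```

Removing rivaroxaban by extended dominance and comparing apixaban directly with warfarin reproduces the reported figure: (19156 - 15776)/(5.25 - 5.13) ≈ $28 167/QALY.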
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions of CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe and an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
NASA Astrophysics Data System (ADS)
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm, and then to analyze the result. Using Python 3.4, the R-MCL algorithm produced 8 clusters with more than one centroid in several clusters. The number of centroids shows the density level of interaction. Protein interactions connected in a tissue form a protein complex that serves as a unit of a specific biological process. The analysis shows that R-MCL clustering groups the dengue virus family based on the functional similarity of their constituent proteins, regardless of serotype.
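For orientation, a compact sketch of the regularized MCL iteration on a toy graph: alternate a regularization step (multiplying the flow matrix by the canonical flow matrix M_G) with inflation and renormalization. The adjacency matrix is hypothetical, not the VIPR protein data, and details of the paper's R-MCL variant may differ from this generic scheme.

```python
import numpy as np

def regularized_mcl(A, inflation=1.4, n_iter=50):
    """Regularized Markov Clustering sketch: a column-stochastic flow matrix
    is repeatedly regularized (M <- M @ M_G) and inflated, following the
    general R-MCL scheme; nodes attracted to the same row form one cluster."""
    A = A + np.eye(A.shape[0])          # self-loops stabilize the flow
    M_G = A / A.sum(axis=0)             # canonical column-stochastic flow matrix
    M = M_G.copy()
    for _ in range(n_iter):
        M = M @ M_G                     # regularize (replaces plain MCL's M @ M)
        M = M ** inflation              # inflation strengthens strong flows
        M = M / M.sum(axis=0)           # renormalize columns
    return M.argmax(axis=0)             # attractor index = cluster label per node

# Toy graph with two obvious groups (adjacency hypothetical, not VIPR data).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(regularized_mcl(A))
```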
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
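Maximum-likelihood fitting of such a model rests on the forward algorithm for the likelihood of the interval series. A scaled-forward sketch with two hypothetical neuron states and exponential interval densities; in the paper the number of states and the per-state densities are estimated, not fixed as here.

```python
import numpy as np

def forward_log_likelihood(pi, A, obs_logpdf):
    """Log-likelihood of an interspike-interval series under an HMM with
    state-dependent interval densities, via the scaled forward recursion."""
    T, S = obs_logpdf.shape
    alpha = pi * np.exp(obs_logpdf[0])
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, T):
        alpha = (alpha @ A) * np.exp(obs_logpdf[t])
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()            # rescaling avoids numerical underflow
    return ll

# Two hypothetical neuron states with exponential interval densities.
rng = np.random.default_rng(3)
isi = rng.exponential(0.1, size=100)    # fake interspike intervals (seconds)
rates = np.array([5.0, 20.0])           # per-state firing rates (1/s)
logpdf = np.log(rates)[None, :] - rates[None, :] * isi[:, None]
A = np.array([[0.9, 0.1], [0.2, 0.8]])
print("log-likelihood:", forward_log_likelihood(np.array([0.5, 0.5]), A, logpdf))
```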
Diffusion maps, clustering and fuzzy Markov modeling in peptide folding transitions
NASA Astrophysics Data System (ADS)
Nedialkova, Lilia V.; Amat, Miguel A.; Kevrekidis, Ioannis G.; Hummer, Gerhard
2014-09-01
Using the helix-coil transitions of alanine pentapeptide as an illustrative example, we demonstrate the use of diffusion maps in the analysis of molecular dynamics simulation trajectories. Diffusion maps and other nonlinear data-mining techniques provide powerful tools to visualize the distribution of structures in conformation space. The resulting low-dimensional representations help in partitioning conformation space, and in constructing Markov state models that capture the conformational dynamics. In an initial step, we use diffusion maps to reduce the dimensionality of the conformational dynamics of Ala5. The resulting pretreated data are then used in a clustering step. The identified clusters show excellent overlap with clusters obtained previously by using the backbone dihedral angles as input, with small—but nontrivial—differences reflecting torsional degrees of freedom ignored in the earlier approach. We then construct a Markov state model describing the conformational dynamics in terms of a discrete-time random walk between the clusters. We show that by combining fuzzy C-means clustering with a transition-based assignment of states, we can construct robust Markov state models. This state-assignment procedure suppresses short-time memory effects that result from the non-Markovianity of the dynamics projected onto the space of clusters. In a comparison with previous work, we demonstrate how manifold learning techniques may complement and enhance informed intuition commonly used to construct reduced descriptions of the dynamics in molecular conformation space.
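The final step, estimating a Markov state model from a clustered trajectory, amounts to a row-normalized transition count matrix at a chosen lag time. A sketch with a fake label sequence standing in for clustered Ala5 conformations; the paper's fuzzy, transition-based state assignment is not reproduced here.

```python
import numpy as np

def msm_transition_matrix(labels, n_states, lag=1):
    """Row-normalized count matrix of a discrete-time Markov state model
    estimated from a cluster-label trajectory at the given lag time."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        C[a, b] += 1.0
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# Fake cluster-label trajectory in place of clustered MD conformations.
rng = np.random.default_rng(7)
traj = rng.choice(3, size=1000, p=[0.5, 0.3, 0.2])
print(msm_transition_matrix(traj, 3, lag=10))
```

Choosing the lag long enough that the projected dynamics become approximately Markovian plays the same role here as the transition-based assignment the authors use to suppress short-time memory effects.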
Long-range memory and non-Markov statistical effects in human sensorimotor coordination
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat M.; Emelyanova, Natalya; Hänggi, Peter; Gafarov, Fail; Prokhorov, Alexander
2002-12-01
In this paper, the non-Markov statistical processes and long-range memory effects in human sensorimotor coordination are investigated. The theoretical basis of this study is the statistical theory of non-stationary discrete non-Markov processes in complex systems (Phys. Rev. E 62, 6178 (2000)). Human sensorimotor coordination was studied experimentally by means of a standard dynamical tapping test on a group of 32 young people, with tap numbers up to 400. The test was carried out separately for the right and the left hand according to the degree of domination of the corresponding brain hemisphere. The numerical analysis of the experimental results was made with the help of power spectra of the initial time correlation function, the memory functions of low orders, and the first three points of the statistical spectrum of the non-Markovity parameter. Our observations demonstrate that, on the basis of the standard dynamic tapping test, it is possible to divide all examinees into five different dynamic types. We introduce a conflict coefficient to estimate quantitatively the order-disorder effects underlying living systems; it reflects the existence of an imbalance between nervous and motor coordination in humans. The suggested classification of neurophysiological activity represents a dynamic generalization of the well-known neuropsychological types and provides a new approach in modern neuropsychology.
Modeling of dialogue regimes of distance robot control
NASA Astrophysics Data System (ADS)
Larkin, E. V.; Privalov, A. N.
2017-02-01
The process of distance control of mobile robots is investigated. A Petri-Markov net for modeling the dialogue regime is worked out. It is shown that the sequence of operations of the following subjects, a human operator, a dialogue computer, and an onboard computer, may be simulated using the theory of semi-Markov processes. From the general-form semi-Markov process, a Markov process was obtained that includes only the states of transaction generation. It is shown that a real transaction flow is the result of «concurrency» in the states of the Markov process. An iteration procedure for evaluating transaction flow parameters, which takes into account the effect of «concurrency», is proposed.
Le, P; Martinez, K A; Pappas, M A; Rothberg, M B
2017-06-01
Essentials: Low-risk patients do not require venous thromboembolism (VTE) prophylaxis, but 'low risk' has not been quantified. We used a Markov model to estimate the risk threshold for VTE prophylaxis in medical inpatients. Prophylaxis was cost-effective for an average medical patient with a VTE risk of ≥ 1.0%. VTE prophylaxis can be personalized based on patient risk and age/life expectancy. Background Venous thromboembolism (VTE) is a common preventable condition in medical inpatients. Thromboprophylaxis is recommended for inpatients who are not at low risk of VTE, but no specific risk threshold for prophylaxis has been defined. Objective To determine a threshold for prophylaxis based on risk of VTE. Patients/Methods We constructed a decision model with a decision tree following patients for 3 months after hospitalization, and a lifetime Markov model with 3-month cycles. The model tracked symptomatic deep vein thromboses and pulmonary emboli, bleeding events and heparin-induced thrombocytopenia. Long-term complications included recurrent VTE, post-thrombotic syndrome and pulmonary hypertension. For the base case, we considered medical inpatients aged 66 years, having a life expectancy of 13.5 years, a VTE risk of 1.4% and a bleeding risk of 2.7%. Patients received enoxaparin 40 mg per day for prophylaxis. Results Assuming a willingness-to-pay (WTP) threshold of $100 000/quality-adjusted life-year (QALY), prophylaxis was indicated for an average medical inpatient with a VTE risk of ≥ 1.0% up to 3 months after hospitalization. For the average patient, prophylaxis was not indicated when the bleeding risk was > 8.1%, the patient's age was > 73.4 years or the cost of enoxaparin exceeded $60/dose. If VTE risk was < 0.26% or bleeding risk was > 19%, the risks of prophylaxis outweighed the benefits. The prophylaxis threshold was relatively insensitive to low-molecular-weight heparin cost and bleeding risk, but very sensitive to patient age and life expectancy. Conclusions The decision to offer prophylaxis should be personalized based on patient VTE risk, age and life expectancy. At a WTP of $100 000/QALY, prophylaxis is not warranted for most patients with a 3-month VTE risk below 1.0%. © 2017 International Society on Thrombosis and Haemostasis.
Algorithms for Discovery of Multiple Markov Boundaries
Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F.
2013-01-01
Algorithms for Markov boundary discovery from data constitute an important recent development in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight on local causal structure. Over the last decade many sound algorithms have been proposed to identify a single Markov boundary of the response variable. Even though faithful distributions and, more broadly, distributions that satisfy the intersection property always have a single Markov boundary, other distributions/data sets may have multiple Markov boundaries of the response variable. The latter distributions/data sets are common in practical data-analytic applications, and there are several reasons why it is important to induce multiple Markov boundaries from such data. However, there are currently no sound and efficient algorithms that can accomplish this task. This paper describes a family of algorithms TIE* that can discover all Markov boundaries in a distribution. The broad applicability as well as efficiency of the new algorithmic family is demonstrated in an extensive benchmarking study that involved comparison with 26 state-of-the-art algorithms/variants in 15 data sets from a diversity of application domains. PMID:25285052
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesaro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that the current quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
Hatziandreu, E J; Brown, R E; Revicki, D A; Turner, R; Martindale, J; Levine, S; Siegel, J E
1994-03-01
The objective of this study was to model, for patients at risk of recurrent depression, the cost-utility of maintenance therapy with sertraline compared with treatment of acute episodes with dothiepin ('episodic treatment'). Using clinical decision analysis techniques, a Markov state-transition model was constructed to estimate the lifetime costs and quality-adjusted life-years (QALYs) of the 2 therapeutic strategies. The model follows 2 cohorts of 35-year-old women at high risk for recurrent depression over their lifetimes. Model construction and the relevant data (probabilities) for performing the analysis were based on existing clinical knowledge. Two physician panels were used to obtain estimates of recurrence probabilities not available in the literature, health utilities, and resource consumption. Costs were obtained from published sources. The baseline analysis showed that it costs 2172 British pounds sterling ($US3692, 1991 currency) to save an additional QALY with sertraline maintenance treatment. Sensitivity analysis showed that the incremental cost-utility ratio ranged from 557 British pounds sterling to 5260 British pounds sterling per QALY. Overall, the resulting ratios are well within the range of cost-utility ratios that support the adoption and appropriate utilisation of a technology. Based on the study assumptions, long-term maintenance treatment with sertraline appears to be a clinically and economically justified choice for patients at high risk of recurrent depression.
Dynamic Modeling Using MCSim and R (SOT 2016 Biological Modeling Webinar Series)
MCSim is a stand-alone software package for simulating and analyzing dynamic models, with a focus on Bayesian analysis using Markov Chain Monte Carlo. While it is an extremely powerful package, it is somewhat inflexible, and offers only a limited range of analysis options, with n...
Kostyalik, Diána; Vas, Szilvia; Kátai, Zita; Kitka, Tamás; Gyertyán, István; Bagdy, Gyorgy; Tóthfalusi, László
2014-11-19
Shortened rapid eye movement (REM) sleep latency and increased REM sleep amount are presumed biological markers of depression. These sleep alterations are also observable in several animal models of depression as well as during the rebound sleep after selective REM sleep deprivation (RD). Furthermore, REM sleep fragmentation is typically associated with stress procedures and anxiety. The selective serotonin reuptake inhibitor (SSRI) antidepressants reduce REM sleep time and increase REM latency after acute dosing under normal conditions and even during REM rebound following RD. However, their therapeutic outcome evolves only after weeks of treatment, and the effects of chronic treatment in REM-deprived animals have not yet been studied. Chronic escitalopram- (10 mg/kg/day, osmotic minipump for 24 days) or vehicle-treated rats were subjected to a 3-day-long RD on day 21 using the flower pot procedure or kept in their home cage. On day 24, fronto-parietal electroencephalogram, electromyogram and motility were recorded in the first 2 h of the passive phase. The observed sleep patterns were characterized by applying standard sleep metrics, by modelling the transitions between sleep phases using Markov chains, and by spectral analysis. Based on the Markov chain analysis, chronic escitalopram treatment attenuated REM sleep fragmentation [accelerated transition rates between REM and non-REM (NREM) stages and decreased REM sleep residence time between two transitions] during rebound sleep. Additionally, the antidepressant prevented the frequent awakenings during the first 30 min of the recovery period. The spectral analysis showed that the SSRI prevented the RD-caused elevation in theta (5-9 Hz) power during slow-wave sleep. Conversely, based on the aggregate sleep metrics, escitalopram had only moderate effects, and it did not significantly attenuate the REM rebound after RD. In conclusion, chronic SSRI treatment is capable of reducing several effects on sleep that might be consequences of the sub-chronic stress caused by the flower pot method. These data might support the antidepressant activity of SSRIs and suggest that investigating the rebound period following the flower pot protocol could be useful for detecting antidepressant drug response. Markov analysis is a suitable method for studying sleep patterns.
2013-01-01
Background Multiple sclerosis (MS) is a highly debilitating immune-mediated disorder and the second most common cause of neurological disability in young and middle-aged adults. Iran is among the high-MS-prevalence countries (50/100,000). The economic burden of MS is an important consideration in economic evaluation studies. Therefore, determining the cost-effectiveness of interferon beta (INF β) products and their copied biopharmaceuticals (CBPs) and biosimilars is a significant issue for assessing affordability in lower-middle-income countries (LMICs). Methods A literature-based Markov model was developed to assess the cost-effectiveness of three INF β products compared with placebo for managing a hypothetical cohort of patients diagnosed with relapsing-remitting MS (RRMS) in Iran from a societal perspective. Health states were based on the Kurtzke Expanded Disability Status Scale (EDSS). Disease-progression transition probabilities for symptom management and INF β therapies were obtained from natural history studies and from multicenter randomized controlled trials and their long-term follow-up for RRMS and secondary progressive MS (SPMS). A cross-sectional study was conducted to evaluate cost and utility. Transitions among health states occurred in 2-year cycles for fifteen cycles, and switching to other therapies was allowed. Calculations of costs and utilities were established by attaching decision trees to the overall model. The incremental cost-effectiveness ratio (ICER) in cost per quality-adjusted life-year (QALY) was considered for all available INF β products (brands, biosimilars and CBPs). Both costs and utilities were discounted. Sensitivity analyses were performed to assess the robustness of the model. Results The ICERs for Avonex, Rebif and Betaferon were $18,712, $11,832 and $15,768, respectively, when utilities obtained from the literature review were used. The ICERs for the available CBPs and biosimilars in Iran were $847, $6,964 and $11,913. Conclusions The Markov pharmacoeconomic model determined that, according to the threshold suggested for developing countries by the World Health Organization, all brand INF β products except Avonex are cost-effective in Iran. The best strategy among the INF β therapies is the CBP intramuscular INF β-1a (Cinnovex). The results showed that a policy of encouraging access to CBPs and biosimilars could make even high-technology products cost-effective in LMICs. PMID:23800250
A Markov decision process for managing habitat for Florida scrub-jays
Johnson, Fred A.; Breininger, David R.; Duncan, Brean W.; Nichols, James D.; Runge, Michael C.; Williams, B. Ken
2011-01-01
Florida scrub-jays Aphelocoma coerulescens are listed as threatened under the Endangered Species Act due to loss and degradation of scrub habitat. This study concerned the development of an optimal strategy for the restoration and management of scrub habitat at Merritt Island National Wildlife Refuge, which contains one of the few remaining large populations of scrub-jays in Florida. There are documented differences in the reproductive and survival rates of scrub-jays among discrete classes of scrub height (<120 cm or "short"; 120-170 cm or "optimal"; >170 cm or "tall"; and a combination of tall and optimal or "mixed"), and our objective was to calculate a state-dependent management strategy that would maximize the long-term growth rate of the resident scrub-jay population. We used aerial imagery with multistate Markov models to estimate annual transition probabilities among the four scrub-height classes under three possible management actions: scrub restoration (mechanical cutting followed by burning), a prescribed burn, or no intervention. A strategy prescribing the optimal management action for management units exhibiting different proportions of scrub-height classes was derived using dynamic programming. Scrub restoration was the optimal management action only in units dominated by mixed and tall scrub, and burning tended to be the optimal action for intermediate levels of short scrub. The optimal action was to do nothing when the amount of short scrub was greater than 30%, because short scrub mostly transitions to optimal-height scrub (i.e., the state with the highest demographic success of scrub-jays) in the absence of intervention. Monte Carlo simulation of the optimal policy suggested that some form of management would be required every year. We note, however, that estimates of scrub-height transition probabilities were subject to several sources of uncertainty, and so we explored the management implications of alternative sets of transition probabilities. Generally, our analysis demonstrated the difficulty of managing for a species that requires midsuccessional habitat, and suggests that innovative management tools may be needed to help ensure the persistence of scrub-jays at Merritt Island National Wildlife Refuge. The development of a tailored monitoring program as a component of adaptive management could help reduce uncertainty about controlled and uncontrolled variation in transition probabilities of scrub height and thus lead to improved decision making.
Regenerative Simulation of Harris Recurrent Markov Chains.
1982-07-01
Glynn, Peter W. Technical Report TR-62, Department of Operations Research, Stanford University, July 1982 (Office of Naval Research contract N00014-76-C-0578). [Only the report cover was recoverable; no abstract available.]
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
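Once degradation states and a transition matrix are in hand, expected remaining life is a first-passage-time computation via the fundamental matrix of the absorbing chain. A sketch with an illustrative four-state matrix (state 3 = failure), not the paper's FCM-derived states.

```python
import numpy as np

# Degradation states 0..3, state 3 = failure (absorbing); matrix is illustrative.
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.00, 0.85, 0.10, 0.05],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

# Expected steps to absorption: t = (I - Q)^-1 * 1,
# where Q is the transient-to-transient block of P.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
for s, steps in enumerate(t):
    print(f"expected remaining life from state {s}: {steps:.1f} cycles")
```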
Michaelidis, Constantinos I.; Zimmerman, Richard K.; Nowalk, Mary Patricia; Smith, Kenneth J.
2013-01-01
Objective Invasive pneumococcal disease is a major cause of preventable morbidity and mortality in the United States, particularly among the elderly (>65 years). There are large racial disparities in pneumococcal vaccination rates in this population. Here, we estimate the cost-effectiveness of a hypothetical national vaccination intervention program designed to eliminate racial disparities in pneumococcal vaccination in the elderly. Methods In an exploratory analysis, a Markov decision-analysis model was developed, taking a societal perspective and assuming a 1-year cycle length, 10-year vaccination program duration, and lifetime time horizon. In the base-case analysis, it was conservatively assumed that vaccination program promotion costs were $10 per targeted minority elder per year, regardless of prior vaccination status and resulted in the elderly African American and Hispanic pneumococcal vaccination rate matching the elderly Caucasian vaccination rate (65%) in year 10 of the program. Results The incremental cost-effectiveness of the vaccination program relative to no program was $45,161 per quality-adjusted life-year gained in the base-case analysis. In probabilistic sensitivity analyses, the likelihood of the vaccination program being cost-effective at willingness-to-pay thresholds of $50,000 and $100,000 per quality-adjusted life-year gained was 64% and 100%, respectively. Conclusions In a conservative analysis biased against the vaccination program, a national vaccination intervention program to ameliorate racial disparities in pneumococcal vaccination would be cost-effective. PMID:23538183
NASA Astrophysics Data System (ADS)
Kirchhoff, Michael
2018-03-01
Ramstead MJD, Badcock PB, Friston KJ. Answering Schrödinger's question: A free-energy formulation. Phys Life Rev 2018. https://doi.org/10.1016/j.plrev.2017.09.001 [this issue] motivate a multiscale characterisation of living systems in terms of hierarchically structured Markov blankets - a view of living systems as comprised of Markov blankets of Markov blankets [1-4]. It is effectively a treatment of what life is and how it is realised, cast in terms of how Markov blankets of living systems self-organise via active inference - a corollary of the free energy principle [5-7].
Modeling Hubble Space Telescope flight data by Q-Markov cover identification
NASA Technical Reports Server (NTRS)
Liu, K.; Skelton, R. E.; Sharkey, J. P.
1992-01-01
A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.
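The Markov parameters that a Q-Markov cover matches are M_0 = D and M_k = C A^(k-1) B for k ≥ 1. A small sketch computing them for a lightly damped second-order mode; the state-space numbers are illustrative, not the identified Hubble flight-data model.

```python
import numpy as np

def markov_parameters(A, B, C, D, q):
    """First q Markov parameters of a discrete-time state-space model:
    M_0 = D, M_k = C A^(k-1) B. A Q-Markov cover reproduces these (together
    with the covariance parameters) of the identified system."""
    params = [D]
    Ak_B = B
    for _ in range(q - 1):
        params.append(C @ Ak_B)
        Ak_B = A @ Ak_B
    return params

# A lightly damped oscillatory mode, loosely in the spirit of a flexible
# structure; all numbers are illustrative.
A = np.array([[0.99, 0.10], [-0.10, 0.99]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
for k, M in enumerate(markov_parameters(A, B, C, D, 5)):
    print(f"M_{k} =", M.ravel())
```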
Optimizing model: insemination, replacement, seasonal production, and cash flow.
DeLorenzo, M A; Spreen, T H; Bryan, G R; Beede, D K; Van Arendonk, J A
1992-03-01
Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States were specified by class of parity (n = 12), production level (n = 15), month of calving (n = 12), month of lactation (n = 16), and days open (n = 7). The methodology optimized decisions based on the net present value of an individual cow and all replacements over a 20-yr decision horizon. The length of the decision horizon was chosen to ensure that optimal policies were determined for an infinite planning horizon. Optimization took 286 s of central processing unit time. The final probability transition matrix was determined, in part, by the optimal policy. It was estimated iteratively to determine the post-optimization steady-state herd structure, milk production, replacement, feed inputs and costs, and the resulting cash flow on a calendar-month and annual basis if optimal policies were implemented. Implementation of the model included seasonal effects on lactation curve shapes, estrus detection rates, pregnancy rates, milk prices, replacement costs, cull prices, and genetic progress. Other inputs included calf values, values of dietary TDN and CP per kilogram, and discount rate. Stochastic elements included conception (and, thus, subsequent freshening), cow milk production level within herd, and survival. Validation of the optimized solutions was performed by a separate simulation model, which implemented the policies on a simulated herd and described herd dynamics during the transition to the optimized structure.
Reinforcement Learning Based Web Service Compositions for Mobile Business
NASA Astrophysics Data System (ADS)
Zhou, Juan; Chen, Shouming
In this paper, we propose a new solution to Reactive Web Service Composition, via modeling with Reinforcement Learning, and introducing modified (alterable) QoS variables into the model as elements in the Markov Decision Process tuple. Moreover, we give an example of Reactive-WSC-based mobile banking to demonstrate the intrinsic capability of the solution to obtain an optimized service composition, characterized by (alterable) target QoS variable sets with optimized values. Consequently, we conclude that the solution has decent potential for boosting customer experience and quality of service in Web Services, and in applications across the whole electronic commerce and business sector.
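A tabular Q-learning sketch of the idea: composition stages as states, candidate services as actions, QoS-like values standing in for rewards. The linear workflow and all numbers are hypothetical, not the paper's mobile-banking example.

```python
import numpy as np

# Tabular Q-learning for service composition: states are composition stages,
# actions are candidate services; rewards stand in for (alterable) QoS values.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
R = rng.uniform(0, 1, size=(n_states, n_actions))   # hypothetical QoS rewards
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1                   # learning rate, discount, exploration

for episode in range(2000):
    s = 0
    while s < n_states - 1:                          # last stage is terminal
        a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
        s_next = s + 1                               # linear workflow of stages
        target = R[s, a] + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])        # temporal-difference update
        s = s_next

print("learned service choice per stage:", Q.argmax(axis=1)[:-1])
```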
An approximate dynamic programming approach to resource management in multi-cloud scenarios
NASA Astrophysics Data System (ADS)
Pietrabissa, Antonio; Priscoli, Francesco Delli; Di Giorgio, Alessandro; Giuseppi, Alessandro; Panfili, Martina; Suraci, Vincenzo
2017-03-01
The programmability and the virtualisation of network resources are crucial to deploy scalable Information and Communications Technology (ICT) services. The increasing demand of cloud services, mainly devoted to the storage and computing, requires a new functional element, the Cloud Management Broker (CMB), aimed at managing multiple cloud resources to meet the customers' requirements and, simultaneously, to optimise their usage. This paper proposes a multi-cloud resource allocation algorithm that manages the resource requests with the aim of maximising the CMB revenue over time. The algorithm is based on Markov decision process modelling and relies on reinforcement learning techniques to find online an approximate solution.
Control Improvement for Jump-Diffusion Processes with Applications to Finance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baeuerle, Nicole, E-mail: nicole.baeuerle@kit.edu; Rieder, Ulrich, E-mail: ulrich.rieder@uni-ulm.de
2012-02-15
We consider stochastic control problems with jump-diffusion processes and formulate an algorithm which produces, starting from a given admissible control π, a new control with a better value. If no improvement is possible, then π is optimal. Such an algorithm is well known for discrete-time Markov Decision Problems under the name Howard's policy improvement algorithm. The idea can be traced back to Bellman. Here we show, with the help of martingale techniques, that such an algorithm can also be formulated for stochastic control problems with jump-diffusion processes. As an application we derive some interesting results in financial portfolio optimization.
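Howard's policy improvement algorithm, which the paper lifts to jump-diffusions via martingale arguments, is easiest to see in its original discrete-time MDP form: evaluate the current policy exactly, improve greedily, and stop when no improvement is possible. A sketch with made-up transition and reward numbers:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Howard's policy improvement for a discrete-time MDP: evaluate the
    current policy exactly, then improve greedily until the policy is fixed."""
    n_s, n_a = R.shape
    policy = np.zeros(n_s, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s], s] for s in range(n_s)])
        r_pi = R[np.arange(n_s), policy]
        v = np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)
        # Policy improvement: greedy action against the evaluated values.
        Q = np.array([[R[s, a] + gamma * P[a, s] @ v for a in range(n_a)]
                      for s in range(n_s)])
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v       # no improvement possible: policy is optimal
        policy = new_policy

# Tiny illustrative problem (3 states, 2 actions); numbers are made up.
P = np.array([  # P[a, s, s']
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, 1.0], [0.0, 2.0], [1.0, 0.0]])  # R[s, a]
print(policy_iteration(P, R))
```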
Representing and Learning Complex Object Interactions
Zhou, Yilun; Konidaris, George
2017-01-01
We present a framework for representing scenarios with complex object interactions, in which a robot cannot directly interact with the object it wishes to control, but must instead do so via intermediate objects. For example, a robot learning to drive a car can only indirectly change its pose, by rotating the steering wheel. We formalize such complex interactions as chains of Markov decision processes and show how they can be learned and used for control. We describe two systems in which a robot uses learning from demonstration to achieve indirect control: playing a computer game, and using a hot water dispenser to heat a cup of water. PMID:28593181
NASA Astrophysics Data System (ADS)
Malafeyev, O. A.; Nemnyugin, S. A.; Rylow, D.; Kolpak, E. P.; Awasthi, Achal
2017-07-01
The corruption dynamics is analyzed by means of a lattice model which is similar to the three-dimensional Ising model. Agents placed at the nodes of the corrupt network periodically choose whether or not to perform an act of corruption, at a gain or loss, making decisions based on the process history. The gain value and its dynamics are defined by means of Markov stochastic process modelling, with parameters established in accordance with the influence of external and individual factors on the agent's gain. The model is formulated algorithmically and is studied by means of computer simulation. Numerical results are obtained which demonstrate the asymptotic behaviour of the corruption network under various conditions.
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (SUN VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
ASSIST - THE ABSTRACT SEMI-MARKOV SPECIFICATION INTERFACE TO THE SURE TOOL PROGRAM (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1994-01-01
ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, is an interface that will enable reliability engineers to accurately design large semi-Markov models. The user describes the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. The abstract language allows efficient description of large, complex systems; a one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. Instead of listing the individual states of the Markov model, reliability engineers can specify the rules governing the behavior of a system, and these are used to automatically generate the model. ASSIST reads an input file describing the failure behavior of a system in an abstract language and generates a Markov model in the format needed for input to SURE, the semi-Markov Unreliability Range Evaluator program, and PAWS/STEM, the Pade Approximation with Scaling program and Scaled Taylor Exponential Matrix. A Markov model consists of a number of system states and transitions between them. Each state in the model represents a possible state of the system in terms of which components have failed, which ones have been removed, etc. Within ASSIST, each state is defined by a state vector, where each element of the vector takes on an integer value within a defined range. An element can represent any meaningful characteristic, such as the number of working components of one type in the system, or the number of faulty components of another type in use. Statements representing transitions between states in the model have three parts: a condition expression, a destination expression, and a rate expression. The first expression is a Boolean expression describing the state space variable values of states for which the transition is valid. The second expression defines the destination state for the transition in terms of state space variable values. The third expression defines the distribution of elapsed time for the transition. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. ASSIST was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. 
The VMS version (LAR14193) is written in C-language and can be compiled with the VAX C compiler. The standard distribution medium for the VMS version of ASSIST is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun version (LAR14923) is written in ANSI C-language. An ANSI compliant C compiler is required in order to compile this package. The standard distribution medium for the Sun version of ASSIST is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the documentation in PostScript, TeX, and DVI formats are provided on the distribution medium. (The VMS distribution lacks the .DVI format files, however.) ASSIST was developed in 1986 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. SunOS, Sun3, and Sun4 are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
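The three-part transition statements described above map naturally onto a small rule-based generator. The following is a minimal Python sketch of that idea, not ASSIST itself: states are integer vectors, each rule pairs a condition with a destination and a rate label, and breadth-first expansion produces the reachable model. The two-component system and the rate names LAMBDA and DELTA are invented for illustration.

```python
# Minimal sketch (not ASSIST itself) of rule-based Markov-model generation:
# states are integer vectors, and each rule has a condition, a destination,
# and a rate, mirroring the three-part ASSIST transition statements.
from collections import deque

# Hypothetical system: (working units, spares); names are illustrative only.
START = (3, 2)

RULES = [
    # (condition, destination, rate label)
    (lambda s: s[0] > 0,              lambda s: (s[0] - 1, s[1]),     "LAMBDA"),  # active unit fails
    (lambda s: s[0] < 3 and s[1] > 0, lambda s: (s[0] + 1, s[1] - 1), "DELTA"),   # spare switched in
]

def generate(start, rules):
    """Breadth-first expansion of the reachable state space."""
    states, transitions = {start}, []
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for cond, dest, rate in rules:
            if cond(s):
                d = dest(s)
                transitions.append((s, d, rate))
                if d not in states:
                    states.add(d)
                    queue.append(d)
    return states, transitions

states, transitions = generate(START, RULES)
for src, dst, rate in transitions:
    print(f"{src} -> {dst} BY {rate}")   # SURE-like transition listing
```

Even these two rules enumerate a lattice of states automatically; scaling the state vector up is what turns a one-page description into a model with thousands of states and transitions.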
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
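For readers unfamiliar with the exchange step that the sampler extends, here is a hedged Python sketch of the plain (non-adaptive) exchange algorithm on a toy one-parameter model p(y|t) proportional to exp(t*s(y)), with s(y) the edge count, chosen so that the auxiliary network can be drawn exactly; the adaptive exchange sampler replaces this exact draw with importance resampling from a parallel auxiliary chain. All data and settings below are invented.

```python
# Toy exchange algorithm: the intractable normalizing constants cancel in the
# acceptance ratio, which is the whole point of the exchange construction.
import numpy as np

rng = np.random.default_rng(0)
n_edges = 45                        # e.g. a 10-node undirected graph
y_obs = rng.random(n_edges) < 0.3   # synthetic "observed" network (edge indicators)
s_obs = y_obs.sum()

def sample_network(t):
    """Exact draw: under this toy model, edges are independent Bernoulli(sigmoid(t))."""
    p = 1.0 / (1.0 + np.exp(-t))
    return rng.random(n_edges) < p

theta, trace = 0.0, []
for _ in range(5000):
    theta_prop = theta + rng.normal(0, 0.5)
    y_aux = sample_network(theta_prop)      # auxiliary network at the proposed value
    # Exchange ratio: (theta' - theta) * (s_obs - s_aux); Z(theta) cancels exactly.
    log_alpha = (theta_prop - theta) * (s_obs - y_aux.sum())
    if np.log(rng.random()) < log_alpha:    # flat prior, symmetric proposal
        theta = theta_prop
    trace.append(theta)

print("posterior mean of theta:", np.mean(trace[1000:]))
```

For a real exponential random graph model the auxiliary draw cannot be exact, which is precisely the gap the adaptive importance-sampling construction addresses.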
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Repeated calculation of the likelihood, as required in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space leads to adverse learning results. Therefore, we mathematically investigate a condition under which the feature map has an asymptotically equivalent convergence point of the estimated parameters; such a map is referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
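The dynamic-programming routine whose cost motivates this work is, for hidden Markov models, the forward algorithm. A minimal sketch follows, with scaling to avoid underflow on long sequences; the model matrices are illustrative only.

```python
# Forward algorithm: dynamic-programming likelihood for a discrete HMM.
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """log p(obs) for initial dist pi, transition matrix A[i,j], emissions B[i,o]."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()                 # scale to avoid underflow
        loglik += np.log(c)
        alpha /= c
    return loglik + np.log(alpha.sum())  # final alpha sums to 1 after scaling

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(hmm_loglik([0, 1, 1, 0], pi, A, B))
```

Each likelihood evaluation costs O(T * K^2) for T observations and K states; truncating the data length, as the vicarious-map demonstration does, attacks the factor T directly.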
Kepler Uniform Modeling of KOIs: MCMC Notes for Data Release 25
NASA Technical Reports Server (NTRS)
Hoffman, Kelsey L.; Rowe, Jason F.
2017-01-01
This document describes data products related to the reported planetary parameters and uncertainties for the Kepler Objects of Interest (KOIs) based on a Markov-Chain-Monte-Carlo (MCMC) analysis. Reported parameters, uncertainties and data products can be found at the NASA Exoplanet Archive. The codes used for this data analysis are available on the GitHub website (Rowe 2016). The relevant paper for details of the calculations is Rowe et al. (2015). The main differences between the model fits discussed here and those in the DR24 catalogue are that the DR25 light curves were used in the analysis, our processing of the MAST light curves took into account different data flags, the number of chains calculated was doubled to 200 000, and the parameters which are reported are based on a damped least-squares fit, instead of the median value from the Markov chain or the chain with the lowest χ2 as reported in the past.
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
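The computation both programs approximate can be stated in a few lines: for a homogeneous Markov model with generator matrix Q, the state-probability vector at time t is p(0) times the matrix exponential of Qt. In the hedged sketch below, scipy's general-purpose expm stands in for the Pade-with-scaling and scaled-Taylor implementations, and a three-state model with a fast reconfiguration rate and a slow failure rate illustrates the stiffness mentioned above; all rates are invented.

```python
# Transient solution of a small, stiff Markov reliability model: p(t) = p(0) @ expm(Q t).
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-5, 1.0                 # failure vs. reconfiguration rate: a stiff pair
Q = np.array([[-2*lam,  2*lam,     0.0],
              [    mu, -(mu+lam),  lam],
              [   0.0,  0.0,       0.0]])  # state 2 is the absorbing death state

p0 = np.array([1.0, 0.0, 0.0])
t = 10.0                            # mission time
p_t = p0 @ expm(Q * t)
print("P(system failure by t):", p_t[2])
```

The five-orders-of-magnitude gap between lam and mu is exactly what makes naive ODE integration of such models numerically stiff.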
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
2013-01-01
Background Proper evaluation of new diagnostic tests is required to reduce overutilization and to limit potential negative health effects and costs related to testing. A decision analytic modelling approach may be worthwhile when a diagnostic randomized controlled trial is not feasible. We demonstrate this by assessing the cost-effectiveness of modified transesophageal echocardiography (TEE) compared with manual palpation for the detection of atherosclerosis in the ascending aorta. Methods Based on a previous diagnostic accuracy study, actual Dutch reimbursement data, and evidence from the literature, we developed a Markov decision analytic model. Cost-effectiveness of modified TEE was assessed over a lifetime horizon and from a health care perspective. Prevalence rates of atherosclerosis were age-dependent, and both low and high rates were applied. Probabilistic sensitivity analysis was applied. Results The model synthesized all available evidence on the risk of stroke in cardiac surgery patients. The modified TEE strategy consistently resulted in more adapted surgical procedures and, hence, a lower risk of stroke and a slightly higher number of life-years. With 10% prevalence of atherosclerosis the incremental cost-effectiveness ratio was €4,651 and €481 per quality-adjusted life year in 55-year-old men and women, respectively. In all patients aged 65 years or older the modified TEE strategy was cost saving and resulted in additional health benefits. Conclusions Decision analytic modelling to assess the cost-effectiveness of a new diagnostic test based on characteristics, costs and effects of the test itself and of the subsequent treatment options is both feasible and valuable. Our case study on modified TEE suggests that it may reduce the risk of stroke in cardiac surgery patients older than 55 years at acceptable cost-effectiveness levels. PMID:23368927
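To make the modelling style concrete, here is a deliberately simplified Markov cohort sketch in Python: two strategies differ in an upfront test cost and in the per-cycle stroke risk, and discounted costs and QALYs accumulate over a lifetime horizon to yield an incremental cost-effectiveness ratio. Every state, probability, and cost below is invented for illustration and is not taken from the study.

```python
# Three-state Markov cohort model: well / post-stroke / dead, yearly cycles.
import numpy as np

utility = np.array([0.85, 0.55, 0.0])     # QALY weights per cycle (assumed)
cost    = np.array([100.0, 2000.0, 0.0])  # cost per cycle (assumed)

def run(p_stroke, upfront_cost, cycles=40, disc=0.035):
    P = np.array([[1 - p_stroke - 0.02, p_stroke, 0.02],
                  [0.0, 0.93, 0.07],
                  [0.0, 0.0, 1.0]])       # rows sum to 1
    occ = np.array([1.0, 0.0, 0.0])       # whole cohort starts well
    qalys, costs = 0.0, upfront_cost
    for k in range(cycles):
        d = (1 + disc) ** -k              # discount factor
        qalys += d * occ @ utility
        costs += d * occ @ cost
        occ = occ @ P
    return qalys, costs

q_tee, c_tee = run(p_stroke=0.010, upfront_cost=500.0)  # "modified TEE" strategy
q_std, c_std = run(p_stroke=0.015, upfront_cost=0.0)    # "manual palpation"
# A negative ICER here means the better strategy is also cheaper (dominant).
print("ICER:", (c_tee - c_std) / (q_tee - q_std), "per QALY")
```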
Henderson, Kirsten A.; Anand, Madhur; Bauch, Chris T.
2013-01-01
Mitigating the negative impacts of declining worldwide forest cover remains a significant socio-ecological challenge, due to the dominant role of human decision-making. Here we use a Markov chain model of land-use dynamics to examine the impact of governance on forest cover in a region. Each land parcel can be either forested or barren (deforested), and landowners decide whether to deforest their parcel according to perceived value (utility). We focus on three governance strategies: yearly incentive for conservation, one-time penalty for deforestation and one-time incentive for reforestation. The incentive and penalty are incorporated into the expected utility of forested land, which decreases the net gain of deforestation. By analyzing the equilibrium and stability of the landscape dynamics, we observe four possible outcomes: a stationary-forested landscape, a stationary-deforested landscape, an unstable landscape fluctuating near the equilibrium, and a cyclic-forested landscape induced by synchronized deforestation. We find that the two incentive-based strategies often result in highly fluctuating forest cover over decadal time scales or longer, and in a few cases, reforestation incentives actually decrease the average forest cover. In contrast, a penalty for deforestation results in the stable persistence of forest cover (generally >30%). The idea that larger conservation incentives will always yield higher and more stable forest cover is not supported in our findings. The decision to deforest is influenced by more than a simple, “rational” cost-benefit analysis: social learning and myopic, stochastic decision-making also have important effects. We conclude that design of incentive programs may need to account for potential counter-productive long-term effects due to behavioural feedbacks. PMID:24204942
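A stripped-down version of such a parcel-level simulation fits in a few lines. In this hedged Python sketch, each parcel compares an invented utility of deforesting (timber gain minus a one-time penalty) with an invented utility of conserving that rises as forest becomes scarce, plus decision noise; none of the parameters are the paper's.

```python
# Toy land-use dynamic: parcels flip between forested (1) and barren (0)
# according to a noisy utility comparison with a one-time deforestation penalty.
import numpy as np

rng = np.random.default_rng(1)
n_parcels, steps = 1000, 200
forested = rng.random(n_parcels) < 0.5

timber_gain, conservation_value, penalty = 1.0, 0.6, 0.5
noise = 0.3                                  # myopic, stochastic decision-making

cover = []
for _ in range(steps):
    scarcity = 1.0 - forested.mean()         # forest is valued more when rare
    u_deforest = timber_gain - penalty + noise * rng.normal(size=n_parcels)
    u_keep = conservation_value * (1 + scarcity) + noise * rng.normal(size=n_parcels)
    deforest = forested & (u_deforest > u_keep)
    regrow = ~forested & (rng.random(n_parcels) < 0.02)  # slow natural regrowth
    forested = (forested & ~deforest) | regrow
    cover.append(forested.mean())

print("mean forest cover over last 50 steps:", np.mean(cover[-50:]))
```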
Risk assessment by dynamic representation of vulnerability, exploitation, and impact
NASA Astrophysics Data System (ADS)
Cam, Hasan
2015-05-01
Assessing and quantifying cyber risk accurately in real-time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing cyber risk of a network in real-time by representing dynamically its vulnerabilities, exploitations, and impact using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real-time the exploit likelihood and impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed dynamically using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used in determining exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.
Extreme event statistics in a drifting Markov chain
NASA Astrophysics Data System (ADS)
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces, we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present a detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
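The drift-free baseline the authors test against is easy to reproduce numerically: for any continuous, symmetric step distribution, the Sparre Andersen theorem gives the distribution-free survival probability q(n) = C(2n, n)/4^n for a random walk to stay positive over n steps, and a drift breaks this universality. A small simulation sketch (synthetic exponential steps, invented drift values):

```python
# Sparre Andersen check: survival probability of a random walk staying positive.
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def survival_prob(drift, n=20, trials=20000):
    # Continuous, symmetric steps (exponential magnitude, random sign) plus drift.
    steps = rng.exponential(size=(trials, n)) * rng.choice([-1, 1], size=(trials, n)) + drift
    walks = steps.cumsum(axis=1)
    return (walks > 0).all(axis=1).mean()    # fraction positive for all n steps

n = 20
print("Sparre Andersen prediction:", comb(2 * n, n) / 4**n)
print("simulated, zero drift     :", survival_prob(0.0))
print("simulated, small drift    :", survival_prob(0.05))
```

With zero drift the simulated value matches the combinatorial prediction regardless of the step distribution; even the small positive drift visibly shifts it, which is the sensitivity the paper exploits.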
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
Block-accelerated aggregation multigrid for Markov chains with application to PageRank problems
NASA Astrophysics Data System (ADS)
Shen, Zhao-Li; Huang, Ting-Zhu; Carpentieri, Bruno; Wen, Chun; Gu, Xian-Ming
2018-06-01
Recently, the adaptive algebraic aggregation multigrid method has been proposed for computing stationary distributions of Markov chains. This method updates the aggregates on every iterative cycle to keep the coarse-level corrections highly accurate. Accordingly, its fast convergence rate is well guaranteed, but often a large proportion of the runtime is consumed by the aggregation processes. In this paper, we show that the aggregates on each level of this method can be utilized to transform the probability equation of that level into a block linear system. We then propose a Block-Jacobi relaxation that deals with the block system on each level to smooth the error. Some theoretical analysis of this technique is presented; it is also adapted to solve PageRank problems. The purpose of this technique is to accelerate the adaptive aggregation multigrid method and its variants for solving Markov chains and PageRank problems. It also attempts to shed some light on new ways of making aggregation processes more cost-effective in aggregation multigrid methods. Numerical experiments are presented to illustrate the effectiveness of this technique.
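As a baseline for what the multigrid machinery accelerates, the classical power-iteration PageRank solver is sketched below in self-contained Python; the link matrix is a toy example.

```python
# Plain power-iteration PageRank: the baseline that aggregation-multigrid
# methods and Block-Jacobi relaxation aim to beat on large chains.
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """A[i, j] = 1 if page i links to page j; returns the stationary vector."""
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)  # dangling rows -> uniform
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * (x @ P) + (1 - alpha) / n           # damped update
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
print(pagerank(A))
```

Power iteration converges at a rate governed by the damping factor alpha; the multigrid approach attacks the many-iterations regime that arises when the chain is large and slowly mixing.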
LECTURES ON GAME THEORY, MARKOV CHAINS, AND RELATED TOPICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, G L
1958-03-01
Notes on nine lectures delivered at Sandia Corporation in August 1957 are given. Part one contains the manuscript of a paper concerning a judging problem. Part two is concerned with finite Markov-chain theory and discusses regular Markov chains, absorbing Markov chains, the classification of states, application to the Leontief input-output model, and semimartingales. Part three contains notes on game theory and covers matrix games, the effect of psychological attitudes on the outcomes of games, extensive games, and matrix theory applied to mathematical economics. (auth)
Markov chains: computing limit existence and approximations with DNA.
Cardona, M; Colomer, M A; Conde, J; Miret, J M; Miró, J; Zaragoza, A
2005-09-01
We present two algorithms to perform computations over Markov chains. The first one determines whether the sequence of powers of the transition matrix of a Markov chain converges or not to a limit matrix. If it does converge, the second algorithm enables us to estimate this limit. The combination of these algorithms allows the computation of a limit using DNA computing. In this sense, we have encoded the states and the transition probabilities using strands of DNA for generating paths of the Markov chain.
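On a conventional computer, the pair of questions the two DNA algorithms address can be phrased in a few lines: test whether the powers of the transition matrix P converge and, if so, return the limit. A minimal sketch comparing successive powers (the tolerance and the example chains are illustrative):

```python
# Does P^n converge as n grows? Compare successive powers until they agree.
import numpy as np

def power_limit(P, max_iter=10000, tol=1e-12):
    Q = P.copy()
    for _ in range(max_iter):
        Q_next = Q @ P
        if np.abs(Q_next - Q).max() < tol:   # P^(n+1) ~ P^n: limit reached
            return Q_next
        Q = Q_next
    return None                              # no limit (e.g. a periodic chain)

P = np.array([[0.5, 0.5], [0.25, 0.75]])
print(power_limit(P))                        # rows tend to the stationary distribution
print(power_limit(np.array([[0.0, 1.0], [1.0, 0.0]])))  # periodic chain: None
```

The DNA construction performs the analogous computation by encoding states and transition probabilities in strands and generating sample paths of the chain.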
Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research
Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.
2002-01-01
Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
Tracking problem solving by multivariate pattern analysis and Hidden Markov Model algorithms.
Anderson, John R
2012-03-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application involves using fMRI activity to track what students are doing as they solve a sequence of algebra problems. The methodology achieves considerable accuracy at determining both what problem-solving step the students are taking and whether they are performing that step correctly. The second "model discovery" application involves using statistical model evaluation to determine how many substates are involved in performing a step of algebraic problem solving. This research indicates that different steps involve different numbers of substates and these substates are associated with different fluency in algebra problem solving. Copyright © 2011 Elsevier Ltd. All rights reserved.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-01-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
Takayasu, Hideki; Takayasu, Misako
2017-01-01
We extend the concept of statistical symmetry as the invariance of a probability distribution under transformation to analyze binary sign time series of price differences from the foreign exchange market. We model segments of the sign time series as Markov sequences and apply a local hypothesis test to evaluate the symmetries of independence and time reversion in different periods of the market. For the test, we derive the probability that a binary Markov process generates a given set of symbol-pair counts. Using such analysis, we could not only segment the time series according to the different behaviors but also characterize the segments in terms of statistical symmetries. As a particular result, we find that the foreign exchange market is essentially time-reversible, but this symmetry is broken when there is a strong external influence. PMID:28542208
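The flavor of the pair-count analysis can be illustrated with a short sketch: tabulate the four symbol-pair counts of a binary sequence and test the independence symmetry with a standard contingency test. The data below are synthetic i.i.d. signs, not market data, and the chi-squared test stands in for the paper's exact pair-count probabilities.

```python
# Pair-count independence test on a binary sign series.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)
s = (rng.random(10000) < 0.5).astype(int)    # synthetic i.i.d. signs

counts = np.zeros((2, 2))
for a, b in zip(s[:-1], s[1:]):
    counts[a, b] += 1                        # counts[a, b] = #{s_t = a, s_{t+1} = b}

chi2, p_value, _, _ = chi2_contingency(counts)
print("pair counts:\n", counts)
print("independence test p-value:", p_value)  # large p-value: no evidence of memory
```

Running the same tabulation on sliding segments, as the paper does, turns the test into a segmentation device for periods with different statistical behavior.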
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.
[Analysis and modelling of safety culture in a Mexican hospital by Markov chains].
Velázquez-Martínez, J D; Cruz-Suárez, H; Santos-Reyes, J
2016-01-01
The objective of this study was to analyse and model the safety culture with Markov chains, as well as to predict and/or prioritize over time the evolutionary behaviour of the safety culture of the health staff in one Mexican hospital. Markov chain theory was employed in the analysis, and the input data were obtained from a previous study based on the Safety Attitude Questionnaire (CAS-MX-II), considering the following six dimensions: safety climate, teamwork, job satisfaction, recognition of stress, perception of management, and work environment. The results yielded predictions and/or prioritisation of the approximate time for the "slightly agree" response (Likert scale) to become integrated into the evolutionary behaviour of the safety culture for: safety climate (in 12 years; 24.13%); teamwork (8 years; 34.61%); job satisfaction (11 years; 52.41%); recognition of the level of stress (8 years; 19.35%); and perception of management (22 years; 27.87%). For the work environment dimension, the staff's responses did not allow its behaviour to be determined, i.e. no cultural roots could be obtained. In general, it has been shown that there are weaknesses in the safety culture of the hospital, which is an opportunity to suggest changes to the mandatory policies in order to strengthen it. Copyright © 2016 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.
Analyzing Dyadic Sequence Data—Research Questions and Implied Statistical Models
Fuchs, Peter; Nussbeck, Fridtjof W.; Meuwly, Nathalie; Bodenmann, Guy
2017-01-01
The analysis of observational data is often seen as a key approach to understanding dynamics not only in romantic relationships but also in dyadic systems in general. Statistical models for the analysis of dyadic observational data are not commonly known or applied. In this contribution, selected approaches to dyadic sequence data are presented with a focus on models that can be applied when sample sizes are of medium size (N = 100 couples or less). Each statistical model is motivated by a potential underlying research question; the most important model results are presented and linked to that question. The following research questions and models are compared with respect to their applicability using a hands-on approach: (I) Is there an association between a particular behavior by one and the reaction by the other partner? (Pearson correlation); (II) Does the behavior of one member trigger an immediate reaction by the other? (aggregated logit models; multi-level approach; basic Markov model); (III) Is there an underlying dyadic process, which might account for the observed behavior? (hidden Markov model); and (IV) Are there latent groups of dyads, which might account for observing different reaction patterns? (mixture Markov; optimal matching). Finally, recommendations for researchers on choosing among the different models, issues of data handling, and advice on properly applying the statistical models in empirical research are given (e.g., in a new R package “DySeq”). PMID:28443037
Distributions-per-level: a means of testing level detectors and models of patch-clamp data.
Schröder, I; Huth, T; Suitchmezian, V; Jarosik, J; Schnell, S; Hansen, U P
2004-01-01
Level or jump detectors generate the reconstructed time series from a noisy record of patch-clamp current. The reconstructed time series is used to create dwell-time histograms for the kinetic analysis of the Markov model of the investigated ion channel. It is shown here that some additional lines in the software of such a detector can provide a powerful new means of patch-clamp analysis. For each current level that can be recognized by the detector, an array is declared. The new software assigns every data point of the original time series to the array that belongs to the actual state of the detector. From the data sets in these arrays distributions-per-level are generated. Simulated and experimental time series analyzed by Hinkley detectors are used to demonstrate the benefits of these distributions-per-level. First, they can serve as a test of the reliability of jump and level detectors. Second, they can reveal beta distributions as resulting from fast gating that would usually be hidden in the overall amplitude histogram. Probably the most valuable feature is that the malfunctions of the Hinkley detectors turn out to depend on the Markov model of the ion channel. Thus, the errors revealed by the distributions-per-level can be used to distinguish between different putative Markov models of the measured time series.
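The bookkeeping behind distributions-per-level is simple to sketch: route every raw sample to an array indexed by the detector's current level, then histogram each array separately. In the illustrative Python sketch below, a plain threshold stands in for a Hinkley detector and the two-level trace is synthetic.

```python
# Distributions-per-level: per-state amplitude histograms from a detected trace.
import numpy as np

rng = np.random.default_rng(4)
true_levels = np.repeat([0, 1, 0, 1, 0], 2000)                  # two-level channel trace
current = true_levels * 1.0 + rng.normal(0, 0.15, true_levels.size)

detected = (current > 0.5).astype(int)        # stand-in for a jump/level detector

# Assign every raw data point to the array of the detector's current level.
per_level = {lvl: current[detected == lvl] for lvl in (0, 1)}
for lvl, samples in per_level.items():
    hist, edges = np.histogram(samples, bins=50)
    peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    print(f"level {lvl}: n={samples.size}, histogram peak near {peak:.2f}")
```

With a real detector and fast gating, excess weight in the wrong per-level histogram is exactly the kind of detector malfunction, and hidden beta distribution, that the method is designed to expose.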
Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S
2017-08-01
Essentials Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE but at the expense of a higher risk of bleeding, which can also be fatal. Objective An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and was from a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results Base-case model results suggest that treating patients with a predicted 1 year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. Better data are required to predict long-term bleeding risks during therapy in this patient group. © 2017 International Society on Thrombosis and Haemostasis.
Ma, Ning; Yu, Angela J
2015-01-01
Response time (RT) is an oft-reported behavioral measure in psychological and neurocognitive experiments, but the high level of observed trial-to-trial variability in this measure has often limited its usefulness. Here, we combine computational modeling and psychophysics to examine the hypothesis that fluctuations in this noisy measure reflect dynamic computations in human statistical learning and corresponding cognitive adjustments. We present data from the stop-signal task (SST), in which subjects respond to a go stimulus on each trial, unless instructed not to by a subsequent, infrequently presented stop signal. We model across-trial learning of stop signal frequency, P(stop), and stop-signal onset time, SSD (stop-signal delay), with a Bayesian hidden Markov model, and within-trial decision-making with an optimal stochastic control model. The combined model predicts that RT should increase with both expected P(stop) and SSD. The human behavioral data (n = 20) bear out this prediction, showing P(stop) and SSD both to be significant, independent predictors of RT, with P(stop) being a more prominent predictor in 75% of the subjects, and SSD being more prominent in the remaining 25%. The results demonstrate that humans indeed readily internalize environmental statistics and adjust their cognitive/behavioral strategy accordingly, and that subtle patterns in RT variability can serve as a valuable tool for validating models of statistical learning and decision-making. More broadly, the modeling tools presented in this work can be generalized to a large body of behavioral paradigms, in order to extract insights about cognitive and neural processing from apparently quite noisy behavioral measures. We also discuss how this behaviorally validated model can then be used to conduct model-based analysis of neural data, in order to help identify specific brain areas for representing and encoding key computational quantities in learning and decision-making.
Machine learning in sentiment reconstruction of the simulated stock market
NASA Astrophysics Data System (ADS)
Goykhman, Mikhail; Teimouri, Ali
2018-02-01
In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
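The first stage of such a reconstruction is easiest to see when the sentiment states are directly observed: the maximum-likelihood estimate of a Markov transition matrix is simply the row-normalized matrix of transition counts. A short sketch on a synthetic buy/sell chain follows; the true matrix is invented, and a hidden Markov model is needed only once the states are hidden behind prices.

```python
# Maximum-likelihood recovery of a Markov transition matrix from observed states.
import numpy as np

rng = np.random.default_rng(5)
T_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])                 # buy/sell sentiment chain (invented)
states = [0]
for _ in range(20000):
    states.append(rng.choice(2, p=T_true[states[-1]]))

counts = np.zeros((2, 2))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
T_hat = counts / counts.sum(axis=1, keepdims=True)
print(T_hat)   # close to T_true; an HMM handles the unobserved-state case
```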
Markov models in dentistry: application to resin-bonded bridges and review of the literature.
Mahl, Dominik; Marinello, Carlo P; Sendi, Pedram
2012-10-01
Markov models are mathematical models that can be used to describe disease progression and evaluate the cost-effectiveness of medical interventions. Markov models allow projecting clinical and economic outcomes into the future and are therefore frequently used to estimate long-term outcomes of medical interventions. The purpose of this paper is to demonstrate their use in dentistry, using the example of resin-bonded bridges to replace missing teeth, and to review the literature. We used literature data and a four-state Markov model to project long-term outcomes of resin-bonded bridges over a time horizon of 60 years. In addition, the literature was searched in PubMed Medline for research articles on the application of Markov models in dentistry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong; Liang, Faming; Yu, Beibei
2011-11-09
Estimating the uncertainty of hydrologic forecasts is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proved to be powerful tools for quantifying the uncertainty of streamflow forecasts. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, the rainfall multipliers, and the hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and including output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
IoT/M2M wearable-based activity-calorie monitoring and analysis for elders.
Soraya, Sabrina I; Ting-Hui Chiang; Guo-Jing Chan; Yi-Juan Su; Chih-Wei Yi; Yu-Chee Tseng; Yu-Tai Ching
2017-07-01
With the growth of the aging population, elder care has become an important part of the Internet of Things service industry. Activity monitoring is one of the most important services in the field of elderly care. In this paper, we propose a wearable solution that provides caregivers with an activity monitoring service for elders. The system uses wireless signals to estimate calories burned while walking and to perform localization. In addition, it uses wireless motion sensors to recognize physical activities such as drinking and restroom visits. Overall, the system can be divided into four parts: a wearable device, a gateway, a cloud server, and a caregiver's Android application. The algorithms we propose for drinking activity are the Decision Tree (J48) and Random Forest (RF), while for restroom activity we propose a supervised Reduced Error Pruning (REP) Tree and a Variable Order Hidden Markov Model (VOHMM). We developed a prototype Android app that provides a life log recording the activity sequence, which is useful for caregivers monitoring elders' activity and calorie consumption.
Markov switching multinomial logit model: An application to accident-injury severities.
Malyshkina, Nataliya V; Mannering, Fred L
2009-07-01
In this study, two-state Markov switching multinomial logit models are proposed for statistical modeling of accident-injury severities. These models assume Markov switching over time between two unobserved states of roadway safety as a means of accounting for potential unobserved heterogeneity. The states are distinct in the sense that in different states accident-severity outcomes are generated by separate multinomial logit processes. To demonstrate the applicability of the approach, two-state Markov switching multinomial logit models are estimated for severity outcomes of accidents occurring on Indiana roads over a four-year time period. Bayesian inference methods and Markov Chain Monte Carlo (MCMC) simulations are used for model estimation. The estimated Markov switching models result in a superior statistical fit relative to the standard (single-state) multinomial logit models for a number of roadway classes and accident types. It is found that the more frequent state of roadway safety is correlated with better weather conditions and that the less frequent state is correlated with adverse weather conditions.
The generalization ability of SVM classification based on Markov sampling.
Xu, Jie; Tang, Yuan Yan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang; Zhang, Baochang
2015-06-01
Previous work studying the generalization ability of support vector machine classification (SVMC) algorithms is usually based on the assumption of independent and identically distributed samples. In this paper, we go beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples, and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC to generate u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also yields sparser classifiers when the dataset is large relative to the input dimension.
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain that shows clear optimization effects for non-stationary, volatile data sequences. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the studied values lying in each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied to the traditional grey model to generate better-fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with a GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
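For orientation, the GM(1,1) core that Grey-Markov models build on can be sketched compactly: accumulate the series, form background values z(k) as a weighted mean of neighboring accumulated points (the weight 0.5 below is the conventional choice that background-value optimization replaces), fit the grey parameters by least squares, and difference the exponential solution back. The grain figures are invented stand-ins, not Henan data.

```python
# GM(1,1) sketch: fit dx1/dt + a*x1 = b on the accumulated series and invert.
import numpy as np

def gm11(x0, horizon=3, w=0.5):
    x1 = np.cumsum(x0)                                   # accumulated series (AGO)
    z = w * x1[1:] + (1 - w) * x1[:-1]                   # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # least-squares grey parameters
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # solution of the grey ODE
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # inverse AGO
    return np.concatenate([[x0[0]], x0_hat])

grain = np.array([520.0, 548.0, 561.0, 590.0, 605.0])    # invented yearly figures
print(gm11(grain))                                        # fit plus 3-step forecast
```

The Markov layer of a Grey-Markov model then classifies the residuals of this fit into states and corrects the forecast with their transition probabilities.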
Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.
Dixit, Purushottam D; Dill, Ken A
2018-02-13
Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C2M2), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.
El Yazid Boudaren, Mohamed; Monfrini, Emmanuel; Pieczynski, Wojciech; Aïssani, Amar
2014-11-01
Hidden Markov chains have been shown to be inadequate for data modeling under some complex conditions. In this work, we address the problem of statistical modeling of phenomena involving two heterogeneous system states. Such phenomena may arise in biology or communications, among other fields. Namely, we consider that a sequence of meaningful words is to be searched within a whole observation that also contains arbitrary one-by-one symbols. Moreover, a word may be interrupted at some site to be carried on later. Applying plain hidden Markov chains to such data, while ignoring their specificity, yields unsatisfactory results. The Phasic triplet Markov chain, proposed in this paper, overcomes this difficulty by means of an auxiliary underlying process in accordance with the triplet Markov chains theory. Related Bayesian restoration techniques and parameters estimation procedures according to the new model are then described. Finally, to assess the performance of the proposed model against the conventional hidden Markov chain model, experiments are conducted on synthetic and real data.
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
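One standard way to assemble the generator of such a Markov-modulated chain, and the matrix that the diagonalization algorithm targets, is a Kronecker construction: with substitution generator S, rate multipliers r for the classes, and class-switching generator H, the joint generator on (class, state) pairs is kron(diag(r), S) + kron(H, I). A small sketch with invented rates:

```python
# Generator of a Markov-modulated Markov chain (covarion/SSRV-style model).
import numpy as np

S = (np.full((4, 4), 1.0) - 4 * np.eye(4)) / 3.0   # Jukes-Cantor-like generator, rows sum to 0
r = np.array([0.2, 1.8])                           # slow and fast rate classes
H = np.array([[-0.1, 0.1],
              [0.1, -0.1]])                        # switching between rate classes

A = np.kron(np.diag(r), S) + np.kron(H, np.eye(S.shape[0]))
assert np.allclose(A.sum(axis=1), 0.0)             # valid generator: rows sum to 0
print(A.shape)                                     # (8, 8): (class, nucleotide) pairs
```

The joint state space grows as (number of classes) times (number of states), which is why a fast diagonalization of this structured generator matters for amino acid or codon alphabets.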
Soft context clustering for F0 modeling in HMM-based speech synthesis
NASA Astrophysics Data System (ADS)
Khorram, Soheil; Sameti, Hossein; King, Simon
2015-12-01
This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
NASA Astrophysics Data System (ADS)
Yulmetyev, Renat; Demin, Sergey; Emelyanova, Natalya; Gafarov, Fail; Hänggi, Peter
2003-03-01
In this work we develop a new method of diagnosing diseases of the nervous system and a new approach to studying human gait dynamics with the help of the theory of discrete non-Markov random processes (Phys. Rev. E 62 (5) (2000) 6178, Phys. Rev. E 64 (2001) 066132, Phys. Rev. E 65 (2002) 046107, Physica A 303 (2002) 427). The stratification of the phase clouds and the statistical non-Markov effects in the time series of the dynamics of human gait are considered. We carried out a comparative analysis of the data of four age groups of healthy people: children (3 to 10 years old), teenagers (11 to 14 years old), young people (21 to 29 years old), elderly persons (71 to 77 years old), and Parkinson patients. The full data set is analyzed with the help of the phase portraits of the four dynamic variables, the power spectra of the initial time correlation function and the memory functions of junior orders, and the first three points in the spectra of the statistical non-Markov parameter. The results make it possible to determine the subjects' predisposition to disorders of the central nervous system caused by Parkinson's disease. We have found distinct differences between the five groups studied. On this basis we offer a new method of diagnosing and forecasting Parkinson's disease.
Of goals and habits: age-related and individual differences in goal-directed decision-making.
Eppinger, Ben; Walter, Maik; Heekeren, Hauke R; Li, Shu-Chen
2013-01-01
In this study we investigated age-related and individual differences in habitual (model-free) and goal-directed (model-based) decision-making. Specifically, we were interested in three questions. First, does age affect the balance between model-based and model-free decision mechanisms? Second, are these age-related changes due to age differences in working memory (WM) capacity? Third, can model-based behavior be affected by manipulating the distinctiveness of the reward value of choice options? To answer these questions we used a two-stage Markov decision task in combination with computational modeling to dissociate model-based and model-free decision mechanisms. To affect model-based behavior in this task we manipulated the distinctiveness of the reward probabilities of the choice options. The results show age-related deficits in model-based decision-making, which are particularly pronounced when unexpected reward indicates the need for a shift in decision strategy. In this situation younger adults explore the task structure, whereas older adults show perseverative behavior. Consistent with previous findings, these results indicate that older adults have deficits in the representation and updating of expected reward value. We also observed substantial individual differences in model-based behavior. In younger adults, high WM capacity is associated with greater model-based behavior, and this effect is further elevated when reward probabilities are more distinct. In older adults, however, we found no effect of WM capacity. Moreover, age differences in model-based behavior remained statistically significant even after controlling for WM capacity. Thus, factors other than decline in WM, such as deficits in the integration of expected reward value into strategic decisions, may contribute to the observed impairments in model-based behavior in older adults.
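A minimal sketch of the hybrid scheme commonly used to analyze such two-stage tasks may help fix ideas; the parameter names, learning rates, and update rules below are generic assumptions rather than the authors' fitted model. A weight w interpolates between model-based values, computed by planning over an (assumed) transition structure, and model-free temporal-difference values:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

class HybridAgent:
    """Weighted mixture of model-free (TD) and model-based values for a
    two-stage Markov decision task; w = 0 is purely habitual, w = 1 is
    purely goal-directed."""
    def __init__(self, alpha=0.3, beta=5.0, w=0.5):
        self.alpha, self.beta, self.w = alpha, beta, w
        self.q_mf1 = np.zeros(2)        # first-stage model-free values
        self.q2 = np.zeros((2, 2))      # second-stage state x action values
        self.T = np.array([[0.7, 0.3],  # assumed common/rare transition probs
                           [0.3, 0.7]])

    def choose_first(self):
        # Model-based values: plan over transitions to the best 2nd-stage action.
        q_mb = self.T @ self.q2.max(axis=1)
        q = self.w * q_mb + (1.0 - self.w) * self.q_mf1
        return rng.choice(2, p=softmax(q, self.beta))

    def update(self, a1, s2, a2, reward):
        # TD updates at both stages; stage one bootstraps from stage two.
        self.q2[s2, a2] += self.alpha * (reward - self.q2[s2, a2])
        self.q_mf1[a1] += self.alpha * (self.q2[s2, a2] - self.q_mf1[a1])
```

Fitting w per participant is what lets such analyses quantify the balance between habitual and goal-directed control, and hence the age differences reported above.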